# More for Less: Safe Policy Improvement with Stronger Performance Guarantees

Patrick Wienhöft, Marnix Suilen, Thiago D. Simão, Clemens Dubslaff, Christel Baier, Nils Jansen

2023-05-13 · arXiv:2305.07958v1 · [http://arxiv.org/abs/2305.07958v1](http://arxiv.org/abs/2305.07958v1)
###### Abstract
In an offline reinforcement learning setting, the safe policy improvement (SPI) problem aims to improve the performance of a behavior policy according to which sample data has been generated. State-of-the-art approaches to SPI require a high number of samples to provide practical probabilistic guarantees on the improved policy's performance. We present a novel approach to the SPI problem that provides the means to require less data for such guarantees. Specifically, to prove the correctness of these guarantees, we devise implicit transformations on the data set and the underlying environment model that serve as theoretical foundations to derive tighter improvement bounds for SPI. Our empirical evaluation, using the well-established SPI with baseline bootstrapping (SPIBB) algorithm, on standard benchmarks shows that our method indeed significantly reduces the sample complexity of the SPIBB algorithm.
## 1 Introduction
_Markov decision processes_ (MDPs) are the standard model for sequential decision-making under uncertainty [23]. _Reinforcement learning_ (RL) solves such decision-making problems, in particular when the environment dynamics are unknown [22].
In an _online_ RL setting, an agent aims to learn a decision-making policy that maximizes the expected accumulated reward by interacting with the environment and observing feedback, typically in the form of information about the environment state and reward. While online RL has shown great performance in solving hard problems [15, 20], the assumption that the agent can always directly interact with the environment is not always realistic. In real-world applications such as robotics or healthcare, direct interaction can be impractical or dangerous [13]. Furthermore, alternatives such as simulators or digital twins may not be available or insufficiently capture the nuances of the real-world application for reliable learning [24, 1].
_Offline RL_ (or batch RL) [10] mitigates this concern by restricting the agent to only have access to a fixed data set of past interactions. As a common assumption, the data set has been generated by a so-called _behavior policy_. An offline RL algorithm aims to produce a new policy without further interactions with the environment [13]. Methods that can reliably improve the performance of a policy are key in (offline) RL.
_Safe policy improvement_ (SPI) algorithms address this challenge by providing (probabilistic) correctness guarantees on the reliable improvement of policies [14, 23]. These guarantees depend on the size of the data set and usually adhere to a conservative bound on the minimal number of samples required. Since this bound often turns out to be too large for practical applications of SPI, it is instead turned into a hyperparameter (see, _e.g._, [1]). The offline nature of SPI prevents further data collection, which steers the key requirements of SPI in practical settings: (1) exploit the data set as efficiently as possible and (2) compute better policies from smaller data sets.
Contributions. Our contribution provides the theoretical foundations to improve the understanding of SPI algorithms in general. Specifically, in a general SPI setting, we can guarantee a higher performance for significantly less data. Equivalently, with the same amount of data, we can provide significantly stronger performance guarantees. Our main technical contribution is a transformation of the underlying MDP model into a _two-successor MDP_ (2sMDP), along with adjustments to the data set, that allows us to prove these tighter bounds. A 2sMDP is an MDP where each state-action pair has at most two successors, hence limiting the branching factor of an MDP to only two. These transformations preserve (the optimal) performance of policies and are reversible. In the context of SPI these transformations are implicit, _i.e._, they do not have to be computed explicitly. Hence, we
are able to apply standard SPI algorithms such as SPI with baseline bootstrapping (SPIBB) [11], and use our novel improvement guarantees without any algorithmic changes necessary, as also illustrated in Figure 1.
Following the theoretical foundations for the MDP and data set transformations (Section 4), we present two different methods to compute the new performance guarantees (Section 5). The first uses Weissman's bound [21], as also used in, _e.g._, standard SPIBB, while the second uses the inverse incomplete beta function [14]. Our experimental results show a significant reduction in the amount of data required for equal performance guarantees (Section 6). Concretely, where the number of samples required at each state-action pair of standard SPIBB grows linearly in the number of states, for both of our methods it grows only logarithmically in the number of states. We also demonstrate the practical impact on three well-known benchmarks by comparing our methods with standard SPIBB across multiple hyperparameters.
## 2 Preliminaries
Let \(X\) be a finite set. We denote the number of elements in \(X\) by \(|X|\). A discrete probability distribution over \(X\) is a function \(\mu\colon X\to[0,1]\) where \(\sum_{x\in X}\mu(x)=1\). The set of all such distributions is denoted by \(\Delta(X)\). The \(L_{1}\)-distance between two probability distributions \(\mu\) and \(\sigma\) is defined as \(\|\mu-\sigma\|_{1}=\sum_{x\in X}|\mu(x)-\sigma(x)|\). We write \([m:n]\) for the set of natural numbers \(\{m,\dots,n\}\subset\mathbb{N}\), and \(\mathbb{I}[x{=}x^{\prime}]\) for the indicator function, returning \(1\) if \(x=x^{\prime}\) and \(0\) otherwise.
**Definition 1** (MDP).: _A Markov decision process (MDP) is a tuple \(M=(S,A,\iota,P,R,\gamma)\), where \(S\) and \(A\) are finite sets of states and actions, respectively, \(\iota\in S\) an initial state, \(P\colon S\times A\rightharpoonup\Delta(S)\) is the (partial) transition function, \(R\colon S\times A\rightharpoonup[-R_{\text{max}},R_{\text{max}}]\) is the reward function bounded by some known value \(R_{\text{max}}\in\mathbb{R}\), and \(\gamma\in(0,1)\subset\mathbb{R}\) is the discount factor._
We say that an action \(a\) is _enabled_ in state \(s\) if \(P(s,a)\) is defined. We write \(P(s^{\prime}\,|\,s,a)\) for the transition probability \(P(s,a)(s^{\prime})\), and \(Post_{M}(s,a)\) for the set of successor states reachable with positive probability from the state-action pair \((s,a)\) in \(M\). A _path_ in \(M\) is a finite sequence \(\langle s_{1},a_{1},\dots,a_{n-1},s_{n}\rangle\in(S\times A)^{*}\times S\) where \(s_{i}\in Post_{M}(s_{i-1},a_{i-1})\) for all \(i\in[2{:}n]\). The probability of following a path \(\langle s_{1},a_{1},\dots,a_{n-1},s_{n}\rangle\) in the MDP \(M\) given a deterministic sequence of actions is written as \(\mathbb{P}_{M}(\langle s_{1},a_{1},\dots,a_{n-1},s_{n}\rangle)\) and can be computed by repeatedly applying the transition probability function, _i.e._, \(\mathbb{P}_{M}(\langle s_{1},a_{1},\dots,a_{n-1},s_{n}\rangle)=\prod_{i=1}^{n -1}P(s_{i+1}\,|\,s_{i},a_{i})\).
A memoryless stochastic _policy_ for \(M\) is a function \(\pi\colon S\to\Delta(A)\). The set of such policies is \(\Pi\). The goal is to find a policy maximizing the expected discounted reward
\[\max_{\pi\in\Pi}\mathbb{E}\left[\sum_{t=1}^{\infty}\gamma^{t}r_{t}\right],\]
where \(r_{t}\) is the reward the agent collects at time \(t\) when following policy \(\pi\) in the MDP.
We write \(V_{M}^{\pi}(s)\) for the state-based _value function_ of an MDP \(M\) under a policy \(\pi\). Whenever clear from context, we omit \(M\) and \(\pi\). The value of a state \(s\) in an MDP \(M\) is the least solution of the Bellman equation and can be computed by, _e.g._, value iteration [15]. The _performance_\(\rho(\pi,M)\) of a policy \(\pi\) in an MDP \(M\) is defined as the value in the initial state \(\iota\in S\), _i.e._, \(\rho(\pi,M)=V_{M}^{\pi}(\iota)\).
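To make the evaluation of \(\rho(\pi,M)\) concrete, the following is a minimal sketch of policy evaluation by value iteration; the array layout and function names are our own illustrative choices, not part of any cited implementation.

```python
import numpy as np

def policy_value_iteration(P, R, pi, gamma, tol=1e-8):
    """Evaluate a memoryless stochastic policy pi on an MDP by iterating the
    Bellman equation. P has shape (S, A, S) with P[s, a, s'] = P(s' | s, a),
    R has shape (S, A), and pi has shape (S, A) with pi[s, a] = pi(a | s)."""
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)          # Q[s, a] = R(s, a) + gamma * E[V(s')]
        V_new = (pi * Q).sum(axis=1)     # average over actions under pi
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def performance(P, R, pi, gamma, iota=0):
    """rho(pi, M): the value of the initial state iota."""
    return policy_value_iteration(P, R, pi, gamma)[iota]
```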
## 3 Safe Policy Improvement
In _safe policy improvement_ (SPI), we are given an MDP \(M\) with an unknown transition function, a policy \(\pi_{b}\), also known as the _behavior policy_, and a data set \(\mathcal{D}\) of paths in \(M\) under \(\pi_{b}\). The goal is to derive a policy \(\pi_{I}\) from \(\pi_{b}\) and \(\mathcal{D}\) that with high probability \(1{-}\delta\) guarantees to _improve_\(\pi_{b}\) on \(M\) up to an _admissible performance loss_\(\zeta\). That is, the performance of \(\pi_{I}\) is at least that of \(\pi_{b}\) tolerating an error of \(\zeta\):
\[\rho(\pi_{I},M)\geq\rho(\pi_{b},M)-\zeta. \tag{1}\]
### Maximum Likelihood Estimation
We use _maximum likelihood estimation_ (MLE) to derive an MLE-MDP \(\tilde{M}\) from the data set \(\mathcal{D}\). For a path \(\rho\in\mathcal{D}\), let \(\#_{\rho}(s,a)\) and \(\#_{\rho}(s,a,s^{\prime})\) be the number of (sequential) occurrences of a state-action pair \((s,a)\) and a transition \((s,a,s^{\prime})\) in \(\rho\), respectively. We lift this notation to the level of the data set \(\mathcal{D}\) by defining \(\#_{\mathcal{D}}(s,a)=\sum_{\rho\in\mathcal{D}}\#_{\rho}(s,a)\) and \(\#_{\mathcal{D}}(s,a,s^{\prime})=\sum_{\rho\in\mathcal{D}}\#_{\rho}(s,a,s^{\prime})\).
**Definition 2** (MLE-MDP).: _The maximum likelihood MDP (MLE-MDP) of an MDP \(M=(S,A,\iota,P,R,\gamma)\) and data set \(\mathcal{D}\) is a tuple \(\tilde{M}=(S,A,\iota,\tilde{P},R,\gamma)\) where \(S,\iota,A,R,\) and \(\gamma\) are as in \(M\) and the transition function is estimated from \(\mathcal{D}\):_
\[\tilde{P}(s^{\prime}\,|\,s,a)=\frac{\#_{\mathcal{D}}(s,a,s^{\prime})}{\#_{ \mathcal{D}}(s,a)}.\]
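As an illustration, a small sketch of how the counts \(\#_{\mathcal{D}}(s,a)\), \(\#_{\mathcal{D}}(s,a,s^{\prime})\), and the MLE transition function can be computed; encoding a path as an alternating state-action list is an assumption made for this example.

```python
from collections import defaultdict

def estimate_mle_mdp(dataset):
    """Estimate the MLE-MDP transition function from a data set of paths
    (Definition 2). Each path is encoded as an alternating list
    [s1, a1, s2, a2, ..., sn] -- an assumption made for this sketch."""
    n_sa = defaultdict(int)    # counts #_D(s, a)
    n_sas = defaultdict(int)   # counts #_D(s, a, s')
    for path in dataset:
        states, actions = path[0::2], path[1::2]
        for t, a in enumerate(actions):
            n_sa[(states[t], a)] += 1
            n_sas[(states[t], a, states[t + 1])] += 1
    # P~(s' | s, a) = #_D(s, a, s') / #_D(s, a)
    P_mle = {(s, a, s2): c / n_sa[(s, a)] for (s, a, s2), c in n_sas.items()}
    return n_sa, n_sas, P_mle
```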
Figure 1: Overview of our approach. The solid arrows indicate how the full derivation of the improvement guarantees is done, while the dashed line indicates that the transformations are only used in the proofs and that in practice, we can immediately use \(\zeta^{\text{2s}}\) or \(\zeta^{\beta}\) as bounds.
Let \(e\colon S\times A\to\mathbb{R}\) be an error function. We define \(\Xi_{e}^{\tilde{M}}\) as the set of MDPs \(M^{\prime}\) that are close to \(\tilde{M}\), _i.e._, where for all state-action pairs \((s,a)\) the \(L_{1}\)-distance between the transition function \(P^{\prime}(\cdot\,|\,s,a)\) and \(\tilde{P}(\cdot\,|\,s,a)\) is at most \(e(s,a)\):
\[\Xi_{e}^{\tilde{M}}=\{M^{\prime}\mid\forall(s,a).\|P^{\prime}(\cdot\,|\,s,a)- \tilde{P}(\cdot\,|\,s,a)\|_{1}\leq e(s,a)\}.\]
SPI methods aim at defining the error function \(e\) in such a way that \(\Xi_{e}^{\tilde{M}}\) contains the true MDP \(M\) with high probability \(1-\delta\). Computing a policy that is an improvement over the behavior policy for all MDPs in this set then also guarantees an improved policy for the MDP \(M\) with high probability \(1-\delta\)[2]. The amount of data required to achieve a \(\zeta^{\text{SPI}}\)-approximately safe policy improvement with probability \(1-\delta\) (recall Equation (1)) for all state-action pairs has been established by Laroche _et al._[1] as
\[\#_{\mathcal{D}}(s,a)\geq N_{\wedge}^{\text{SPI}}=\frac{8V_{max}^{2}}{(\zeta^{\text{SPI}})^{2}(1-\gamma)^{2}}\log\frac{2|S||A|2^{|S|}}{\delta}. \tag{2}\]
Intuitively, if the data set \(\mathcal{D}\) satisfies the constraint in Equation (2), the MLE-MDP estimated from \(\mathcal{D}\) will be close enough to the unknown MDP \(M\) used to obtain \(\mathcal{D}\). Consequently, a policy with better performance in the MLE-MDP will, with high probability, also have better performance in \(M\).
### SPI with Baseline Bootstrapping
The constraint in Equation (2) has to hold for all state-action pairs in order to guarantee a \(\zeta\)-approximate improvement and thus requires a large data set with good coverage of the entire model. SPI with baseline bootstrapping (SPIBB) [11] relaxes this requirement by only changing the behavior policy in those pairs for which the data set contains enough samples and following the behavior policy otherwise. Specifically, state-action pairs with fewer than \(N_{\wedge}^{\text{SPIBB}}\) samples are collected in a set of _unknown_ state-action pairs \(\mathcal{U}\):
\[\mathcal{U}=\{(s,a)\in S\times A\mid\#_{\mathcal{D}}(s,a)\leq N_{\wedge}^{ \text{SPIBB}}\}.\]
SPIBB then determines an improved policy \(\pi_{I}\) similar to (standard) SPI, except that if \((s,a)\in\mathcal{U}\), \(\pi_{I}\) is required to follow the behavior policy \(\pi_{b}\):
\[\forall(s,a)\in\mathcal{U}.\,\pi_{I}(a\,|\,s)=\pi_{b}(a\,|\,s).\]
Then, \(\pi_{I}\) is an improved policy as in Equation (1), where \(N_{\wedge}^{\text{SPIBB}}\) is treated as a hyperparameter and \(\zeta\) is given by
\[\zeta^{\text{SPIBB}}=\frac{4V_{max}}{1-\gamma}\sqrt{\frac{2}{N_{\wedge}^{\text{SPIBB}}}\log\frac{2|S||A|2^{|S|}}{\delta}}-\rho(\pi_{I},\tilde{M})+\rho(\pi_{b},\tilde{M}).\]
We can rearrange this equation to compute the number of necessary samples for a \(\zeta^{\text{SPIBB}}\)-approximate improvement. As \(\rho(\pi_{I},\tilde{M})\) is only known at runtime, we have to employ an under-approximation \(\rho(\pi_{b},\tilde{M})\) to a priori compute
\[N_{\wedge}^{\text{SPIBB}}=\frac{32V_{max}^{2}}{(\zeta^{\text{SPIBB}})^{2}(1- \gamma)^{2}}\log\frac{2|S||A|2^{|S|}}{\delta}.\]
Thus, the sample size constraint \(N_{\wedge}^{\text{SPIBB}}\) grows approximately linearly in the size of the MDP. The exponent \(|S|\) in the term \(2^{|S|}\) over-approximates the maximum branching factor of the MDP, since, in the worst case, the MDP can be fully connected. In the following section, we present our approach to limit the branching factor of an MDP. After that, we present two methods that exploit this limited branching factor to derive improved sample size constraints for SPI that satisfy the same guarantees.
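The growth of \(N_{\wedge}^{\text{SPIBB}}\) in the state-space size can be made concrete in a few lines; this is a sketch of the formula above, not part of the SPIBB implementation.

```python
import math

def n_wedge_spibb(zeta, delta, V_max, gamma, n_states, n_actions):
    """N_SPIBB = 32 V_max^2 / (zeta^2 (1 - gamma)^2) * log(2|S||A|2^|S| / delta).
    Writing log(2|S||A|2^|S|/delta) = log(2|S||A|/delta) + |S| log 2 makes the
    approximately linear growth in |S| explicit."""
    log_term = math.log(2 * n_states * n_actions / delta) + n_states * math.log(2)
    return 32 * V_max**2 / (zeta**2 * (1 - gamma)**2) * log_term
```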
## 4 Tighter Improvement Bounds for SPI
In the following, we present the technical construction of two-successor MDPs and the data set transformation that allows us to derive the tighter performance guarantees in SPI.
### From MDP to Two-Successor MDP
A _two-successor MDP_ (\(2\)sMDP) is an MDP \(M^{\text{2s}}\) where each state-action pair \((s,a)\) has at most two possible successor states, _i.e._, \(|Post_{M^{\text{2s}}}(s,a)|\leq 2\). To transform an MDP \(M=(S,A,\iota,P,R,\gamma)\) into a \(2\)sMDP, we introduce a set of _auxiliary_ states \(S_{\text{aux}}\) alongside the _main_ states \(S\) of the MDP \(M\). Further, we include an additional action \(\tau\) and adapt the probability and reward functions to obtain a \(2\)sMDP \(M^{\text{2s}}=(S\cup S_{\text{aux}},A\cup\{\tau\},\iota,P^{\text{2s}},R^{\text{2s}},\gamma^{\text{2s}})\).
For readability, we now detail the transformation for a fixed state-action pair \((s,a)\) with three or more successors. The transformation of the whole MDP follows from repeatedly applying this transformation to all such state-action pairs.
We enumerate the successor states of \((s,a)\), _i.e._, \(Post_{M}(s,a)=\{s_{1},\ldots,s_{k}\}\) and define \(p_{i}=P(s_{i}\,|\,s,a)\) for all \(i=1,\ldots,k\). Further, we introduce \(k-2\) auxiliary states \(S_{\text{aux}}^{s,a}=\{x_{2},\ldots,x_{k-1}\}\), each with one available action with a binary outcome. Concretely, the two possible outcomes in state \(x_{i}\) are "move to state \(s_{i}\)" or "move to one of the states \(s_{i+1},\ldots,s_{k}\)" where the latter is represented by moving to an auxiliary state \(x_{i+1}\), unless \(i=k-1\) in which case we immediately move to \(s_{k}\). Formally, the new transition function \(P^{\text{2s}}(\cdot\,|\,s,a)\) is:
\[P^{\text{2s}}(s_{1}\,|\,s,a)=p_{1},\quad P^{\text{2s}}(x_{2}\,|\,s,a)=1-p_{1}.\]
For the transition function \(P^{\text{2s}}\) in the auxiliary states we define a new action \(\tau\) that will be the only enabled action in these states. For \(i>1\), the transition function \(P^{\text{2s}}\) is then
\[P^{\text{2s}}(s_{i}\,|\,x_{i},\tau)=\frac{p_{i}}{1-(p_{1}+\cdots+p_{i-1})},\qquad P^{\text{2s}}(x_{i+1}\,|\,x_{i},\tau)=1-\frac{p_{i}}{1-(p_{1}+\cdots+p_{i-1})}\quad\text{for }i<k-1,\]
\[P^{\text{2s}}(s_{k}\,|\,x_{k-1},\tau)=1-\frac{p_{k-1}}{1-(p_{1}+\cdots+p_{k-2})}=\frac{p_{k}}{p_{k-1}+p_{k}}.\]
An example of this transformation is shown in Figure 2, where Figure 2a shows the original MDP and Figure 2b shows the resulting \(2\)sMDP. As we introduce \(|Post(s,a)|-2\) auxiliary states for a state-action pair \((s,a)\), and \(k\leq|S|\) in the worst case of a fully connected MDP, we can bound the number of states in the \(2\)sMDP by \(|S\cup S_{\text{aux}}|\leq|S|+|S||A|(|S|-2)\leq|S|^{2}|A|\). Note that we did not specify a particular order for the enumeration of the successor states. Further, other transformations utilizing auxiliary states with a different structure (_e.g._, a balanced binary tree) are possible.
However, neither the structure of the auxiliary states, nor the order of successor states changes the total number of states in the 2sMDP, which is the deciding factor for the application of this transformation in the context of SPI algorithms.
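A minimal sketch of the chain construction for a single state-action pair with \(k\geq 3\) successors; the node labels \((s,a)\) and \(x_i\) are illustrative placeholders.

```python
def split_to_two_successor(succs, probs):
    """Chain construction for one state-action pair with k >= 3 successors
    (Section 4.1). Returns binary branches (node, succ1, p1, succ2, p2);
    the node labels "(s,a)" and "x_i" are illustrative placeholders."""
    k = len(succs)
    assert k >= 3 and abs(sum(probs) - 1.0) < 1e-12
    branches, remaining, node = [], 1.0, "(s,a)"
    for i in range(k - 2):
        p_here = probs[i] / remaining        # P^2s(s_{i+1} | current node)
        branches.append((node, succs[i], p_here, f"x{i + 2}", 1.0 - p_here))
        remaining -= probs[i]
        node = f"x{i + 2}"
    # the last auxiliary state x_{k-1} splits between s_{k-1} and s_k
    branches.append((node, succs[-2], probs[-2] / remaining,
                     succs[-1], probs[-1] / remaining))
    return branches
```

Multiplying the branch probabilities along the unique path from \((s,a)\) to \(s_i\) recovers exactly \(p_i\), which is the statement of Theorem 1 below.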
The extension of the reward function is straightforward, i.e., the agent receives the same reward as in the original MDP when in main states and no reward when in auxiliary states:
\[R^{\text{2s}}(s,a)=\begin{cases}R(s,a)&s\in S,a\in A,\\ 0&\text{otherwise}.\end{cases}\]
Any policy \(\pi\) for the MDP \(M\) can be extended into a policy \(\pi^{\text{2s}}\) for the 2sMDP \(M^{\text{2s}}\) by copying \(\pi\) for states in \(S\) and choosing \(\tau\) otherwise:
\[\pi^{\text{2s}}(a\,|\,s)=\begin{cases}\pi(a\,|\,s)&s\in S,\\ \mathbb{I}[a=\tau]&s\in S_{\text{aux}}.\end{cases}\]
Finally, in order to preserve discounting correctly, we introduce a state-dependent discount factor \(\gamma^{\text{2s}}\), such that discounting only occurs in the main states, _i.e._,
\[\gamma^{\text{2s}}(s)\;=\;\begin{cases}\gamma&s\in S,\\ 1&s\in S_{\text{aux}}.\end{cases}\]
This yields the following value function for the 2sMDP \(M^{\text{2s}}\):
\[V_{M^{\text{2s}}}^{\pi^{\text{2s}}}(s)=\sum_{a\in A\cup\{\tau\}}\pi^{\text{2s}}(a\,|\,s)\Big{(}R^{\text{2s}}(s,a)\\ +\gamma^{\text{2s}}(s)\sum_{s^{\prime}\in S\cup S_{\text{aux}}}P^{\text{2s}}(s^{\prime}\,|\,s,a)V_{M^{\text{2s}}}^{\pi^{\text{2s}}}(s^{\prime})\Big{)}.\]
The performance of policy \(\pi^{\text{2s}}\) on \(M^{\text{2s}}\) uses the value function defined above and is denoted by \(\rho^{\text{2s}}(\pi^{\text{2s}},M^{\text{2s}})=V_{M^{\text{2s}}}^{\pi^{\text{2 s}}}(\iota)\), for the initial state \(\iota\in S\). Our transformation described above, together with the adjusted value function, indeed preserves the performance of the original MDP and policy:
**Theorem 1** (Preservation of transition probabilities).: _For every transition \((s,a,s^{\prime})\) in the original MDP \(M\), there exists a unique path \(\langle s,a,x_{2},\tau,\ldots,x_{i},\tau,s^{\prime}\rangle\) in the 2sMDP \(M^{\text{2s}}\) with the same probability. That is,_
\[\mathbb{P}_{M}(\langle s,a,s^{\prime}\rangle)=\mathbb{P}_{M^{\text{2s}}}( \langle s,a,x_{2},\tau,\ldots,x_{i},\tau,s^{\prime}\rangle).\]
Appendix A provides the proofs of all theoretical results.
**Corollary 1** (Preservation of performance).: _Let \(M\) be an MDP, \(\pi\) a policy for \(M\), and \(M^{\text{2s}}\) the two-successor MDP with policy \(\pi^{\text{2s}}\) constructed from \(M\) and \(\pi\) as described above. Then \(\rho(\pi,M)=\rho^{\text{2s}}(\pi^{\text{2s}},M^{\text{2s}})\)._
### Data-set Transformation
In the previous section, we discussed how to transform an MDP into a 2sMDP. However, for SPI we do not have access to the underlying MDP, but only to a data set \(\mathcal{D}\) and the behavior policy \(\pi_{b}\) used to collect this data. In this section, we present a transformation similar to the one from MDP to 2sMDP, but now for the data set \(\mathcal{D}\). This data set transformation allows us to estimate a 2sMDP from the transformed data via maximum likelihood estimation (MLE).
We again assume a data set \(\mathcal{D}\) of observed states and actions of the form \(\mathcal{D}=\langle s_{t},a_{t}\rangle_{t\in[1:m]}\) from an MDP \(M\). We transform the data set \(\mathcal{D}\) into a data set \(\mathcal{D}^{\text{2s}}\) that we use to define a two-successor MLE-MDP \(\tilde{M}^{\text{2s}}\). Each sample \((s_{t},a_{t},s_{t+1})\) in \(\mathcal{D}\) is transformed into a set of samples, each corresponding to a path from \(s_{t}\) to \(s_{t+1}\) via states in \(S_{\text{aux}}\) in \(M^{\text{2s}}\). Importantly, the data set transformation only relies on \(\mathcal{D}\) and not on any additional knowledge about \(M\).
Similar to the notation in Section 3, let \(\#_{\mathcal{D}}(x)\) denote the number of times \(x\) occurs in \(\mathcal{D}\). For each state-action pair \((s,a)\in S\times A\) we denote its successor states in \(\tilde{M}\) as \(Post_{\tilde{M}}(s,a)=\{s_{i}\,|\,\#_{\mathcal{D}}(s,a,s_{i})>0\}\), which are again enumerated by \(\{s_{1},\ldots,s_{k}\}\). As for the MDP transformation, we define \(Post_{\tilde{M}^{\text{2s}}}(s,a)=Post_{\tilde{M}}(s,a)\) if \(k\leq 2\) and \(Post_{\tilde{M}^{\text{2s}}}(s,a)=\{s_{1},x_{2}\}\) otherwise. For auxiliary states \(x_{i}\in S_{\text{aux}}^{s,a}\), we define \(Post_{\tilde{M}^{\text{2s}}}(x_{i},\tau)=\{s_{i},x_{i+1}\}\) for \(i<k-1\) and \(Post_{\tilde{M}^{\text{2s}}}(x_{k-1},\tau)=\{s_{k-1},s_{k}\}\). We then define the transformed data set \(\mathcal{D}^{\text{2s}}\) from \(\mathcal{D}\) for each \(s\in S\) and \(s^{\prime}\in Post_{\tilde{M}^{\text{2s}}}(s,a)\) as follows:
\[\#_{\mathcal{D}^{\text{2s}}}(s,a,s^{\prime})=\begin{cases}\#_{\mathcal{D}}(s,a,s ^{\prime})&s^{\prime}\in S,\\ \sum\limits_{j=2}^{k}\#_{\mathcal{D}}(s,a,s_{j})&s^{\prime}=x_{2}\in S_{\text{ aux}}^{s,a},\\ 0&\text{otherwise}.\end{cases}\]
Further, for each \(x_{i}\in S_{\text{aux}}^{s,a}\) and \(s^{\prime}\in Post_{\tilde{M}^{\text{2s}}}(x_{i},\tau)\):
\[\#_{\mathcal{D}^{\text{2s}}}(x_{i},\tau,s^{\prime})=\begin{cases}\#_{\mathcal{D} }(s,a,s^{\prime})&s^{\prime}\in S,\\ \sum\limits_{j=i+1}^{k}\#_{\mathcal{D}}(s,a,s_{j})&s^{\prime}=x_{i+1}\in S_{ \text{aux}}^{s,a},\\ 0&\text{otherwise}.\end{cases}\]
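The same case distinctions translate directly into code; this sketch transforms the counts of a single state-action pair with \(k\geq 3\) observed successors, again with illustrative placeholder names.

```python
def transform_counts(succs, counts):
    """Count transformation for one state-action pair with k >= 3 observed
    successors (the two displayed case distinctions above). succs enumerates
    Post(s,a) and counts[i] = #_D(s, a, succs[i])."""
    k = len(succs)
    out = {("(s,a)", succs[0]): counts[0],          # #_D2s(s, a, s_1)
           ("(s,a)", "x2"): sum(counts[1:])}        # #_D2s(s, a, x_2)
    for i in range(2, k - 1):                       # auxiliary states x_2 .. x_{k-2}
        out[(f"x{i}", succs[i - 1])] = counts[i - 1]
        out[(f"x{i}", f"x{i + 1}")] = sum(counts[i:])
    out[(f"x{k - 1}", succs[-2])] = counts[-2]      # last split: s_{k-1} vs s_k
    out[(f"x{k - 1}", succs[-1])] = counts[-1]
    return out
```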
The following preservation results for data-generated MLE-MDPs are in line with Theorem 1 and Corollary 1. See Figure 1 for an overview of the relationships between the theorems.
Figure 2: Example for a transformation from an MDP to a 2sMDP, where the single and double arc indicate different actions.
**Theorem 2** (Preservation of estimated transition probabilities).: _Let \(\mathcal{D}\) be a data set and \(\mathcal{D}^{\sf 2s}\) be the data set obtained by the transformation above. Further, let \(\tilde{M}\) and \(\tilde{M}^{\sf 2s}\) be the MLE-MDPs constructed from \(\mathcal{D}\) and \(\mathcal{D}^{\sf 2s}\), respectively. Then for every transition \((s,a,s^{\prime})\) in \(\tilde{M}\) there is a unique path \(\langle s,a,x_{2},\tau,\ldots,x_{i},\tau,s^{\prime}\rangle\) in \(\tilde{M}^{\sf 2s}\) with the same probability:_
\[\mathbb{P}_{\tilde{M}}(\langle s,a,s^{\prime}\rangle)=\mathbb{P}_{\tilde{M}^{ \sf 2s}}(\langle s,a,x_{2},\tau,\ldots,x_{i},\tau,s^{\prime}\rangle).\]
**Corollary 2** (Preservation of estimated performance).: _Let \(\tilde{M}\) and \(\tilde{M}^{\sf 2s}\) be the MLE-MDPs as above, constructed from \(\mathcal{D}\) and \(\mathcal{D}^{\sf 2s}\), respectively. Further, let \(\tilde{\pi}\) be an arbitrary policy on \(\tilde{M}\) and \(\tilde{\pi}^{\sf 2s}\) the policy that extends \(\tilde{\pi}\) for \(\tilde{M}^{\sf 2s}\) by choosing \(\tau\) in all auxiliary states. Then \(\rho(\tilde{\pi},\tilde{M})=\rho^{\sf 2s}(\tilde{\pi}^{\sf 2s},\tilde{M}^{\sf 2s})\)._
We want to emphasize that while \(\mathcal{D}^{\sf 2s}\) may contain more samples than \(\mathcal{D}\), it does not yield any additional information. Rather, instead of viewing each transition sample as an atomic data point, in \(\mathcal{D}^{\sf 2s}\) a transition sample is considered as a multi-step process. E.g., the sample \((s,a,s_{3})\in\mathcal{D}\) would be transformed into the samples \(\{(s,a,x_{2}),(x_{2},\tau,x_{3}),(x_{3},\tau,s_{3})\}\subseteq\mathcal{D}^{\sf 2s}\), which in the construction of the MLE-MDP are used to estimate the probabilities \(P(s^{\prime}\neq s_{1}\,|\,s,a)\), \(P(s^{\prime}\neq s_{2}\,|\,s,a,s^{\prime}\neq s_{1})\), and \(P(s^{\prime}=s_{3}\,|\,s,a,s^{\prime}\neq s_{1},s^{\prime}\neq s_{2})\), respectively. The estimates of these probabilities are mutually independent, but when multiplied give exactly \(P(s_{3}\,|\,s,a)\).
## 5 SPI in Two-Successor MDPs
In this section, we discuss how SPI can benefit from two-successor MDPs as constructed following our new transformation presented in Section 4. The dominating term in the bound \(N\) obtained by [19] is the branching factor of the MDP, which, without any prior information, has to necessarily be over-approximated by \(|S|\) (cf. Section 3.2). We use our transformation above to bound the branching factor to \(k=2\), which allows us to provide stronger guarantees with the same data set (or conversely, require less data to guarantee a set maximum performance loss). Note that bounding the branching factor by any other constant can be achieved by a similar transformation as in Section 4, but \(k=2\) leads to an optimal bound (cf. Appendix A).
Let \(\tilde{M}\) and \(\tilde{M}^{\sf 2s}\) be the MLE-MDPs inferred from data sets \(\mathcal{D}\) and \(\mathcal{D}^{\sf 2s}\), respectively. Further, let \(\pi_{\odot}\) and \(\pi_{\odot}^{\sf 2s}\) denote the optimal policies in these MLE-MDPs, constrained to the set of policies that follow \(\pi_{b}\) for state-action pairs \((s,a)\in\mathcal{U}\). Note that these optimal policies can easily be computed using, _e.g._, standard value iteration. First, we show how to improve the admissible performance loss \(\zeta\) in SPI on two-successor MDPs.
**Lemma 1**.: _Let \(M^{\sf 2s}\) be a two-successor MDP with behavior policy \(\pi_{b}\). Then, \(\pi_{\odot}^{\sf 2s}\) is a \(\zeta\)-approximately safe policy improvement over \(\pi_{b}\) with high probability \(1-\delta\), where:_
\[\zeta=\frac{4V_{max}}{1-\gamma}\sqrt{\frac{2}{N_{\wedge}}\log\frac{8|S||A|}{ \delta}}+\tilde{\rho}^{\sf 2s},\]
_with \(\tilde{\rho}^{\sf 2s}=-\rho^{\sf 2s}(\pi_{\odot}^{\sf 2s},\tilde{M}^{\sf 2s})+ \rho^{\sf 2s}(\pi_{b},\tilde{M}^{\sf 2s})\)._
For a general MDP \(M\), we can utilize this result by first applying the transformation from Section 4.1.
**Theorem 3** (Weissman-based tighter improvement guarantee).: _Let \(M\) be an MDP with behavior policy \(\pi_{b}\). Then, \(\pi_{\odot}\) is a \(\zeta^{\sf 2s}\)-approximate safe policy improvement over \(\pi_{b}\) with high probability \(1-\delta\), where:_
\[\zeta^{\sf 2s}=\frac{4V_{max}}{1-\gamma}\sqrt{\frac{2}{N_{\wedge}^{\sf 2s}}\log \frac{8|S|^{2}|A|^{2}}{\delta}}-\rho(\pi_{\odot},\tilde{M})+\rho(\pi_{b}, \tilde{M}).\]
As for \(\zeta^{\sf SPIBB}\), we can rearrange the equation to compute the number of necessary samples for a \(\zeta^{\sf 2s}\)-safe improvement:
\[N_{\wedge}^{\sf 2s}=\frac{32V_{max}^{2}}{(\zeta^{\sf 2s})^{2}(1-\gamma)^{2}} \log\frac{8|S|^{2}|A|^{2}}{\delta}.\]
Note that \(\zeta^{\sf 2s}\) and \(N_{\wedge}^{\sf 2s}\) only depend on parameters of \(M\) and policy performances on \(\tilde{M}\), which follows from Corollary 2 yielding \(\rho(\pi_{\odot},\tilde{M})=\rho^{\sf 2s}(\pi_{\odot}^{\sf 2s},\tilde{M}^{\sf 2s})\). Hence, it is not necessary to explicitly compute the transformed MLE-MDP \(\tilde{M}^{\sf 2s}\).
### Uncertainty in Two-Successor MDPs
So far, the methods we outlined relied on a bound on the \(L_{1}\)-distance between a probability vector and its estimate based on a number of samples [20]. In this section, we outline a second method that tightens this bound for two-successor MDPs and show how to apply it to obtain a smaller approximation error \(\zeta^{\beta}\) for a fixed \(N_{\wedge}^{\beta}\).
Formally, given a 2sMDP \(M^{\sf 2s}\) and an error tolerance \(\delta\), we construct an error function \(e\colon S\times A\to\mathbb{R}\) that ensures with probability \(1-\delta\) that \(\|P(s,a)-\hat{P}(s,a)\|_{1}\leq e(s,a)\) for all \((s,a)\). To achieve this, we distribute \(\delta\) uniformly over all state-action pairs of the 2sMDP to obtain \(\delta_{T}=\frac{\delta}{|S|^{2}|A|^{2}}\), and bound the estimation error of each (binary) transition individually.
**Lemma 2**.: _Let \(k\sim\text{Bin}(n,p)\) be a random variable according to a binomial distribution. Then the smallest interval \([\underline{p},\overline{p}]\) for which_
\[\mathbb{P}\left(p\in[\underline{p},\overline{p}]\right)\geq 1-\delta_{T}\]
_holds, has size_
\[\overline{p}-\underline{p}\leq 1-2I_{\delta_{T}/2}^{-1}\left(\frac{n}{2}+1,\frac{n}{2}+1\right).\]
Next, we show how to utilize this bound for the interval size in MDPs with arbitrary topology. The core idea is the same as in Theorem 3: We transform the MDP into a 2sMDP and apply the error bound \(e(s,a)=\overline{p}-\underline{p}\) from Lemma 2.
**Theorem 4** (Beta-based tighter improvement guarantee).: _Let \(M\) be an MDP with behavior policy \(\pi_{b}\). Then, \(\pi_{\odot}\) is a \(\zeta^{\beta}\)-approximate safe policy improvement over \(\pi_{b}\) with high probability \(1-\delta\), where:_
\[\zeta^{\beta}=\frac{4V_{max}}{1-\gamma}\left(1-2I_{\delta_{T}/2}^{-1}\left(\frac{N_{\wedge}^{\beta}}{2}+1,\frac{N_{\wedge}^{\beta}}{2}+1\right)\right)+\tilde{\rho},\]
_with \(\delta_{T}=\frac{\delta}{|S|^{2}|A|^{2}}\), and \(\tilde{\rho}=-\rho(\pi_{\odot},\tilde{M})+\rho(\pi_{b},\tilde{M})\)._
There is no closed formula to directly compute \(N_{\wedge}^{\beta}\) for a given \(\zeta^{\beta}\). However, for a given admissible performance loss \(\zeta\), we can perform a binary search to obtain the smallest natural number \(N_{\wedge}^{\beta}\) such that the \(\zeta^{\beta}\) given in Theorem 4 satisfies \(\zeta^{\beta}\leq\zeta\).
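Both \(\zeta^{\beta}\) and this binary search are easy to sketch, since the inverse regularized incomplete beta function is available as `scipy.special.betaincinv`. Setting \(\tilde{\rho}=0\) below is a simplifying assumption for illustration.

```python
from scipy.special import betaincinv

def zeta_beta(N, delta, V_max, gamma, n_states, n_actions, rho_tilde=0.0):
    """zeta^beta from Theorem 4. betaincinv(a, b, y) is SciPy's inverse of the
    regularized incomplete beta function, i.e., I_y^{-1}(a, b).
    rho_tilde = 0 is a simplifying assumption for this sketch."""
    delta_T = delta / (n_states**2 * n_actions**2)
    e = 1.0 - 2.0 * betaincinv(N / 2 + 1, N / 2 + 1, delta_T / 2)
    return 4 * V_max / (1 - gamma) * e + rho_tilde

def n_wedge_beta(zeta, delta, V_max, gamma, n_states, n_actions):
    """Binary search for the smallest even N with zeta_beta(N) <= zeta;
    zeta_beta decreases in N, and no closed formula exists."""
    f = lambda m: zeta_beta(2 * m, delta, V_max, gamma, n_states, n_actions)
    hi = 1
    while f(hi) > zeta:        # double until the bound becomes feasible
        hi *= 2
    lo = max(hi // 2, 1)
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid) <= zeta:
            hi = mid
        else:
            lo = mid + 1
    return 2 * hi
```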
**Comparison of Different \(N_{\wedge}\).** In the context of SPI, finding an \(N_{\wedge}\) that is as small as possible while still guaranteeing \(\zeta\)-approximate improvement is the main objective. An overview of the different \(\zeta\) and \(N_{\wedge}\) that are available is given in Table 1.
Comparing the equations for the different \(N_{\wedge}\), we immediately see that \(N_{\wedge}^{\text{2s}}\leq N_{\wedge}^{\text{SPIBB}}\) if and only if \(2^{|S|}\geq 4|S||A|\). This means the only MDPs where standard SPIBB outperforms our 2sMDP approach are environments with a small state-space but a large action-space.
By Lemma 2, we have that the error term \(e(s,a)\) used to compute \(\zeta^{\beta}\) is minimal in the 2sMDP (see Footnote 1 below), and in particular it is smaller than the error term used to compute \(\zeta^{\text{2s}}\). Thus we always have \(N_{\wedge}^{\beta}\leq N_{\wedge}^{\text{2s}}\). In case \(2^{|S|}<4|S||A|\), it is also possible to compute both \(N_{\wedge}^{\text{SPIBB}}\) and \(N_{\wedge}^{\beta}\) and simply choose the smaller one.
Footnote 1: Technically, Lemma 2 allows for arbitrary parameters while the SPIBB algorithm only allows integers for the number of samples, and thus integer parameters in the inverse beta function, so \(\zeta^{\beta}\) is only minimal for even \(N_{\wedge}^{\beta}\). However, we can easily adapt the equation for odd \(N_{\wedge}^{\beta}\) by replacing \(N_{\wedge}^{\beta}\) by \(N_{\wedge}^{\beta}-1\) and \(N_{\wedge}^{\beta}+1\), respectively.
## 6 Implementation and Evaluation
We provide an evaluation of our approach from two different perspectives. First, a theoretical evaluation of how the different \(N_{\wedge}\) depend on the size of a hypothetical MDP, and second, a practical evaluation to investigate how smaller \(N_{\wedge}\) values translate to the performance of the improved policies.
Footnote 2: Code available at [https://github.com/LAVA-LAB/improved_spi](https://github.com/LAVA-LAB/improved_spi).
### Example Comparison of Different \(N_{\wedge}\)
To render the theoretical differences between the possible \(N_{\wedge}\) discussed at the end of Section 5 more tangible, we now give a concrete example. We assume a hypothetical MDP with \(|A|=4\), \(V_{max}=1\), \(\gamma=0.95\), and SPIBB parameters \(\delta=0.1\) and \(\zeta=0.1\). For varying sizes of the state-space, we compute all three sample size constraints: \(N_{\wedge}^{\text{SPIBB}}\), \(N_{\wedge}^{\text{2s}}\), and \(N_{\wedge}^{\beta}\). The results are shown in Figure 3, where Figure 3a shows the full plot and Figure 3b provides an excerpt with a scaled-down \(y\)-axis to differentiate between the \(N_{\wedge}^{\text{2s}}\) and \(N_{\wedge}^{\beta}\) curves. Note that the \(x\)-axis, the number of states in our hypothetical MDP, is on a log-scale. We see that \(N_{\wedge}^{\text{SPIBB}}\) grows linearly with the number of states, whereas \(N_{\wedge}^{\text{2s}}\) and \(N_{\wedge}^{\beta}\) grow logarithmically in the number of states. Further, we note that \(N_{\wedge}^{\beta}\) is significantly below \(N_{\wedge}^{\text{2s}}\), which follows from Lemma 2. Finally, the difference between \(N_{\wedge}^{\text{SPIBB}}\) and \(N_{\wedge}^{\text{2s}}\) already amounts to a factor of \(10\) for small MDPs of around a hundred states.
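The numbers behind this comparison follow directly from the closed-form expressions in Table 1; the sketch below uses the hypothetical parameters stated above (\(N_{\wedge}^{\beta}\) additionally requires the binary search sketched in Section 5.1).

```python
import math

# Hypothetical MDP from this section: |A| = 4, V_max = 1, gamma = 0.95,
# delta = 0.1, zeta = 0.1.
A, V_max, gamma, delta, zeta = 4, 1.0, 0.95, 0.1, 0.1
coef = 32 * V_max**2 / (zeta**2 * (1 - gamma)**2)
for S in (10, 100, 1000):
    n_spibb = coef * (math.log(2 * S * A / delta) + S * math.log(2))  # linear in |S|
    n_2s = coef * math.log(8 * S**2 * A**2 / delta)                   # logarithmic in |S|
    print(f"|S|={S:4d}  N_SPIBB={n_spibb:.2e}  N_2s={n_2s:.2e}")
```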
**Discussion.** While we show that a significant reduction of the required number of samples per state-action pair \(N_{\wedge}\) is possible via our two approaches, we note that even for small MDPs (_e.g._, \(|S|=100\)) we still need over \(10\) million samples per state-action pair to guarantee that an improved policy is safe _w.r.t._ the behavior policy, that is, that with probability \(1-\delta=0.9\) an improved policy has an admissible performance loss of at most \(\zeta=0.1\). This is infeasible in practice. Nevertheless, a practical evaluation of our approaches is possible by taking a different perspective, which we address in the next section.
### Evaluation in SPIBB
We integrate our novel results for computing \(\zeta^{\text{2s}},\zeta^{\beta},N_{\wedge}^{\text{2s}}\), and \(N_{\wedge}^{\beta}\) into the implementation of SPIBB [1].
**Benchmarks.** We consider two standard benchmarks used in SPI and one other well-known MDP: the \(25\)-state _Gridworld_ proposed by Laroche _et al._ (2019), the \(25\)-state _Wet Chicken_ benchmark [14], which was used to evaluate SPI approaches by Scholl _et al._ (2022), and a \(376\)-state instance of _Resource Gathering_ proposed by Barrett and Narayanan (2008).
**Behavior policy.** For the Gridworld, we use the same behavior policy as [1]. For the Wet Chicken environment, we use Q-learning with a softmax function to derive a behavior policy. The behavior policy of Resource Gathering was derived from the optimal policy by selecting each non-optimal action with probability \(10^{-5}\).
**Methodology.** Recall that in the standard SPIBB approach, \(N_{\wedge}\) is used as a hyperparameter, since the actual \(N_{\wedge}\) for reasonable \(\delta\) and \(\zeta\) are infeasible. While our methods improve significantly on \(N_{\wedge}\), the values we obtain are still infeasible in practice, as discussed in Section 6.1. We still use \(N_{\wedge}^{\text{SPIBB}}\) as a hyperparameter, and then run the SPIBB algorithm and compute the resulting \(\zeta^{\text{SPIBB}}\). This \(\zeta^{\text{SPIBB}}\) is consequently used to compute the values \(N_{\wedge}^{\text{2s}}\) and \(N_{\wedge}^{\beta}\) that ensure the same performance loss. We then run SPIBB again with these two values for \(N_{\wedge}\). As seen in the previous experiment, and detailed at the end of Section 5, for most MDPs - including our examples - we have \(N_{\wedge}^{\beta}\leq N_{\wedge}^{\text{2s}}\leq N_{\wedge}^{\text{SPIBB}}\) for a fixed \(\zeta\).
**Evaluation metrics.** For each data set size, we repeat each experiment \(1000\) times and report the mean performance of the learned policy, as well as the \(10\%\) and \(1\%\) conditional value at risk (CVaR) values [10], indicating the mean performance of the worst \(10\%\) and \(1\%\) runs. To give a complete picture, we also include the performance of basic RL (dynamic programming on the MLE-MDP [20]), the behavior policy \(\pi_{b}\), and the optimal policy \(\pi^{*}\) of the underlying MDP.
**Results.** We present the results for the Gridworld, Wet Chicken, and Resource Gathering environments for three different hyperparameters \(N_{\wedge}^{\text{SPIBB}}\) in Figures 4, 5, and 6, respectively. In all instances, we observe the improved behavior we expected from sharpening the sampling bounds with our new approaches. Smaller values for \(N_{\wedge}\) typically require smaller data sets for a policy to start improving, and this is precisely what our methods set out to do. In particular, we note that our methods (2S and Beta) are quicker to converge to an optimal policy than standard SPIBB. Beta is, as expected, the fastest, and starts to improve over the behavior policy for data sets about half the size compared to SPIBB in the Gridworld. Further, while theoretically the factor between the different \(N_{\wedge}\) does not directly translate to the whole data set size, we see that in practice this is roughly the case on all three benchmarks. Finally, we note that basic RL is unreliable compared to the SPI methods, as seen by the CVaR values being significantly below the baseline performance for several data set sizes in all three environments. This is as expected and in accordance with well-established results.
## 7 Related Work
A variant of our transformation from MDP to 2sMDP was introduced by Mayr and Munday [2023], utilizing binary trees built from auxiliary states as gadgets. Similar to our construction, Junges _et al._[2018] transform a partially observable
\begin{table}
\begin{tabular}{l l l} \hline \hline Method & Admissible performance loss \(\zeta\) & Number of samples \(N_{\wedge}\) \\ \hline Standard SPI & \(\zeta^{\text{SPI}}=\frac{2\gamma V_{max}}{1-\gamma}\sqrt{\frac{2}{N_{\wedge}^{\text{SPI}}}\log\frac{2|S||A|2^{|S|}}{\delta}}\) & \(N_{\wedge}^{\text{SPI}}=\frac{8V_{max}^{2}}{(\zeta^{\text{SPI}})^{2}(1-\gamma)^{2}}\log\frac{2|S||A|2^{|S|}}{\delta}\) (\(\star\)) \\ Standard SPIBB & \(\zeta^{\text{SPIBB}}=\frac{4V_{max}}{1-\gamma}\sqrt{\frac{2}{N_{\wedge}^{\text{SPIBB}}}\log\frac{2|S||A|2^{|S|}}{\delta}}+\tilde{\rho}\) & \(N_{\wedge}^{\text{SPIBB}}=\frac{32V_{max}^{2}}{(\zeta^{\text{SPIBB}})^{2}(1-\gamma)^{2}}\log\frac{2|S||A|2^{|S|}}{\delta}\) \\ Two-Successor SPIBB & \(\zeta^{\text{2s}}=\frac{4V_{max}}{1-\gamma}\sqrt{\frac{2}{N_{\wedge}^{\text{2s}}}\log\frac{8|S|^{2}|A|^{2}}{\delta}}+\tilde{\rho}\) & \(N_{\wedge}^{\text{2s}}=\frac{32V_{max}^{2}}{(\zeta^{\text{2s}})^{2}(1-\gamma)^{2}}\log\frac{8|S|^{2}|A|^{2}}{\delta}\) \\ Inverse beta SPIBB & \(\zeta^{\beta}=\frac{4V_{max}}{1-\gamma}\left(1-2I_{\delta_{T}/2}^{-1}\left(\frac{N_{\wedge}^{\beta}}{2}+1,\frac{N_{\wedge}^{\beta}}{2}+1\right)\right)+\tilde{\rho}\) & No closed formula available (use binary search to compute) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of the different \(\zeta\) and \(N_{\wedge}\) we obtain, where \(\delta_{T}=\frac{\delta}{|S|^{2}|A|^{2}}\) and \(\tilde{\rho}=-\rho(\pi_{\odot},\tilde{M})+\rho(\pi_{b},\tilde{M})\) is the difference in performance between optimal and behavior policy on the MLE-MDP. (\(\star\)) Standard SPI requires at least \(N_{\wedge}^{\text{SPI}}\) samples in _all_ state-action pairs.
Figure 4: Safe policy improvement on the Gridworld environment.
MDP (POMDP) [12, 13] into a _simple POMDP_, where each state either has one action choice and an arbitrary number of successor states, or has multiple available actions each with a single successor state. The same transformation was applied to _uncertain_ POMDPs [14].
Besides the main approaches to SPI mentioned in Section 3, there are a number of other noteworthy works in this area. SPIBB has been extended to _soft baseline bootstrapping_ in [15], where instead of either following the behavior policy or the optimal policy in the MLE-MDP in a state-action pair, randomization between the two is applied. However, the theoretical guarantees of this approach rely on an assumption that rarely holds [14].
Incorporating structural knowledge of the environment has been shown to improve the sample complexity of SPI algorithms [13, 13]. It is also possible to deploy the SPIBB algorithm in problems with large state space using MCTS [12]. For a more detailed overview of SPI approaches and an empirical comparison between them, see [14]. For an overview of how these algorithms scale in the number of states, we refer to [1].
Other related work investigated how to relax some of the assumptions SPI methods make. In [13], a method for estimating the behavior policy is introduced, relaxing the need to know this policy. Finally, a number of recent works extend the scope and relax common assumptions by introducing SPI in problems with partial observability [13], non-stationary dynamics [15], and multiple objectives [16].
Finally, we note that SPI is a specific _offline_ RL problem [11] and there have been significant advances in the general offline setting recently [13, 14, 15, 16, 17, 18, 19]. While these approaches may be applicable to high dimensional problems such as control tasks and problems with large observation space [14], they often ignore the inherent reliability aspect of improving over a baseline policy, as SPI algorithms do. Nevertheless, it remains a challenge to bring SPI algorithms to high-dimensional problems.
## 8 Conclusion
We presented a new approach to safe policy improvement that reduces the required size of data sets significantly. We derived new performance guarantees and applied them to state-of-the-art approaches such as SPIBB. Specifically, we introduced a novel transformation to the underlying MDP model that limits the branching factor, and provided two new ways of computing the admissible performance loss \(\zeta\) and the sample size constraint \(N_{\wedge}\), both exploiting the limited branching factor in SPI(BB). This improves the overall performance of SPI algorithms, leading to more efficient use of a given data set.
## Acknowledgments
The authors were partially supported by the DFG through the Cluster of Excellence EXC 2050/1 (CeTI, project ID 390696704, as part of Germany's Excellence Strategy), the TRR 248 (see [https://perspicuous-computing.science](https://perspicuous-computing.science), project ID 389792660), the NWO grants OCENW.KLEIN.187 (Provably Correct Policies for Uncertain Partially Observable Markov Decision Processes) and NWA.1160.18.238 (PrimaVera), and the ERC Starting Grant 101077178 (DEUCE).
Figure 5: Safe policy improvement on the Wet Chicken environment.
Figure 6: Safe policy improvement on the Resource Gathering environment.
# Asymptotic Distribution of Degree-Based Topological Indices

Mingao Yuan

2023-10-06 · arXiv:2310.03988v1 · [http://arxiv.org/abs/2310.03988v1](http://arxiv.org/abs/2310.03988v1)
###### Abstract
Topological indices play a significant role in mathematical chemistry. Given a graph \(\mathcal{G}\) with vertex set \(\mathcal{V}=\{1,2,\ldots,n\}\) and edge set \(\mathcal{E}\), let \(d_{i}\) be the degree of node \(i\). The degree-based topological index is defined as \(\mathcal{I}_{n}=\sum_{\{i,j\}\in\mathcal{E}}f(d_{i},d_{j})\), where \(f(x,y)\) is a symmetric function. In this paper, we investigate the asymptotic distribution of the degree-based topological indices of a heterogeneous Erdos-Renyi random graph. We show that after suitably centered and scaled, the topological indices converges in distribution to the standard normal distribution. Interestingly, we find that the general Randic index with \(f(x,y)=(xy)^{\tau}\) for a constant \(\tau\) exhibits a phase change at \(\tau=-\frac{1}{2}\).
## 1 Introduction
A topological index is a numerical parameter of a graph. It is graph-invariant and characterizes the topology of a graph. Topological indices are used to model many physicochemical properties in QSAR [1, 4, 12]. One of the most important types of topological indices is the degree-based topological index, which is defined as a function of the degrees of the nodes in a graph [12].
The first degree-based topological index is the Randic index [20]. It measures the branching extent of a graph. The Randic index is the
most popular and most studied index among all topological indices. It plays a central role in understanding quantitative structure-property and structure-activity relations in chemistry and pharmacology [21; 22]. Moreover, the Randic index possesses a wealth of non-trivial and interesting mathematical properties [2; 3; 5; 6; 15]. In addition, the Randic index finds countless applications in network (graph) data analysis. For instance, it was used to quantify the similarity between different networks or subgraphs of the same network [11], it serves as a quantitative characterization of network heterogeneity [10], and [8; 9] used the Randic index to measure graph robustness.
Motivated by the Randic index, various degree-based topological indices have been introduced and attracted great interest in the past years [16]. For example, [2] proposed the general Randic index, which includes the Randic index and the second Zagreb index as special cases. [26; 27] defined the general sum-connectivity index, which includes the harmonic index as a special case [28]. [23; 24] introduced the inverse sum indeg index to predict the total surface area of octane isomers. The reader is referred to [12] for more references.
One of the interesting research topics in the study of topological indices is to investigate their properties of random graphs. Recently, [17; 18] performed numeric and analytic analyses of the Randic index and the harmonic index in the Erdos-Renyi random graph. Their simulation studies show that the indices are approximately equal to one half of the number of nodes, and the distributions of the indices are symmetric around their expectations. [7; 8; 14] calculated the expectations of the Randic index, the generalized Zagreb indices and two modified Zagreb indices of the Erdos-Renyi random graph, respectively. [25] derived the asymptotic limits of the Randic index and the harmonic index of a heterogeneous random graph.
In this paper, we are interested in the asymptotic distribution of the degree-based topological index of a heterogeneous Erdos-Renyi random graph. We show that after suitably centered and scaled, the degree-based topological index converges in distribution to the standard normal distribution. We apply our results to several well-known topological indices and observe that the general Randic index exhibits a phase change phenomenon.
This paper is organized as follows. In Section 2, we present the asymptotic distribution of the topological index of a heterogeneous random graph. In Section 3, we provide several examples. The proof is deferred to Section 4.
**Notation:** We adopt the Bachmann-Landau notation throughout this paper. Let \(a_{n}\) and \(b_{n}\) be two positive sequences. Denote \(a_{n}=\Theta(b_{n})\) if \(c_{1}b_{n}\leq a_{n}\leq c_{2}b_{n}\) for some positive constants \(c_{1},c_{2}\). Denote \(a_{n}=\omega(b_{n})\) if \(\lim_{n\to\infty}\frac{a_{n}}{b_{n}}=\infty\). Denote \(a_{n}=O(b_{n})\) if \(a_{n}\leq cb_{n}\) for some positive constant \(c\). Denote \(a_{n}=o(b_{n})\) if \(\lim_{n\to\infty}\frac{a_{n}}{b_{n}}=0\). Let \(\mathcal{N}(0,1)\) be the standard normal distribution and \(X_{n}\) be a sequence of random variables. Then \(X_{n}\Rightarrow\mathcal{N}(0,1)\) means \(X_{n}\) converges in distribution to the standard normal distribution as \(n\) goes to infinity. Denote \(X_{n}=O_{P}(a_{n})\) if \(\frac{X_{n}}{a_{n}}\) is bounded in probability. Denote \(X_{n}=o_{P}(a_{n})\) if \(\frac{X_{n}}{a_{n}}\) converges to zero in probability as \(n\) goes to infinity. Let \(\mathbb{E}[X_{n}]\) and \(Var(X_{n})\) denote the expectation and variance of a random variable \(X_{n}\), respectively, and let \(\mathbb{P}[E]\) denote the probability of an event \(E\). Let \(f=f(x,y)\) be a function. For non-negative integers \(s,t\), \(f^{(s,t)}=f^{(s,t)}(x,y)\) denotes the partial derivative \(\frac{\partial^{s+t}f(x,y)}{\partial x^{s}\partial y^{t}}\). For convenience, we sometimes write \(f_{x}=f^{(1,0)}\), \(f_{y}=f^{(0,1)}\), \(f_{xx}=f^{(2,0)}\), \(f_{yy}=f^{(0,2)}\), and \(f_{xy}=f^{(1,1)}\). \(\exp(x)\) denotes the exponential function \(e^{x}\). For a positive integer \(n\), denote \([n]=\{1,2,\ldots,n\}\). Given a finite set \(E\), \(|E|\) represents the number of elements in \(E\).
## 2 Main results
A graph consists of a set of nodes (vertices) and a set of edges. Given a positive integer \(n\), an _undirected_ graph on \(\mathcal{V}=[n]\) is a pair \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{E}\) is a collection of subsets of \(\mathcal{V}\) such that \(|e|=2\) for every \(e\in\mathcal{E}\). Elements in \(\mathcal{E}\) are called edges. A graph can be conveniently represented as an adjacency matrix \(A\), where \(A_{ij}=1\) if \(\{i,j\}\) is an edge, \(A_{ij}=0\) otherwise and \(A_{ii}=0\). Since \(\mathcal{G}\) is undirected, the adjacency matrix \(A\) is symmetric. The degree \(d_{i}\) of node \(i\) is the number of edges connecting it, that is, \(d_{i}=\sum_{j\neq i}A_{ij}\). A graph is said to be random if \(A_{ij}(1\leq i<j\leq n)\) are random.
The degree-based topological index of a graph is defined as follows [1, 12].
Definition 1: The degree-based topological index of a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is defined as
\[\mathcal{I}_{n}=\sum_{\{i,j\}\in\mathcal{E}}f(d_{i},d_{j}), \tag{1}\]
where \(f(x,y)\) is a real function satisfying \(f(x,y)=f(y,x)\).
Many well-known topological indices can be expressed as (1). For example, the Randic index corresponds to \(f(x,y)=(xy)^{-\frac{1}{2}}\) and the hyper-Zagreb index corresponds to \(f(x,y)=(x+y)^{2}\).
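For illustration, a direct sketch of Definition 1 for a graph given by its adjacency matrix; the function and variable names are our own.

```python
import numpy as np

def topological_index(A, f):
    """I_n = sum over edges {i,j} in E of f(d_i, d_j) (Definition 1), for a
    symmetric 0/1 adjacency matrix A with zero diagonal."""
    d = A.sum(axis=1)                      # degrees d_i
    i, j = np.nonzero(np.triu(A, k=1))     # each edge {i,j} counted once, i < j
    return float(np.sum(f(d[i], d[j])))

# Examples from the text: the Randic index and the hyper-Zagreb index.
randic = lambda x, y: (x * y) ** -0.5
hyper_zagreb = lambda x, y: (x + y) ** 2.0
```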
Definition 2: Let \(\beta\) be a constant between zero and one, \(n\) be an positive integer, and
\[W=\{w_{ij}\in[\beta,1]|1\leq i<j\leq n,w_{ji}=w_{ij},w_{ii}=0\}.\]
Define a heterogeneous random graph \(\mathcal{G}_{n}(\beta,W)\) as
\[\mathbb{P}(A_{ij}=1)=p_{n}w_{ij},\]
where \(A_{ij}\) (\(1\leq i<j\leq n\)) are independent, \(A_{ij}=A_{ji}\) and \(p_{n}\in(0,1)\).
The expected degree of node \(i\) in \(\mathcal{G}_{n}(\beta,W)\) is \(\mathbb{E}[d_{i}]=\sum_{k\neq i}p_{n}w_{ik}\). In general, \(\mathbb{E}[d_{i}]\neq\mathbb{E}[d_{j}]\) if \(i\neq j\), that is, the expected degrees of nodes are not the same. Hence \(\mathcal{G}_{n}(\beta,W)\) is a heterogeneous random graph. When \(w_{ij}=c\) (\(1\leq i<j\leq n\)) for a constant \(c\in(0,1)\), \(\mathcal{G}_{n}(\beta,W)\) is the Erdos-Renyi random graph. It is homogeneous in the sense that nodes in it share the same expected degree.
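A minimal sketch of how \(\mathcal{G}_{n}(\beta,W)\) can be sampled from Definition 2; the seeding convention is an arbitrary choice for reproducibility.

```python
import numpy as np

def sample_graph(W, p_n, seed=0):
    """Sample an adjacency matrix of G_n(beta, W): the edges {i,j}, i < j, are
    independent Bernoulli(p_n * w_ij); A is symmetric with zero diagonal."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    U = rng.random((n, n))
    upper = np.triu(U < p_n * W, k=1).astype(int)   # strict upper triangle
    return upper + upper.T
```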
Several recent works have studied the expectations of some special topological indices of the Erdos-Renyi random graph [7, 8, 14, 17, 18]. In this paper, we derive the asymptotic distribution of the topological index \(\mathcal{I}_{n}\) of the heterogeneous random graph \(\mathcal{G}_{n}(\beta,W)\). Our results can be applied to all the topological indices studied in [7, 8, 14, 17, 18].
Before presenting our results, we introduce several notations and assumptions. Let \(w_{i(k)}=1+\sum_{l\notin\{i,k\}}p_{n}w_{il}\) and
\[\sigma_{n}^{2}=\sum_{i<j}(a_{ij}+a_{ji})^{2}p_{n}w_{ij}(1-p_{n}w_{ij}), \tag{2}\]
where
\[a_{ij}=\frac{1}{2}f(w_{i(j)},w_{j(i)})+\frac{1}{2}\sum_{l\notin\{i,j\}}p_{n}w_ {il}\big{[}f_{x}(w_{i(l)},w_{l(i)})+f_{y}(w_{i(l)},w_{l(i)})\big{]}.\]
**Assumption 1**.: _Let \(k_{0}(k_{0}\geq 3)\), \(s,t\) be non-negative integers. Suppose \(np_{n}=\omega(\log(n))\) and the following conditions hold._
1. \[\sum_{i<j}(a_{ij}+a_{ji})^{4}p_{n}=o(\sigma_{n}^{4}).\]
2. _For all non-negative integers_ \(s,t\) _satisfying_ \(s+t\leq k_{0}\)_, there is some positive constant_ \(C\) _such that_ \[|f^{(s,t)}(x,y)|\leq(xy)^{C}.\]
3. _Given_ \(s,t\) _satisfying_ \(s+t=k_{0}\)_,_ \(|f^{(s,t)}(x,y)|\) _is monotone in_ \(x\) _and_ \(y\)_._
4. _For a large positive constant_ \(M\) _and positive sequences_ \(a_{n},b_{n}\in[(\log(np_{n}))^{-2},M]\)_, the following holds. For_ \(s+t=k_{0}\)_,_ \[n(np_{n})^{\frac{k_{0}}{2}+1}|f^{(s,t)}(a_{n}np_{n},b_{n}np_{n})|=o\left(\sigma _{n}\right).\]
5. _For_ \(1\leq s+t\leq k_{0}-1\)_,_ \[n(np_{n})^{2(s+t)-1}|f^{(s,t)}(np_{n},np_{n})|^{2}=o\left(\sigma_{n}^{2}\right).\]
Assumption 1 is not restrictive and many common degree-based topological indices satisfy this assumption as shown later. Under Assumption 1, we derive the asymptotic distribution of the topological index \(\mathcal{I}_{n}\) of \(\mathcal{G}_{n}(\beta,W)\) as follows.
**Theorem 1**.: _Let \(\mathcal{I}_{n}\) be the topological index defined in (1) of the random
graph \(\mathcal{G}_{n}(\beta,W)\) and \(\sigma_{n}^{2}\) be defined in (2). Suppose Assumption 1 holds. Then_
\[\frac{\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}]}{\sigma_{n}}\Rightarrow \mathcal{N}(0,1), \tag{3}\]
_as \(n\) goes to infinity. In addition, the expectation \(\mathbb{E}[\mathcal{I}_{n}]\) has the following asymptotic expression_
\[\mathbb{E}[\mathcal{I}_{n}]=\left(1+O\left(\frac{1}{np_{n}}\right)\right)\sum_ {1\leq i<j\leq n}p_{n}w_{ij}f(w_{i(j)},w_{j(i)}), \tag{4}\]
_where the error rate \(\frac{1}{np_{n}}\) cannot be improved._
Based on Theorem 1, the degree-based topological index \(\mathcal{I}_{n}\) (suitably centered and scaled) of the heterogeneous random graph \(\mathcal{G}_{n}(\beta,W)\) converges in distribution to the standard normal distribution. As far as we know, this is the first theoretical result on the limiting distribution of topological indices. Moreover, Theorem 1 provides the best approximation of the expectation of \(\mathcal{I}_{n}\), in the sense that the error rate \(\frac{1}{np_{n}}\) cannot be improved. For some special topological indices of the Erdos-Renyi random graph, it is possible to get an exact and compact expression of \(\mathbb{E}[\mathcal{I}_{n}]\). For instance, [14] and [7] obtained the exact expressions of the expectation of the hyper-Zagreb index and the forgotten topological index of the Erdos-Renyi random graph, respectively. However, for most topological indices, it seems impossible to get exact and closed-form expressions of the expectations [7]. Our result (4) provides an approximation of the expectations.
The proof of Theorem 1 proceeds by decomposing \(\mathcal{I}_{n}\) as a sum of leading term and remainder term, followed by finding the limiting distribution of the leading term and showing the remainder term is negligible. The condition \((C1)\) of Assumption 1 is used to prove the leading term converges in distribution to the standard normal distribution. The conditions \((C2)\)-\((C5)\) are needed to bound the remainder term. The condition \(np_{n}=\omega(\log(n))\) requires the random graph to be relatively dense. This condition is common in theoretical network analysis. Assumption 1 is weak. We shall provide several examples of degree-based topological indices that satisfy this assumption in the subsequent section.
## 3 Application to several topological indices
In this section, we apply Theorem 1 to several well-known topological indices of a special heterogeneous random graph.
Let \(p_{n}=n^{-\alpha}\) for a constant \(\alpha\in(0,1)\) and \(w_{ij}=e^{-\frac{\kappa i}{n}}e^{-\frac{\kappa j}{n}},(i\neq j)\) with non-negative constant \(\kappa\). Then \(e^{-2\kappa}\leq w_{ij}\leq 1\). Denote the corresponding random graph as \(\mathcal{G}_{n}(\alpha,\kappa)\). When \(\kappa=0\), \(\mathcal{G}_{n}(\alpha,0)\) is the Erdos-Renyi random graph. In this case, we denote it as \(\mathcal{G}_{n}(\alpha)\) for convenience. For \(\kappa>0\), \(\mathcal{G}_{n}(\alpha,\kappa)\) is heterogeneous.
Denote \(c(\kappa)=\frac{1-e^{-\kappa}}{\kappa}\) for \(\kappa>0\) and \(c(0)=1\). Note that
\[\sum_{i=1}^{n}e^{-\frac{\kappa i}{n}}=\frac{e^{-\frac{\kappa}{n}}(1-e^{-\kappa })}{1-e^{-\frac{\kappa}{n}}}=nc(\kappa)+O(1),\ \ \ \ \ \kappa>0.\]
Then
\[w_{i(k)} = 1+\sum_{l\notin\{i,k\}}p_{n}w_{il}\] \[= 1+p_{n}e^{-\frac{\kappa i}{n}}\left(nc(\kappa)+O(1)-e^{-\frac{ \kappa i}{n}}-e^{-\frac{\kappa k}{n}}\right)\] \[= 1+np_{n}c(\kappa)e^{-\frac{\kappa i}{n}}+O\left(p_{n}\right)\] \[= np_{n}c(\kappa)e^{-\frac{\kappa i}{n}}+O\left(1\right),\ \ \kappa>0.\]
When \(\kappa=0\),
\[w_{i(k)}=1+\sum_{l\notin\{i,k\}}p_{n}=1+(n-2)p_{n}.\]
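The displayed expansion of \(w_{i(k)}\) can be checked numerically; the following short sketch (ours; the indices \(i,k\) and the parameters are arbitrary choices) compares the exact value of \(w_{i(k)}\) with the leading term \(np_{n}c(\kappa)e^{-\kappa i/n}\).

```python
# A small numeric check (ours) of the displayed expansion
# w_{i(k)} = n p_n c(kappa) e^{-kappa i/n} + O(1) for kappa > 0.
import numpy as np

n, alpha, kappa = 2000, 0.3, 1.5
p = n ** (-alpha)
i, k = 7, 11                                    # any two distinct indices (1-based)
w = np.exp(-kappa * np.arange(1, n + 1) / n)    # w_l = e^{-kappa l / n}
mask = np.ones(n, dtype=bool)
mask[[i - 1, k - 1]] = False                    # exclude l in {i, k}
w_ik = 1.0 + p * (w[i - 1] * w[mask]).sum()     # exact w_{i(k)}
c = (1 - np.exp(-kappa)) / kappa                # c(kappa)
print(w_ik, n * p * c * w[i - 1])               # difference stays O(1) in n
```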
### 3.1 The general Randic index
The general Randic index is a generalization of the well-known Randic index and has been widely studied in the literature [8, 17, 18]. Let \(f(x,y)=(xy)^{\tau}\) for a non-zero constant \(\tau\). The general Randic index \(\mathcal{I}_{n}\) is defined as
\[\mathcal{I}_{n}=\sum_{\{i,j\}\in\mathcal{E}}(d_{i}d_{j})^{\tau}. \tag{5}\]
When \(\tau=-\frac{1}{2}\), \(\mathcal{I}_{n}\) is the Randic index.
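For concreteness, a small helper (ours, not from the paper; the triangle-graph usage is an illustrative assumption) that evaluates (5) from an adjacency matrix:

```python
# A hedged helper (ours, not from the paper) evaluating the general Randic
# index (5) from an adjacency matrix; tau = -1/2 gives the classical index.
import numpy as np

def general_randic(A, tau):
    d = A.sum(axis=1)                       # degrees d_1, ..., d_n
    iu, ju = np.nonzero(np.triu(A, k=1))    # edges {i, j} with i < j
    return ((d[iu] * d[ju]) ** tau).sum()

# Usage on a triangle: 3 edges, all degrees 2, so the index is 3 * (2*2)^tau.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(general_randic(A, -0.5))              # 3 * 4^{-1/2} = 1.5
```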
Given non-negative integers \(s,t\), straightforward computation yields
\[f^{(s,t)}(x,y)=\left(\prod_{k=0}^{s-1}(\tau-k)\right)\left(\prod_{k=0}^{t-1}(\tau -k)\right)x^{\tau-s}y^{\tau-t}.\]
Then
\[a_{ij}\] \[= \frac{1}{2}\left[(np_{n})^{2}c(\kappa)^{2}e^{-\frac{\kappa i}{n}}e ^{-\frac{\kappa j}{n}}+O\left(np_{n}\right)\right]^{\tau}\] \[+\frac{1}{2}\sum_{l\notin\{i,j\}}p_{n}e^{-\frac{\kappa i}{n}}e^{- \frac{\kappa l}{n}}\tau\left(np_{n}c(\kappa)e^{-\frac{\kappa i}{n}}+O\left(1 \right)\right)^{\tau-1}\] \[\times\left(np_{n}c(\kappa)e^{-\frac{\kappa l}{n}}+O\left(1\right) \right)^{\tau}\] \[+\frac{1}{2}\sum_{l\notin\{i,j\}}p_{n}e^{-\frac{\kappa i}{n}}e^{- \frac{\kappa l}{n}}\tau\left(np_{n}c(\kappa)e^{-\frac{\kappa i}{n}}+O\left(1 \right)\right)^{\tau}\] \[\times\left(np_{n}c(\kappa)e^{-\frac{\kappa l}{n}}+O\left(1\right) \right)^{\tau-1}.\]
Note that \(e^{-\kappa}\leq e^{-\frac{\kappa i}{n}}\leq 1\). If \(\tau>0\), then \(a_{ij}=\Theta((np_{n})^{2\tau})\). In this case,
\[\sigma_{n}^{2}=\sum_{i<j}(a_{ij}+a_{ji})^{2}p_{n}=\Theta\left(n(np_{n})^{4\tau +1}\right).\]
Hence
\[\frac{\sum_{i<j}(a_{ij}+a_{ji})^{4}p_{n}}{\sigma_{n}^{4}} = O\left(\frac{n(np_{n})^{8\tau+1}}{n^{2}(np_{n})^{8\tau+2}}\right) =o(1).\]
For \(s+t\geq 1\),
\[n(np_{n})^{2(s+t)}|f^{(s,t)}(np_{n},np_{n})|^{2} = \Theta\left(n(np_{n})^{4\tau}\right)=o\left(\sigma_{n}^{2}\right).\]
Let \(k_{0}=\max\left\{\lfloor 1+\frac{1}{1-\alpha}\rfloor+1,3\right\}\). Then \(k_{0}>1+\frac{1}{1-\alpha}\). For \(s+t=k_{0}\), it is easy to verify that
\[n(np_{n})^{\frac{k_{0}}{2}+1}|f^{(s,t)}(np_{n},np_{n})|=\Theta\left(n(np_{n}) ^{1+2\tau-\frac{k_{0}}{2}}\right)=o\left(\sigma_{n}\right).\]
Then Assumption 1 holds and Theorem 1 applies.
**Corollary 1**.: _Let \(\mathcal{I}_{n}\) be the general Randic index defined in (5) of the random graph \(\mathcal{G}_{n}(\alpha,\kappa)\) and \(\sigma_{n}^{2}\) be defined in (2). If \(\tau>0\), then_
\[\frac{\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}]}{\sigma_{n}}\Rightarrow \mathcal{N}(0,1), \tag{6}\]
_as \(n\) goes to infinity. In addition, the expectation \(\mathbb{E}[\mathcal{I}_{n}]\) has the following asymptotic expression_
\[\mathbb{E}[\mathcal{I}_{n}]=\left(1+O\left(\frac{1}{np_{n}}\right)\right) \sum_{1\leq i<j\leq n}p_{n}w_{ij}(w_{i(j)}w_{j(i)})^{\tau}, \tag{7}\]
_where the error rate \(\frac{1}{np_{n}}\) cannot be improved._
When \(\tau<0\), Assumption 1 may not hold. To see this, let \(\kappa=0\). Then
\[a_{ij} = \frac{(1+(n-2)p_{n})^{2\tau}}{2}+\tau(1+(n-2)p_{n})^{2\tau-1}(n-2 )p_{n}\] \[= (1+(n-2)p_{n})^{2\tau}\left(\frac{1+2\tau}{2}-\frac{\tau}{1+(n-2 )p_{n}}\right).\]
If \(\tau\neq-\frac{1}{2}\), then \(a_{ij}=\Theta\left((np_{n})^{2\tau}\right)\). Similar to the case \(\tau>0\), Assumption 1 holds and Theorem 1 applies.
If \(\tau=-\frac{1}{2}\), then \(a_{ij}=\Theta\left(\frac{1}{(np_{n})^{2}}\right)\). In this case, \(\sigma_{n}^{2}=\Theta\left(\frac{n}{(np_{n})^{3}}\right)\) and
\[n(np_{n})^{2(s+t)-1}|f^{(s,t)}(np_{n},np_{n})|^{2} = \Theta\left(\frac{n}{(np_{n})^{3}}\right).\]
Clearly, \(\frac{n}{(np_{n})^{3}}\neq o(\sigma_{n}^{2})\). Hence Assumption 1 does not hold. We have to study this case separately.
When \(\tau=-\frac{1}{2}\), \(\mathcal{I}_{n}\) is the well-known Randic index. The Randic index, introduced in [20], is perhaps the first degree-based topological index. Recently, [17] performed simulation studies of the expectation and distribution of the Randic index of the Erdos-Renyi random graph. Its asymptotic limit was given in [25]. Here we derive the asymptotic distribution of the Randic index of the Erdos-Renyi random graph as follows.
**Theorem 2**.: _Let \(\mathcal{I}_{n}\) be the general Randic index of the Erdos-Renyi random graph \(\mathcal{G}_{n}(\alpha)\) with constant \(\alpha\in(0,1)\). Then_
\[\frac{\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}]}{\sigma_{n}}\Rightarrow \mathcal{N}(0,1), \tag{8}\]
_where_
\[\mathbb{E}[\mathcal{I}_{n}]=\frac{n(np_{n})^{2\tau+1}}{2}\left(1+O\left(\frac{ 1}{np_{n}}\right)\right),\]
\[\sigma_{n}^{2}=\frac{n(n-1)(n-2)p_{n}^{2}(1-p_{n})^{2}}{32(1+(n-2)p_{n})^{4}}=\Theta\left(\frac{n}{(np_{n})^{2}}\right),\]
_for \(\tau=-\frac{1}{2}\) and_
\[\sigma_{n}^{2}=\frac{(1+2\tau)^{2}}{2}n(n-1)p_{n}(1+(n-2)p_{n})^{4\tau}(1+o(1) )=\Theta\left(n(np_{n})^{4\tau+1}\right),\]
_for \(\tau\neq-\frac{1}{2}\)._
Based on Theorem 2, we have an interesting finding. For \(\tau\neq-\frac{1}{2}\), the order of \(\sigma_{n}^{2}\) is \(n(np_{n})^{4\tau+1}\), while for \(\tau=-\frac{1}{2}\), the order of \(\sigma_{n}^{2}\) is \(\frac{n}{(np_{n})^{2}}\). Note that
\[n(np_{n})^{4\left(-\frac{1}{2}\right)+1}=\frac{n}{np_{n}}=\omega\left(\frac{n }{(np_{n})^{2}}\right).\]
Therefore, as a function of \(\tau\), the order of \(\sigma_{n}^{2}\) is continuous at \(\tau\neq-\frac{1}{2}\), but discontinuous at \(\tau=-\frac{1}{2}\). In this sense, the general Randic index exhibits a phase change at \(\tau=-\frac{1}{2}\).
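This phase change can also be seen empirically. The sketch below (ours; the replication count and the comparison value \(\tau=-0.3\) are illustrative choices) estimates the standard deviation of \(\mathcal{I}_{n}\) by simulation and divides it by the orders predicted by Theorem 2; both ratios should remain roughly stable as \(n\) grows.

```python
# A simulation sketch (ours, not from the paper) of the phase change in
# Theorem 2: the sample std of I_n scales like sqrt(n (np)^{4 tau + 1}) for
# tau != -1/2 but like sqrt(n / (np)^2) at tau = -1/2.
import numpy as np

rng = np.random.default_rng(1)

def index_std(n, p, tau, reps=200):
    """Sample std of the general Randic index over `reps` Erdos-Renyi draws."""
    vals = np.empty(reps)
    for r in range(reps):
        A = np.triu((rng.random((n, n)) < p).astype(float), k=1)
        A = A + A.T
        d = A.sum(axis=1)
        iu, ju = np.nonzero(np.triu(A, k=1))
        vals[r] = ((d[iu] * d[ju]) ** tau).sum()
    return vals.std()

alpha = 0.4
for n in (150, 300, 600):
    p = n ** (-alpha)
    print(n,
          index_std(n, p, -0.5) / np.sqrt(n / (n * p) ** 2),   # tau = -1/2 scaling
          index_std(n, p, -0.3) / np.sqrt(n * (n * p) ** (4 * (-0.3) + 1)))
```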
### 3.2 Hyper-Zagreb index
The Zagreb indices and their variants are frequently used to measure the physical-chemical properties of compounds. Recently, [14] studied the expectation of the hyper-Zagreb index of the Erdos-Renyi random graph. Let \(f(x,y)=(x+y)^{2}\). The hyper-Zagreb index \(\mathcal{I}_{n}\) is defined as
\[\mathcal{I}_{n}=\sum_{\{i,j\}\in\mathcal{E}}(d_{i}+d_{j})^{2}. \tag{9}\]
Clearly, \(f^{(s,t)}(x,y)=0\) for \(s+t\geq 3\) and
\[f_{x}(x,y)=f_{y}(x,y)=2(x+y),\ \ f_{xx}(x,y)=f_{yy}(x,y)=f_{xy}(x,y)=2.\]
Straightforward calculation yields
\[\sum_{l\notin\{i,j\}}w_{il}\big{[}f_{x}(w_{i(l)},w_{l(i)})+f_{y}( w_{i(l)},w_{l(i)})\big{]}\] \[= 4e^{-\frac{\kappa i}{n}}\sum_{l\notin\{i,j\}}e^{-\frac{\kappa l}{ n}}\left(np_{n}c(\kappa)e^{-\frac{\kappa i}{n}}+np_{n}c(\kappa)e^{-\frac{\kappa l}{ n}}+O\left(1\right)\right)\] \[= 4e^{-\frac{\kappa i}{n}}\left[n^{2}p_{n}c(\kappa)^{2}e^{-\frac{ \kappa i}{n}}+n^{2}p_{n}c(\kappa)c(2\kappa)+O(n)\right]\] \[= 4n^{2}p_{n}c(\kappa)^{2}e^{-\frac{2\kappa i}{n}}+4n^{2}p_{n}c( \kappa)c(2\kappa)e^{-\frac{\kappa i}{n}}+O(n).\]
Since \(e^{-\kappa}\leq e^{-\frac{\kappa i}{n}}\leq 1\) and \(e^{-2\kappa}\leq e^{-\frac{2\kappa i}{n}}\leq 1\), then
\[a_{ij} = \frac{(np_{n})^{2}c(\kappa)^{2}}{2}\left(e^{-\frac{\kappa i}{n}} +e^{-\frac{\kappa j}{n}}\right)^{2}\] \[+2(np_{n})^{2}c(\kappa)^{2}e^{-\frac{2\kappa i}{n}}+2(np_{n})^{2} c(\kappa)c(2\kappa)e^{-\frac{\kappa i}{n}}+O(np_{n})\] \[= (np_{n})^{2}\Bigg{[}\frac{5}{2}c(\kappa)^{2}e^{-\frac{2\kappa i}{ n}}+\frac{1}{2}c(\kappa)^{2}e^{-\frac{2\kappa j}{n}}\] \[+c(\kappa)^{2}e^{-\frac{\kappa i}{n}}e^{-\frac{\kappa j}{n}}+2c( \kappa)c(2\kappa)e^{-\frac{\kappa i}{n}}\Bigg{]}+O(np_{n})\] \[= \Theta\left((np_{n})^{2}\right).\]
Consequently, we have \(\sigma_{n}^{2}=\Theta\left(n(np_{n})^{5}\right)\) and
\[\frac{\sum_{i<j}(a_{ij}+a_{ji})^{4}p_{n}}{\sigma_{n}^{4}} = O\left(\frac{n(np_{n})^{9}}{n^{2}(np_{n})^{10}}\right)=o(1).\]
For \(s+t\geq 3\), \(f^{(s,t)}(x,y)=0\). For \(s+t=2\), we have
\[n(np_{n})^{4}=o\left(\sigma_{n}^{2}\right).\]
Then Assumption 1 holds and Theorem 1 applies.
**Corollary 2**.: _Let \(\mathcal{I}_{n}\) be the hyper-Zagreb index defined in (9) of the random graph \(\mathcal{G}_{n}(\alpha,\kappa)\) and \(\sigma_{n}^{2}\) be defined in (2). Then_
\[\frac{\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}]}{\sigma_{n}}\Rightarrow \mathcal{N}(0,1), \tag{10}\]
_as \(n\) goes to infinity. In addition, the expectation \(\mathbb{E}[\mathcal{I}_{n}]\) has the following asymptotic expression_
\[\mathbb{E}[\mathcal{I}_{n}]=\left(1+O\left(\frac{1}{np_{n}}\right)\right) \sum_{1\leq i<j\leq n}p_{n}w_{ij}(w_{i(j)}+w_{j(i)})^{2}, \tag{11}\]
_where the error rate \(\frac{1}{np_{n}}\) cannot be improved._
By Theorem 1 in [14], the expectation of the hyper-Zagreb index of the Erdos-Renyi random graph \(\mathcal{G}_{n}(\alpha)\) is equal to
\[\mathbb{E}[\mathcal{I}_{n}]=n(n-1)(n-2)(2n-5)p_{n}^{3}+5n(n-1)(n-2)p_{n}^{2}+2 n(n-1)p_{n}. \tag{12}\]
By (11), we have
\[\mathbb{E}[\mathcal{I}_{n}]=2n(n-1)(n-2)^{2}p_{n}^{3}\left(1+O\left(\frac{1} {np_{n}}\right)\right). \tag{13}\]
Then our result (13) is consistent with (12). In addition, by (12), the error rate \(\frac{1}{np_{n}}\) cannot be improved, as stated in Corollary 2.
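The consistency of (12) and (13) is easy to check numerically; in the following sketch (ours, not from the paper), the relative error between the exact expectation (12) and the approximation (13) is compared with \(\frac{1}{np_{n}}\) for a few values of \(n\).

```python
# A quick numeric check (ours) that the exact expectation (12) and the
# approximation (13) agree up to a relative error of order 1/(n p_n).
alpha = 0.3
for n in (10**3, 10**4, 10**5):
    p = n ** (-alpha)
    exact = (n*(n-1)*(n-2)*(2*n-5)*p**3
             + 5*n*(n-1)*(n-2)*p**2 + 2*n*(n-1)*p)      # equation (12)
    approx = 2*n*(n-1)*(n-2)**2 * p**3                  # equation (13)
    print(n, abs(exact - approx) / exact, 1 / (n * p))  # comparable magnitudes
```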
### 3.3 Forgotten topological index
The forgotten topological index is another chemical index. [7] studied the expectation of the forgotten topological index of the Erdos-Renyi random graph. Let \(f(x,y)=x^{2}+y^{2}\). The forgotten topological index \(\mathcal{I}_{n}\) is defined as
\[\mathcal{I}_{n}=\sum_{\{i,j\}\in\mathcal{E}}(d_{i}^{2}+d_{j}^{2}). \tag{14}\]
Similar to the hyper-Zagreb index, it is easy to verify that Assumption 1 holds. Then Theorem 1 applies.
**Corollary 3**.: _Let \(\mathcal{I}_{n}\) be the forgotten topological index defined in (14) of
the random graph \(\mathcal{G}_{n}(\alpha,\kappa)\) and \(\sigma_{n}^{2}\) be defined in (2). Then_
\[\frac{\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}]}{\sigma_{n}}\Rightarrow \mathcal{N}(0,1), \tag{15}\]
_as \(n\) goes to infinity. In addition, the expectation \(\mathbb{E}[\mathcal{I}_{n}]\) has the following asymptotic expression_
\[\mathbb{E}[\mathcal{I}_{n}]=\left(1+O\left(\frac{1}{np_{n}}\right)\right) \sum_{1\leq i<j\leq n}p_{n}w_{ij}(w_{i(j)}^{2}+w_{j(i)}^{2}), \tag{16}\]
_where the error rate \(\frac{1}{np_{n}}\) cannot be improved._
For the Erdos-Renyi random graph \(\mathcal{G}_{n}(\alpha)\), the expectation of the forgotten topological index [7] is equal to
\[\mathbb{E}[\mathcal{I}_{n}]=n(n-1)(n-2)(n-3)p_{n}^{3}+3n(n-1)(n-2)p_{n}^{2}+n (n-1)p_{n}. \tag{17}\]
By (16), we have
\[\mathbb{E}[\mathcal{I}_{n}]\] \[= \left(1+O\left(\frac{1}{np_{n}}\right)\right)\frac{n(n-1)p_{n}}{ 2}\Big{[}(1+(n-2)p_{n})^{2}+(1+(n-2)p_{n})^{2}\Big{]}\] \[= n(n-1)(n-2)^{2}p_{n}^{3}\left(1+O\left(\frac{1}{np_{n}}\right) \right).\]
Hence our approximation (16) is consistent with (17). Moreover, by (17), the error rate \(\frac{1}{np_{n}}\) cannot be improved as in Corollary 3.
### 3.4 The inverse sum indeg index
The inverse sum indeg index is a significant predictor of the total surface area of octane isomers [19; 23; 24]. Let \(f(x,y)=\frac{xy}{x+y}\). The inverse sum indeg index \(\mathcal{I}_{n}\) is defined as
\[\mathcal{I}_{n}=\sum_{\{i,j\}\in\mathcal{E}}\frac{d_{i}d_{j}}{d_{i}+d_{j}}. \tag{18}\]
As far as we know, the inverse sum indeg index of random graphs has not been studied in the literature. Here we provide its asymptotic distribution and an approximation of its expectation.
Let \(g(x,y)=xy\) and \(h(x,y)=\frac{1}{x+y}\). For simplicity, we denote \(f(x,y)\) as \(f\). Then \(f=gh\). Given non-negative integers \(s,t\), \(h^{(s,t)}=\frac{c_{s,t}}{(x+y)^{1+s+t}}\), where \(c_{s,t}\) is a constant depending on \(s,t\). Straightforward calculation yields
\[f^{(s,0)} = \sum_{r=0}^{s}\binom{s}{r}g^{(r,0)}h^{(s-r,0)}=gh^{(s,0)}+sg^{(1, 0)}h^{(s-1,0)}\] \[= \frac{c_{s,0}xy}{(x+y)^{1+s}}+\frac{sc_{s-1,0}y}{(x+y)^{s}},\] \[f^{(0,t)} = \sum_{r=0}^{t}\binom{t}{r}g^{(0,r)}h^{(0,t-r)}=gh^{(0,t)}+tg^{(0, 1)}h^{(0,t-1)}\] \[= \frac{c_{0,t}xy}{(x+y)^{1+t}}+\frac{tc_{0,t-1}x}{(x+y)^{t}}.\]
Further, for \(s\geq 1\) or \(t\geq 1\), we have
\[f^{(s,t)} = \sum_{r=0}^{t}\binom{t}{r}g^{(0,r)}h^{(s,t-r)}+s\sum_{r=0}^{t} \binom{t}{r}g^{(1,r)}h^{(s-1,t-r)}\] \[= gh^{(s,t)}+tg^{(0,1)}h^{(s,t-1)}+sg^{(1,0)}h^{(s-1,t)}+stg^{(1,1 )}h^{(s-1,t-1)}\] \[= \frac{c_{s,t}xy}{(x+y)^{1+s+t}}+\frac{tc_{s,t-1}x}{(x+y)^{s+t}}+ \frac{sc_{s-1,t}y}{(x+y)^{s+t}}+\frac{stc_{s-1,t-1}}{(x+y)^{s+t-1}}.\]
Hence, for \(s+t\geq 1\), \(|f^{(s,t)}(np_{n},np_{n})|\) can be bounded as follows
\[|f^{(s,t)}(np_{n},np_{n})|=O\left(\frac{1}{(np_{n})^{s+t-1}}\right).\]
Note that
\[f_{x}(np_{n},np_{n})=f_{y}(np_{n},np_{n})=\frac{(np_{n})^{2}}{(np_{n}+np_{n}) ^{2}}=\frac{1}{4}.\]
Then \(a_{ij}=\Theta\left(np_{n}\right)\), \(\sigma_{n}^{2}=\Theta\left(n(np_{n})^{3}\right)\) and
\[\frac{\sum_{i<j}(a_{ij}+a_{ji})^{4}p_{n}}{\sigma_{n}^{4}} = O\left(\frac{n(np_{n})^{5}}{n^{2}(np_{n})^{6}}\right)=o(1).\]
When \(s+t\geq 1\), we have
\[n(np_{n})^{2(s+t)}|f^{(s,t)}(np_{n},np_{n})|^{2} = O\left(n(np_{n})^{2}\right)=o\left(\sigma_{n}^{2}\right).\]
Let \(k_{0}=\max\{\lfloor 1+\frac{1}{1-\alpha}\rfloor+1,3\}\). Then \(k_{0}>1+\frac{1}{1-\alpha}\) and
\[n(np_{n})^{\frac{k_{0}}{2}+1}|f^{(s,t)}(np_{n},np_{n})|=O\left(n(np_{n})^{2- \frac{k_{0}}{2}}\right)=o\left(\sigma_{n}\right).\]
Assumption 1 holds. Then Theorem 1 applies.
**Corollary 4**.: _Let \(\mathcal{I}_{n}\) be the inverse sum indeg index defined in (18) of the random graph \(\mathcal{G}_{n}(\alpha,\kappa)\) and \(\sigma_{n}^{2}\) be defined in (2). Then_
\[\frac{\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}]}{\sigma_{n}}\Rightarrow \mathcal{N}(0,1), \tag{19}\]
_as \(n\) goes to infinity. In addition, the expectation \(\mathbb{E}[\mathcal{I}_{n}]\) has the following asymptotic expression_
\[\mathbb{E}[\mathcal{I}_{n}]=\left(1+O\left(\frac{1}{np_{n}}\right)\right) \sum_{1\leq i<j\leq n}p_{n}w_{ij}\frac{w_{i(j)}w_{j(i)}}{w_{i(j)}+w_{j(i)}}, \tag{20}\]
_where the error rate \(\frac{1}{np_{n}}\) cannot be improved._
For the Erdos-Renyi random graph \(\mathcal{G}_{n}(\alpha)\), the expectation of the inverse sum indeg index can be expressed as
\[\mathbb{E}[\mathcal{I}_{n}]=\left(1+O\left(\frac{1}{np_{n}}\right)\right) \frac{n(n-1)(n-2)p_{n}^{2}}{4}.\]
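As a sanity check of this approximation, the following Monte Carlo sketch (ours; the parameters are illustrative) compares the empirical mean of the inverse sum indeg index on \(\mathcal{G}_{n}(\alpha)\) with \(\frac{n(n-1)(n-2)p_{n}^{2}}{4}\).

```python
# A Monte Carlo sketch (ours; parameters are illustrative) checking the stated
# approximation E[I_n] ~ n(n-1)(n-2) p_n^2 / 4 on the Erdos-Renyi graph.
import numpy as np

rng = np.random.default_rng(2)
n, alpha, reps = 500, 0.3, 200
p = n ** (-alpha)
vals = np.empty(reps)
for r in range(reps):
    A = np.triu((rng.random((n, n)) < p).astype(float), k=1)
    A = A + A.T
    d = A.sum(axis=1)
    iu, ju = np.nonzero(np.triu(A, k=1))
    vals[r] = (d[iu] * d[ju] / (d[iu] + d[ju])).sum()   # inverse sum indeg index
print(vals.mean(), n * (n - 1) * (n - 2) * p ** 2 / 4)  # should be close
```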
## 4 Proof of main results
In this section, we provide detailed proofs of Theorem 1 and Theorem 2. It is not easy to work with \(\mathcal{I}_{n}\) as defined in (1) directly. Instead, we provide an alternative expression of \(\mathcal{I}_{n}\) as follows
\[\mathcal{I}_{n}=\sum_{1\leq i<j\leq n}A_{ij}f(d_{i(j)},d_{j(i)})=\frac{1}{2} \sum_{i\neq j}A_{ij}f(d_{i(j)},d_{j(i)}), \tag{21}\]
where \(d_{i(j)}=1+\sum_{l\notin\{i,j\}}A_{il}\) and \(d_{j(i)}=1+\sum_{l\notin\{i,j\}}A_{jl}\). Indeed, on the event \(A_{ij}=1\) we have \(d_{i}=d_{i(j)}\) and \(d_{j}=d_{j(i)}\), so (21) agrees with (1). Note that \(A_{ij}\), \(d_{i(j)}\) and \(d_{j(i)}\) are independent, \(\mathbb{E}[d_{i(j)}]=w_{i(j)}\) and \(\mathbb{E}[d_{j(i)}]=w_{j(i)}\). We will use these facts frequently in the proof.
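The identity between (1) and (21) can also be verified numerically; the following small check (ours; the choice of \(f\) and the parameters are arbitrary) computes both expressions on a sampled graph.

```python
# A small sanity check (ours; f and the parameters are arbitrary) that the
# alternative expression (21) coincides with definition (1).
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 0.2
A = np.triu((rng.random((n, n)) < p).astype(float), k=1)
A = A + A.T
d = A.sum(axis=1)
f = lambda x, y: (x + y) ** 2
iu, ju = np.nonzero(np.triu(A, k=1))
lhs = f(d[iu], d[ju]).sum()              # definition (1): sum over edges
# d_{i(j)} = 1 + sum_{l not in {i,j}} A_il = d_i - A_ij + 1 = d_i on an edge
dij = d[iu] - A[iu, ju] + 1
dji = d[ju] - A[iu, ju] + 1
rhs = f(dij, dji).sum()                  # expression (21), restricted to edges
print(lhs == rhs)                        # True
```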
### 4.1 Lemmas
Before proving Theorem 1 and Theorem 2, we present two lemmas.
**Lemma 1**.: _Let \(\mathcal{G}_{n}(\beta,W)\) be defined in Definition 2, \(\delta_{n}=\left(\log(np_{n})\right)^{-2}\) and \(M\) be a constant greater than \(\frac{e^{2}}{1-p_{n}\beta}\). For any distinct \(i,j\in[n]\), we have_
\[\mathbb{P}(d_{i(j)}-1=k)\leq\exp(-np_{n}\beta(1+o(1))),\ \ \ \ k\leq \delta_{n}np_{n},\]
\[\mathbb{P}(d_{i(j)}-1=k)\leq\exp(-np_{n}\beta(1+o(1))),\ \ \ \ k\geq Mnp_{n}.\]
**Proof of Lemma 1:** Given distinct indices \(i,j\), let \(\theta_{ij}=\{p_{n}w_{il}|l\in[n]\setminus\{i,j\}\}\). Then \(d_{i(j)}-1\) follows the Poisson-Binomial distribution \(PB(\theta_{ij})\). Recall that \(\beta\leq w_{ij}\leq 1\). Then
\[\mathbb{P}(d_{i(j)}-1=k) = \sum_{S\subset[n]\setminus\{i,j\},|S|=k}\prod_{l\in S}p_{n}w_{il} \prod_{l\in S^{C}\setminus\{i,j\}}(1-p_{n}w_{il}) \tag{22}\] \[\leq \sum_{S\subset[n]\setminus\{i,j\},|S|=k}\prod_{l\in S}p_{n}\prod_ {l\in S^{C}\setminus\{i,j\}}(1-p_{n}\beta)\] \[= {n-2\choose k}p_{n}^{k}(1-p_{n}\beta)^{n-2-k}.\]
Note that \({n-2\choose k}\leq e^{k\log n-k\log k+k}\) and \((1-p_{n}\beta)^{n-2-k}=e^{(n-2-k)\log(1-p_{n}\beta)}\). Then by (22) we get
\[\mathbb{P}(d_{i(j)}-1=k) \tag{23}\] \[\leq \exp\left(k\log(np_{n})-k\log k+k+(n-2-k)\log(1-p_{n}\beta)\right).\]
Let \(g(k)=k\log(np_{n})-k\log k+k+(n-2-k)\log(1-p_{n}\beta)\). Considering \(k\) as continuous variable, the derivative of \(g(k)\) with respect to \(k\) is equal to
\[g^{\prime}(k)=\log\left(\frac{np_{n}}{1-p_{n}\beta}\right)-\log k.\]
Clearly, \(g^{\prime}(k)>0\) for \(k<\frac{np_{n}}{1-p_{n}\beta}\) and \(g^{\prime}(k)<0\) for \(k>\frac{np_{n}}{1-p_{n}\beta}\). Then \(g(k)\) achieves its maximum at \(k=\frac{np_{n}}{1-p_{n}\beta}\). For \(k\leq\delta_{n}np_{n}\), \(g(k)\leq g(\delta_{n}np_{n})\). Hence
\[\mathbb{P}(d_{i(j)}-1=k)\] \[\leq \exp\left(\delta_{n}np_{n}\log\frac{1}{\delta_{n}(1-p_{n}\beta)} +\delta_{n}np_{n}+n\log(1-p_{n}\beta)\right)\] \[\leq \exp\left(-np_{n}\beta(1+o(1))\right).\]
Since \(M>\frac{e^{2}}{1-p_{n}\beta}\), then \(Mnp_{n}\geq\frac{np_{n}}{1-p_{n}\beta}\). For \(k\geq Mnp_{n}\), \(g(k)\leq g(Mnp_{n})\). Hence
\[\mathbb{P}(d_{i(j)}-1=k)\] \[\leq \exp\left(-M\log(M)np_{n}+Mnp_{n}+(n-2-Mnp_{n})\log(1-p_{n}\beta)\right)\] \[= \exp\left(-M(\log M-1)np_{n}+|n-2-Mnp_{n}|p_{n}\beta\right)\] \[\leq \exp\left(-np_{n}\beta(1+o(1))\right).\]
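The bound (22) underlying this proof can be inspected numerically. The sketch below (ours; the parameters and the uniform choice of weights \(w_{il}\in[\beta,1]\) are assumptions) computes the exact Poisson-Binomial pmf of \(d_{i(j)}-1\) by the standard dynamic-programming recursion and checks it against the binomial-type bound \(\binom{n-2}{k}p_{n}^{k}(1-p_{n}\beta)^{n-2-k}\).

```python
# A numeric illustration (ours; the parameters and the uniform weights are
# assumptions) of the bound (22): the exact Poisson-Binomial pmf of
# d_{i(j)} - 1, computed by dynamic programming, versus the bound
# C(n-2, k) p_n^k (1 - p_n beta)^{n-2-k}.
import numpy as np
from math import comb

rng = np.random.default_rng(4)
n, p, beta = 200, 0.1, 0.5
theta = p * rng.uniform(beta, 1.0, size=n - 2)  # success probabilities p_n w_il

pmf = np.zeros(n - 1)                           # pmf[k] = P(PB(theta) = k)
pmf[0] = 1.0
for q in theta:                                 # standard DP over the trials
    pmf[1:] = pmf[1:] * (1 - q) + pmf[:-1] * q
    pmf[0] *= 1 - q

for k in (0, 5, int(2 * n * p), int(4 * n * p)):
    bound = comb(n - 2, k) * p**k * (1 - p * beta) ** (n - 2 - k)
    print(k, pmf[k] <= bound + 1e-12)           # the bound holds at each k
```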
**Lemma 2**.: _Suppose (\(C1\)) of Assumption 1 holds. For the random graph \(\mathcal{G}_{n}(\beta,W)\), we have_
\[\frac{\sum_{i\neq j}a_{ij}(A_{ij}-p_{n}w_{ij})}{\sqrt{\sum_{i<j}(a_{ij}+a_{ji}) ^{2}p_{n}w_{ij}(1-p_{n}w_{ij})}}\Rightarrow\mathcal{N}(0,1),\]
_where_
\[a_{ij}=\frac{1}{2}f(w_{i(j)},w_{j(i)})+\frac{1}{2}\sum_{l\notin\{i,j\}}p_{n}w _{il}\big{[}f_{x}(w_{i(l)},w_{l(i)})+f_{y}(w_{i(l)},w_{l(i)})\big{]}.\]
**Proof of Lemma 2:** Let
\[\mathcal{Z}_{n}=\frac{\sum_{i\neq j}a_{ij}(A_{ij}-p_{n}w_{ij})}{\sqrt{\sum_{i< j}(a_{ij}+a_{ji})^{2}p_{n}w_{ij}(1-p_{n}w_{ij})}}.\]
Then \(\mathbb{E}[\mathcal{Z}_{n}]=0\). Note that
\[\sum_{i\neq j}a_{ij}(A_{ij}-p_{n}w_{ij})=\sum_{i<j}(a_{ij}+a_{ji})(A_{ij}-p_{n}w _{ij}).\]
Then
\[\mathcal{Z}_{n}=\frac{\sum_{i<j}(a_{ij}+a_{ji})(A_{ij}-p_{n}w_{ij})}{\sqrt{ \sum_{i<j}(a_{ij}+a_{ji})^{2}p_{n}w_{ij}(1-p_{n}w_{ij})}}.\]
Recall that \(A_{ij}(1\leq i<j\leq n)\) are independent. It is easy to verify that
\[\mathbb{E}\left[\left(\sum_{i<j}(a_{ij}+a_{ji})(A_{ij}-p_{n}w_{ij })\right)^{2}\right]\] \[= \sum_{i<j}(a_{ij}+a_{ji})^{2}\mathbb{E}[(A_{ij}-p_{n}w_{ij})^{2}]\] \[= \sum_{i<j}(a_{ij}+a_{ji})^{2}p_{n}w_{ij}(1-p_{n}w_{ij}).\]
Hence \(Var[\mathcal{Z}_{n}]=1\). Straightforward calculation yields
\[\mathbb{E}[(A_{ij}-p_{n}w_{ij})^{4}]=p_{n}w_{ij}[(1-p_{n}w_{ij})^{4}+p_{n}^{3} w_{ij}^{3}(1-p_{n}w_{ij})]\leq 2p_{n}w_{ij}.\]
By \((C1)\) of Assumption 1, we have
\[\frac{\sum_{i<j}(a_{ij}+a_{ji})^{4}\mathbb{E}[(A_{ij}-p_{n}w_{ij}) ^{4}]}{\left(\sum_{i<j}(a_{ij}+a_{ji})^{2}p_{n}w_{ij}(1-p_{n}w_{ij})\right)^{ 2}}\] \[\leq \frac{2\sum_{i<j}(a_{ij}+a_{ji})^{4}p_{n}w_{ij}}{\left(\sum_{i<j}( a_{ij}+a_{ji})^{2}p_{n}w_{ij}(1-p_{n}w_{ij})\right)^{2}}\] \[= o(1).\]
According to the Lyapunov Central Limit Theorem, \(\mathcal{Z}_{n}\) converges in distribution to the standard normal distribution.
### 4.2 Proof of Theorem 1
Let \(k_{0}\) be the integer in Assumption 1. The proof strategy is as follows: first we use the Taylor expansion to expand the function \(f(d_{i(j)},d_{j(i)})\) at \((w_{i(j)},w_{j(i)})\) to the \(k_{0}\)-th order; then we write \(\mathcal{I}_{n}\) as the sum of a leading term and remainder terms; finally we show that the leading term (after suitable scaling) converges in distribution to the standard normal distribution and that the remainder terms are negligible.
By the Taylor expansion, \(f(d_{i(j)},d_{j(i)})\) can be decomposed as
\[f(d_{i(j)},d_{j(i)}) = M_{ij}+S_{ij}+T_{ij}+R_{ij}, \tag{24}\]
where
\[M_{ij} = f(w_{i(j)},w_{j(i)})+f_{x}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})\] \[+f_{y}(w_{i(j)},w_{j(i)})(d_{j(i)}-w_{j(i)}),\] \[S_{ij} = \frac{1}{2}f_{xx}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})^{2}\] \[+\frac{1}{2}f_{yy}(w_{i(j)},w_{j(i)})(d_{j(i)}-w_{j(i)})^{2}\] \[+f_{xy}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})(d_{j(i)}-w_{j(i)}),\] \[T_{ij} = \sum_{k=3}^{k_{0}-1}\sum_{s+t=k}\frac{f^{(s,t)}(w_{i(j)},w_{j(i)} )}{s!t!}(d_{i(j)}-w_{i(j)})^{s}(d_{j(i)}-w_{j(i)})^{t},\] \[R_{ij} = \sum_{s+t=k_{0}}\frac{f^{(s,t)}(X_{i(j)},X_{j(i)})}{s!t!}(d_{i(j) }-w_{i(j)})^{s}(d_{j(i)}-w_{j(i)})^{t},\]
\(X_{i(j)}\) is between \(d_{i(j)}\) and \(w_{i(j)}\), and \(X_{j(i)}\) is between \(d_{j(i)}\) and \(w_{j(i)}\). By (21), the topological index \(\mathcal{I}_{n}\) is equal to
\[\mathcal{I}_{n}=\frac{1}{2}\sum_{i\neq j}M_{ij}A_{ij}+\frac{1}{2}\sum_{i\neq j }S_{ij}A_{ij}+\frac{1}{2}\sum_{i\neq j}T_{ij}A_{ij}+\frac{1}{2}\sum_{i\neq j} R_{ij}A_{ij}. \tag{25}\]
Then
\[\frac{\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}]}{\sigma_{n}} = \frac{\frac{1}{2}\sum_{i\neq j}(M_{ij}A_{ij}-\mathbb{E}[M_{ij}A_{ij}])}{\sigma_{n}}+\frac{\frac{1}{2}\sum_{i\neq j}(S_{ij}A_{ij}-\mathbb{E}[S_{ij}A_{ij}])}{\sigma_{n}} \tag{26}\]
\[+ \frac{\frac{1}{2}\sum_{i\neq j}(T_{ij}A_{ij}-\mathbb{E}[T_{ij}A_{ij}])}{\sigma_{n}}+\frac{\frac{1}{2}\sum_{i\neq j}(R_{ij}A_{ij}-\mathbb{E}[R_{ij}A_{ij}])}{\sigma_{n}},\]
where \(\sigma_{n}^{2}\) is defined in (2).
Next we show that the first term in (26) is the leading term and that the last three terms are negligible.
#### 4.2.1 Asymptotic normality of the first term in (26)
To begin with, we study the first term of (26). Note that
\[\sum_{i\neq j}M_{ij}A_{ij} \tag{27}\] \[= \sum_{i\neq j}f(w_{i(j)},w_{j(i)})A_{ij}+\sum_{i\neq j}f_{x}(w_{i (j)},w_{j(i)})(d_{i(j)}-w_{i(j)})A_{ij}\] \[+\sum_{i\neq j}f_{y}(w_{i(j)},w_{j(i)})(d_{j(i)}-w_{j(i)})A_{ij}.\]
Since \(d_{i(j)}-w_{i(j)}=\sum_{l\notin\{i,j\}}(A_{il}-p_{n}w_{il})\), the quantity \((d_{i(j)}-w_{i(j)})\) does not involve \(A_{ij}\). Hence, \((d_{i(j)}-w_{i(j)})\) and \(A_{ij}\) are independent. Similarly, \(d_{j(i)}-w_{j(i)}\) and \(A_{ij}\) are independent. In addition, \(\mathbb{E}[d_{i(j)}]=w_{i(j)}\) and \(\mathbb{E}[d_{j(i)}]=w_{j(i)}\). Then the expectation of \(\sum_{i\neq j}M_{ij}A_{ij}\) is equal to
\[\mathbb{E}\left[\sum_{i\neq j}M_{ij}A_{ij}\right]=\sum_{i\neq j}f(w_{i(j)},w_{ j(i)})p_{n}w_{ij}. \tag{28}\]
The first term of (27) can be written as
\[\sum_{i\neq j}f(w_{i(j)},w_{j(i)})A_{ij} = \sum_{i\neq j}f(w_{i(j)},w_{j(i)})(A_{ij}-p_{n}w_{ij}) \tag{29}\] \[+\sum_{i\neq j}f(w_{i(j)},w_{j(i)})p_{n}w_{ij}.\]
Similarly, the second term of (27) is written as
\[\sum_{i\neq j}f_{x}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})A_{ij} \tag{30}\] \[= \sum_{i\neq j}f_{x}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})(A_{ij}-p_ {n}w_{ij})\] \[+\sum_{i\neq j}p_{n}w_{ij}f_{x}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i( j)})\] \[= \sum_{i\neq j\neq l}f_{x}(w_{i(j)},w_{j(i)})(A_{il}-p_{n}w_{il})(A _{ij}-p_{n}w_{ij})\] \[+\sum_{i\neq j}\left(\sum_{l\notin\{i,j\}}p_{n}w_{il}f_{x}(w_{i(l )},w_{l(i)})\right)(A_{ij}-p_{n}w_{ij}).\]
We will show that the first term of (30) is of smaller order than the second term. To this end, we compute its second moment. Recall that if \(\{i,j\}\neq\{s,t\}\), \(A_{ij}\) and \(A_{st}\) are independent. Let \(i,j,l\) be three arbitrary distinct indices and \(i_{1},j_{1},l_{1}\) be another three arbitrary distinct indices. If \(\{i,j,l\}\neq\{i_{1},j_{1},l_{1}\}\), then
\[\mathbb{E}[(A_{ij}-p_{n}w_{ij})(A_{il}-p_{n}w_{il})(A_{i_{1}j_{1}}-p_{n}w_{i_{ 1}j_{1}})(A_{i_{1}l_{1}}-p_{n}w_{i_{1}l_{1}})]=0.\]
When \(\{i,j,l\}=\{i_{1},j_{1},l_{1}\}\), it is easy to get
\[\mathbb{E}[(A_{ij}-p_{n}w_{ij})(A_{il}-p_{n}w_{il})(A_{i_{1}j_{1}} -p_{n}w_{i_{1}j_{1}})(A_{i_{1}l_{1}}-p_{n}w_{i_{1}l_{1}})]\] \[= \mathbb{E}[(A_{ij}-p_{n}w_{ij})^{2}(A_{il}-p_{n}w_{il})^{2}]\] \[= p_{n}w_{ij}(1-p_{n}w_{ij})p_{n}w_{il}(1-p_{n}w_{il}).\]
Then the second moment of the first term of (30) can be calculated as follows.
\[\mathbb{E}\left[\left(\sum_{i\neq j\neq l}f_{x}(w_{i(j)},w_{j(i)} )(A_{il}-p_{n}w_{il})(A_{ij}-p_{n}w_{ij})\right)^{2}\right]\] \[= \sum_{\begin{subarray}{c}i\neq j\neq l\\ i_{1}\neq j_{1}\neq l_{1}\end{subarray}}f_{x}(w_{i(j)},w_{j(i)})f_{x}(w_{i_{1} (j_{1})},w_{j_{1}(i_{1})})\]
\[\times\mathbb{E}[(A_{il}-p_{n}w_{il})(A_{ij}-p_{n}w_{ij})(A_{i_{1}l_{1 }}-p_{n}w_{i_{1}l_{1}})(A_{i_{1}j_{1}}-p_{n}w_{i_{1}j_{1}})]\] \[= \sum_{i\neq j\neq l}f_{x}(w_{i(j)},w_{j(i)})^{2}\mathbb{E}[(A_{il}- p_{n}w_{il})^{2}(A_{ij}-p_{n}w_{ij})^{2}]\] \[= \sum_{i\neq j\neq l}f_{x}(w_{i(j)},w_{j(i)})^{2}p_{n}w_{il}(1-p_{n} w_{il})p_{n}w_{ij}(1-p_{n}w_{ij})\] \[= \Theta\left(n^{3}p_{n}^{2}f_{x}(np_{n},np_{n})^{2}\right).\]
By Markov's inequality, it follows that
\[\sum_{i\neq j\neq l}f_{x}(w_{i(j)},w_{j(i)})(A_{il}-p_{n}w_{il})(A _{ij}-p_{n}w_{ij}) \tag{31}\] \[= O_{P}\left(\sqrt{n^{3}p_{n}^{2}f_{x}(np_{n},np_{n})^{2}}\right).\]
Similarly, one has
\[\sum_{i\neq j\neq l}f_{y}(w_{i(j)},w_{j(i)})(A_{il}-p_{n}w_{il})( A_{ij}-p_{n}w_{ij}) \tag{32}\] \[= O_{P}\left(\sqrt{n^{3}p_{n}^{2}f_{y}(np_{n},np_{n})^{2}}\right).\]
Denote
\[a_{ij}=\frac{1}{2}f(w_{i(j)},w_{j(i)})+\frac{1}{2}\sum_{l\notin\{i,j\}}p_{n}w_ {il}\big{[}f_{x}(w_{i(l)},w_{l(i)})+f_{y}(w_{i(l)},w_{l(i)})\big{]}.\]
Then combining (27)-(32) yields
\[\frac{\frac{1}{2}\sum_{i\neq j}(M_{ij}A_{ij}-\mathbb{E}[M_{ij}A_{ ij}])}{\sigma_{n}}\] \[= \frac{\sum_{i\neq j}a_{ij}(A_{ij}-p_{n}w_{ij})}{\sigma_{n}}+O_{P} \left(\sqrt{\frac{n^{3}p_{n}^{2}f_{x}(np_{n},np_{n})^{2}}{\sigma_{n}^{2}}} \right).\]
By Lemma 2 and \((C5)\) of Assumption 1 (let \(s+t=1\)), we conclude that
\[\frac{\frac{1}{2}\sum_{i\neq j}(M_{ij}A_{ij}-\mathbb{E}[M_{ij}A_{ ij}])}{\sigma_{n}}\Rightarrow\mathcal{N}(0,1).\]
Then the proof is complete if the second, third and last terms of (26)
converge to zero in probability.
#### 4.2.2 Bound the second term of (26)
We prove the second term of (26) is equal to \(o_{P}(1)\). By the definition of \(S_{ij}\), we have
\[\sum_{i\neq j}S_{ij}A_{ij} \tag{33}\] \[= \frac{1}{2}\sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i( j)})^{2}A_{ij}\] \[+\frac{1}{2}\sum_{i\neq j}f_{yy}(w_{i(j)},w_{j(i)})(d_{j(i)}-w_{j (i)})^{2}A_{ij}\] \[+\sum_{i\neq j}f_{xy}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})(d_{j(i )}-w_{j(i)})A_{ij}.\]
Then
\[\mathbb{E}\left[\sum_{i\neq j}S_{ij}A_{ij}\right] \tag{34}\] \[= \frac{1}{2}\sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})\mathbb{E} \left[(d_{i(j)}-w_{i(j)})^{2}\right]p_{n}w_{ij}\] \[+\frac{1}{2}\sum_{i\neq j}f_{yy}(w_{i(j)},w_{j(i)})\mathbb{E} \left[(d_{j(i)}-w_{j(i)})^{2}\right]p_{n}w_{ij}\] \[= \frac{1}{2}\sum_{i\neq j\neq l}f_{xx}(w_{i(j)},w_{j(i)})p_{n}w_{ il}(1-p_{n}w_{il})p_{n}w_{ij}\] \[+\frac{1}{2}\sum_{i\neq j\neq l}f_{yy}(w_{i(j)},w_{j(i)})p_{n}w_{ jl}(1-p_{n}w_{jl})p_{n}w_{ij}.\]
The first term of (33) can be expressed as
\[\sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})^{2}A_{ij}\] \[= \sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})^{2}(A_ {ij}-p_{n}w_{ij})\]
\[+\sum_{i\neq j}p_{n}w_{ij}f_{xx}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)} )^{2}. \tag{35}\]
We will find an upper bound of (35). Note that
\[\sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})^{2}(A_{ ij}-p_{n}w_{ij}) \tag{36}\] \[= \sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})(A_{ij}-p_{n}w_{ij})\sum_{ \begin{subarray}{c}s\neq t\\ s,t\notin\{i,j\}\end{subarray}}(A_{is}-p_{n}w_{is})(A_{it}-p_{n}w_{it})\] \[+\sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})(A_{ij}-p_{n}w_{ij})\sum_ {s\notin\{i,j\}}(A_{is}-p_{n}w_{is})^{2}.\]
The second moment of the first term of (36) is equal to
\[\mathbb{E}\Bigg{[}\sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})(A_{ij}- p_{n}w_{ij}) \tag{37}\] \[\times\sum_{\begin{subarray}{c}s\neq t\\ s,t\notin\{i,j\}\end{subarray}}(A_{is}-p_{n}w_{is})(A_{it}-p_{n}w_{it})\Bigg{]} ^{2}\] \[= \sum_{\begin{subarray}{c}i\neq j,s\neq t\\ s,t\notin\{i,j\}\end{subarray}}f_{xx}(w_{i(j)},w_{j(i)})^{2}\mathbb{E}\big{[}( A_{ij}-p_{n}w_{ij})^{2}\] \[\times(A_{is}-p_{n}w_{is})^{2}(A_{it}-p_{n}w_{it})^{2}\big{]}\] \[= \sum_{\begin{subarray}{c}i\neq j,s\neq t\\ s,t\notin\{i,j\}\end{subarray}}f_{xx}(w_{i(j)},w_{j(i)})^{2}p_{n}w_{ij}(1-p_{n }w_{ij})\] \[\times p_{n}w_{is}(1-p_{n}w_{is})p_{n}w_{it}(1-p_{n}w_{it})\] \[= \Theta\left(n^{4}p_{n}^{3}f_{xx}(np_{n},np_{n})^{2}\right).\]
The second moment of the second term of (36) is equal to
\[\mathbb{E}\left[\sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})(A_{ij}-p_{n}w_{ij})\sum_{s\notin\{i,j\}}(A_{is}-p_{n}w_{is})^{2}\right]^{2} \tag{38}\]
\[= \sum_{i\neq j\neq s}f_{xx}(w_{i(j)},w_{j(i)})^{2}\mathbb{E}\left[(A_{ij}-p_{n}w_{ij})^{2}(A_{is}-p_{n}w_{is})^{4}\right]\]
\[= \Theta\left(n^{4}p_{n}^{3}f_{xx}(np_{n},np_{n})^{2}\right).\]
The second term of (35), after centering at its expectation, can be decomposed as
\[\sum_{i\neq j}p_{n}w_{ij}f_{xx}(w_{i(j)},w_{j(i)})\left[(d_{i(j)}-w_{i(j)})^{2}-\mathbb{E}\left[(d_{i(j)}-w_{i(j)})^{2}\right]\right] \tag{39}\]
\[= \sum_{i\neq j}p_{n}w_{ij}f_{xx}(w_{i(j)},w_{j(i)})\sum_{\begin{subarray}{c}s\neq t\\ s,t\notin\{i,j\}\end{subarray}}(A_{is}-p_{n}w_{is})(A_{it}-p_{n}w_{it})\]
\[+\sum_{i\neq j}p_{n}w_{ij}f_{xx}(w_{i(j)},w_{j(i)})\sum_{s\notin\{i,j\}}\left[(A_{is}-p_{n}w_{is})^{2}-\mathbb{E}\left[(A_{is}-p_{n}w_{is})^{2}\right]\right].\]
The second moment of the first term of (39) is equal to
\[\mathbb{E}\left[\sum_{i\neq j}p_{n}w_{ij}f_{xx}(w_{i(j)},w_{j(i)})\sum_{\begin{subarray}{c}s\neq t\\ s,t\notin\{i,j\}\end{subarray}}(A_{is}-p_{n}w_{is})(A_{it}-p_{n}w_{it})\right]^{2}\]
\[= \sum_{\begin{subarray}{c}i\neq j,i\neq j_{1},s\neq t\\ s,t\notin\{i,j,j_{1}\}\end{subarray}}p_{n}^{2}w_{ij}w_{ij_{1}}f_{xx}(w_{i(j)},w_{j(i)})f_{xx}(w_{i(j_{1})},w_{j_{1}(i)})\]
\[\times\mathbb{E}\left[(A_{is}-p_{n}w_{is})^{2}(A_{it}-p_{n}w_{it})^{2}\right]\]
\[+\sum_{\begin{subarray}{c}i\neq j,s\neq t\\ s,t\notin\{i,j\}\end{subarray}}p_{n}^{2}w_{ij}^{2}f_{xx}(w_{i(j)},w_{j(i)})^{2}\mathbb{E}\left[(A_{is}-p_{n}w_{is})^{2}(A_{it}-p_{n}w_{it})^{2}\right] \tag{40}\]
\[= \sum_{\begin{subarray}{c}i\neq j,i\neq j_{1},s\neq t\\ s,t\notin\{i,j,j_{1}\}\end{subarray}}p_{n}^{2}w_{ij}w_{ij_{1}}f_{xx}(w_{i(j)},w_{j(i)})f_{xx}(w_{i(j_{1})},w_{j_{1}(i)})\]
\[\times p_{n}w_{is}(1-p_{n}w_{is})p_{n}w_{it}(1-p_{n}w_{it})\]
\[+\sum_{\begin{subarray}{c}i\neq j,s\neq t\\ s,t\notin\{i,j\}\end{subarray}}p_{n}^{2}w_{ij}^{2}f_{xx}(w_{i(j)},w_{j(i)})^{2}p_{n}w_{is}(1-p_{n}w_{is})p_{n}w_{it}(1-p_{n}w_{it})\]
\[= \Theta\left(n^{5}p_{n}^{4}f_{xx}(np_{n},np_{n})^{2}\right).\]
The second moment of the second term of (39) is equal to
\[\mathbb{E}\Bigg{[}\sum_{i\neq j}p_{n}w_{ij}f_{xx}(w_{i(j)},w_{j(i )}) \tag{41}\] \[\times\sum_{s\notin\{i,j\}}\left[(A_{is}-p_{n}w_{is})^{2}-\mathbb{ E}\left[(A_{is}-p_{n}w_{is})^{2}\right]\right]\Bigg{]}^{2}\] \[= \sum_{\begin{subarray}{c}i\neq j\neq s\\ i\neq j_{1}\neq s\end{subarray}}p_{n}w_{ij}f_{xx}(w_{i(j)},w_{j(i)})p_{n}w_{ ij_{1}}f_{xx}(w_{i(j_{1})},w_{j_{1}(i)})\] \[\times\mathbb{E}\left[\left((A_{is}-p_{n}w_{is})^{2}-\mathbb{E} \left[(A_{is}-p_{n}w_{is})^{2}\right]\right)^{2}\right]\] \[+\sum_{i\neq j\neq s}p_{n}^{2}w_{ij}^{2}f_{xx}(w_{i(j)},w_{j(i)}) ^{2}\] \[\times\mathbb{E}\left[\left((A_{is}-p_{n}w_{is})^{2}-\mathbb{E} \left[(A_{is}-p_{n}w_{is})^{2}\right]\right)^{2}\right]\] \[= \Theta\left(n^{4}p_{n}^{3}f_{xx}(np_{n},np_{n})^{2}\right).\]
By Markov's inequality and equations (35)-(41), we have
\[\sum_{i\neq j}f_{xx}(w_{i(j)},w_{j(i)})\left[(d_{i(j)}-w_{i(j)})^{ 2}A_{ij}-\mathbb{E}\left[(d_{i(j)}-w_{i(j)})^{2}\right]A_{ij}\right] \tag{42}\] \[= O_{P}\left(\sqrt{n^{5}p_{n}^{4}f_{xx}(np_{n},np_{n})^{2}}\right).\]
Similarly, one has
\[\sum_{i\neq j}f_{yy}(w_{i(j)},w_{j(i)})\left[(d_{j(i)}-w_{j(i)})^{2}A_{ij}-\mathbb{E}\left[(d_{j(i)}-w_{j(i)})^{2}\right]A_{ij}\right] \tag{43}\] \[= O_{P}\left(\sqrt{n^{5}p_{n}^{4}f_{yy}(np_{n},np_{n})^{2}}\right).\]
Now we bound the third term of (33). Note that
\[\sum_{i\neq j}f_{xy}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})(d_{j(i) }-w_{j(i)})A_{ij} \tag{44}\] \[= \sum_{i\neq j}f_{xy}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})(d_{j(i) }-w_{j(i)})(A_{ij}-p_{n}w_{ij})\] \[+\sum_{i\neq j}f_{xy}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})(d_{j(i )}-w_{j(i)})p_{n}w_{ij}\] \[= \sum_{\begin{subarray}{c}i\neq j,s\neq j\\ t\neq i\end{subarray}}f_{xy}(w_{i(j)},w_{j(i)})(A_{ij}-p_{n}w_{ij})(A_{is}-p_ {n}w_{is})(A_{jt}-p_{n}w_{jt})\] \[+\sum_{\begin{subarray}{c}i\neq j,s\neq j\\ t\neq i\end{subarray}}f_{xy}(w_{i(j)},w_{j(i)})p_{n}w_{ij}(A_{is}-p_{n}w_{is}) (A_{jt}-p_{n}w_{jt}).\]
The second moment of the first term of (44) is equal to
\[\mathbb{E}\Bigg{[}\sum_{i\neq j,s\neq j,t\neq i}f_{xy}(w_{i(j)},w _{j(i)})(A_{ij}-p_{n}w_{ij})\] \[\times(A_{is}-p_{n}w_{is})(A_{jt}-p_{n}w_{jt})\Bigg{]}^{2}\] \[= \sum_{\begin{subarray}{c}i\neq j,s\neq j,\\ t\neq i,s\neq t\end{subarray}}f_{xy}(w_{i(j)},w_{j(i)})^{2}\mathbb{E}\big{[}( A_{ij}-p_{n}w_{ij})^{2}\] \[\times(A_{is}-p_{n}w_{is})^{2}(A_{jt}-p_{n}w_{jt})^{2}\big{]}\] \[+\sum_{i\neq j,s\neq j,}f_{xy}(w_{i(j)},w_{j(i)})^{2}\mathbb{E} \big{[}(A_{ij}-p_{n}w_{ij})^{2}\] \[\times(A_{is}-p_{n}w_{is})^{2}(A_{js}-p_{n}w_{js})^{2}\big{]}\] \[= \sum_{\begin{subarray}{c}i\neq j,s\neq j,\\ t\neq i,s\neq t\end{subarray}}f_{xy}(w_{i(j)},w_{j(i)})^{2}p_{n}w_{ij}(1-p_{n}w _{ij})\]
\[\times p_{n}w_{is}(1-p_{n}w_{is})p_{n}w_{jt}(1-p_{n}w_{jt})\] \[+\sum_{\begin{subarray}{c}i\neq j,\\ s\neq j\end{subarray}}f_{xy}(w_{i(j)},w_{j(i)})^{2}p_{n}w_{ij}(1-p_{n}w_{ij})\] \[\times p_{n}w_{is}(1-p_{n}w_{is})p_{n}w_{js}(1-p_{n}w_{js})\] \[= \Theta\left(n^{4}p_{n}^{3}f_{xy}(np_{n},np_{n})^{2}\right). \tag{45}\]
The second moment of the second term of (44) is equal to
\[\mathbb{E}\left[\sum_{i\neq j,s\neq j,t\neq i}f_{xy}(w_{i(j)},w_{j (i)})p_{n}w_{ij}(A_{is}-p_{n}w_{is})(A_{jt}-p_{n}w_{jt})\right]^{2} \tag{46}\] \[= \sum_{\begin{subarray}{c}i\neq j,s\neq j\\ t\neq i,s\neq t\end{subarray}}f_{xy}(w_{i(j)},w_{j(i)})^{2}p_{n}^{2}w_{ij}^{2} \mathbb{E}\left[(A_{is}-p_{n}w_{is})^{2}(A_{jt}-p_{n}w_{jt})^{2}\right]\] \[+\sum_{i\neq j,s\neq j}f_{xy}(w_{i(j)},w_{j(i)})^{2}p_{n}^{2}w_{ij }^{2}\mathbb{E}\left[(A_{is}-p_{n}w_{is})^{2}(A_{js}-p_{n}w_{js})^{2}\right]\] \[= \sum_{\begin{subarray}{c}i\neq j,s\neq j\\ t\neq i,s\neq t\end{subarray}}f_{xy}(w_{i(j)},w_{j(i)})^{2}p_{n}^{2}w_{ij}^{2} p_{n}w_{is}(1-p_{n}w_{is})p_{n}w_{jt}(1-p_{n}w_{jt})\] \[+\sum_{i\neq j,s}f_{xy}(w_{i(j)},w_{j(i)})^{2}p_{n}^{2}w_{ij}^{2} p_{n}w_{is}(1-p_{n}w_{is})p_{n}w_{js}(1-p_{n}w_{js})\] \[= \Theta\left(n^{4}p_{n}^{4}f_{xy}(np_{n},np_{n})^{2}\right).\]
Combining (44), (45) and (46) yields
\[\sum_{i\neq j}f_{xy}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})(d_{j(i )}-w_{j(i)})A_{ij} \tag{47}\] \[= O_{P}\left(\sqrt{n^{4}p_{n}^{3}f_{xy}(np_{n},np_{n})^{2}}\right).\]
By (42), (43), (47) and \((C5)\) of Assumption 1 (let \(s+t=2\)), we get
\[\sum_{i\neq j}(S_{ij}A_{ij}-\mathbb{E}\left[S_{ij}A_{ij}\right])=\] \[O_{P}\left(\sqrt{n^{5}p_{n}^{4}[f_{xx}(np_{n},np_{n})^{2}+f_{yy} (np_{n},np_{n})^{2}]+n^{4}p_{n}^{3}f_{xy}(np_{n},np_{n})^{2}}\right)\] \[=o_{P}(\sigma_{n}).\]
Hence the second term of (26) is equal to \(o_{P}(1)\).
#### 4.2.3 Bound the third term of (26)
Now we prove the third term of (26) converges in probability to zero. This is the most complex part of the proof. Note that
\[\sum_{i\neq j}T_{ij}A_{ij} \tag{49}\] \[= \sum_{k=3}^{k_{0}-1}\sum_{s+t=k}\sum_{i\neq j}\frac{f^{(s,t)}(w_{ i(j)},w_{j(i)})}{s!t!}(d_{i(j)}-w_{i(j)})^{s}(d_{j(i)}-w_{j(i)})^{t}A_{ij}\] \[= \sum_{k=3}^{k_{0}-1}\sum_{s+t=k}\sum_{i\neq j}\frac{f^{(s,t)}(w_{ i(j)},w_{j(i)})}{s!t!}(d_{i(j)}-w_{i(j)})^{s}(d_{j(i)}-w_{j(i)})^{t}\] \[\times(A_{ij}-p_{n}w_{ij})\] \[+\sum_{k=3}^{k_{0}-1}\sum_{s+t=k}\sum_{i\neq j}\frac{f^{(s,t)}(w_ {i(j)},w_{j(i)})}{s!t!}(d_{i(j)}-w_{i(j)})^{s}\] \[\times(d_{j(i)}-w_{j(i)})^{t}p_{n}w_{ij}.\]
Next we bound the second moment of the first term of (49) and the variance of the second term. Since \(k_{0}\) is a fixed finite integer, the quantities \(s!,t!\) in (49) are bounded, and we will ignore them in the subsequent analysis for simplicity. Given a finite integer \(k_{0}\geq 4\), there are finitely many non-negative integers \(s,t\) such that \(s+t=k\) for any \(k=3,4,\ldots,k_{0}-1\) (if \(k_{0}=3\), the sums in (49) are empty and there is nothing to prove). Hence we only need to bound the second moment of
\[\sum_{i\neq j}f^{(s,t)}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})^{s}(d_{j(i)}-w_{ j(i)})^{t}(A_{ij}-p_{n}w_{ij}), \tag{50}\]
and the variance of
\[\sum_{i\neq j}f^{(s,t)}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})^{s}(d_{j(i)}-w_{ j(i)})^{t}p_{n}w_{ij}, \tag{51}\]
where \(s,t\) are given non-negative integers with \(s+t=k\) for \(k=3,4,\ldots,k_{0}-1\).
We consider the variance of (51) first. Fix integer \(k\in\{3,4,\ldots,k_{0}-1\}\)
and integers \(s,t\in\{0,1,2,\ldots,k\}\) satisfying \(s+t=k\). For positive integers \(r\leq s\) and \(v\leq t\), let \(\lambda_{r;1},\lambda_{r;2},\ldots,\lambda_{r;r}\), \(\gamma_{v;1},\gamma_{v;2},\ldots,\gamma_{v;v}\) be positive integers such that \(\lambda_{r;1}+\lambda_{r;2}+\cdots+\lambda_{r;r}=s\) and \(\gamma_{v;1}+\gamma_{v;2}+\cdots+\gamma_{v;v}=t\). Given indices \(i,j\), we have
\[(d_{i(j)}-w_{i(j)})^{s} = \sum_{j_{1},j_{2},\ldots,j_{s}\notin\{i,j\}}\prod_{l=1}^{s}(A_{ ij_{l}}-p_{n}w_{ij_{l}})\] \[= \sum_{r=1}^{s}\sum_{\begin{subarray}{c}j_{1},j_{2},\ldots,j_{r} \notin\{i,j\}\\ j_{1}\neq j_{2}\neq\ldots\neq j_{r}\end{subarray}}\prod_{l=1}^{r}(A_{ij_{l}}-p _{n}w_{ij_{l}})^{\lambda_{r;l}},\] \[(d_{j(i)}-w_{j(i)})^{t} = \sum_{i_{1},i_{2},\ldots,i_{t}\notin\{i,j\}}\prod_{l=1}^{t}(A_{ ji_{l}}-p_{n}w_{ji_{l}}) \tag{52}\] \[= \sum_{v=1}^{t}\sum_{\begin{subarray}{c}i_{1},i_{2},\ldots,i_{v} \notin\{i,j\}\\ i_{1}\neq i_{2}\neq\ldots\neq i_{v}\end{subarray}}\prod_{l=1}^{v}(A_{ji_{l}}-p _{n}w_{ji_{l}})^{\gamma_{v;l}}.\]
Then (51) can be written as
\[\sum_{i\neq j}p_{n}w_{ij}f^{(s,t)}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})^{s}(d _{j(i)}-w_{j(i)})^{t}=\sum_{r=1}^{s}\sum_{v=1}^{t}V_{rv},\]
where
\[V_{rv} = \sum_{\begin{subarray}{c}i\neq j\\ j_{1},\ldots,j_{r}\notin\{i,j\}\\ j_{1}\neq j_{2}\neq\ldots\neq j_{r}\\ i_{1},\ldots,i_{v}\notin\{i,j\}\\ i_{1}\neq i_{2}\neq\ldots\neq i_{v}\end{subarray}}p_{n}w_{ij}f^{(s,t)}(w_{i(j) },w_{j(i)})\prod_{l=1}^{r}(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l}} \tag{53}\] \[\times\prod_{m=1}^{v}(A_{ji_{m}}-p_{n}w_{ji_{m}})^{\gamma_{v;m}}.\]
Note that
\[Var\left(\sum_{r=1}^{s}\sum_{v=1}^{t}V_{rv}\right) = \sum_{r=1}^{s}\sum_{v=1}^{t}\sum_{r_{1}=1}^{s}\sum_{v_{1}=1}^{t} Cov(V_{rv},V_{r_{1}v_{1}})\]
\[\leq \sum_{r=1}^{s}\sum_{v=1}^{t}\sum_{r_{1}=1}^{s}\sum_{v_{1}=1}^{t} \Big{(}Var(V_{rv})+Var(V_{r_{1}v_{1}})\Big{)},\]
and since \(s,t\) are finite non-negative integers, it suffices to bound \(Var(V_{rv})\) for each given \(r,v\). Fix \(r\in\{1,2,\ldots,s\}\) and \(v\in\{1,2,\ldots,t\}\). There are two cases: (I) there exists \(l_{0}\in\{1,2,\ldots,r\}\) or \(m_{0}\in\{1,2,\ldots,v\}\) such that \(\lambda_{r;l_{0}}=1\) or \(\gamma_{v;m_{0}}=1\); (II) \(\lambda_{r;l}\geq 2\) for all \(l\in\{1,2,\ldots,r\}\) and \(\gamma_{v;m}\geq 2\) for all \(m\in\{1,2,\ldots,v\}\).
We study case (I) first. Suppose there are some \(\lambda_{r;l}\) or \(\gamma_{v;m}\) which are equal to one. Without loss of generality, let \(\lambda_{r;1}=\lambda_{r;2}=\cdots=\lambda_{r;r_{0}}=1\) and \(\lambda_{r;l}\geq 2\) for \(l\in\{r_{0}+1,\ldots,r\}\), \(\gamma_{v;1}=\gamma_{v;2}=\cdots=\gamma_{v;v_{0}}=1\) and \(\gamma_{v;l}\geq 2\) for \(l\in\{v_{0}+1,\ldots,v\}\). Here, either \(r_{0}\geq 1\) or \(v_{0}\geq 1\). Without loss of generality, let \(r_{0}\geq 1\). In this case,
\[\prod_{l=1}^{r}(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l}} \tag{54}\] \[= \left(\prod_{l=1}^{r_{0}}(A_{ij_{l}}-p_{n}w_{ij_{l}})\right) \left(\prod_{l=r_{0}+1}^{r}(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l}}\right),\]
\[\prod_{m=1}^{v}(A_{ji_{m}}-p_{n}w_{ji_{m}})^{\gamma_{v;m}} \tag{55}\] \[= \left(\prod_{m=1}^{v_{0}}(A_{ji_{m}}-p_{n}w_{ji_{m}})\right)\left( \prod_{m=v_{0}+1}^{v}(A_{ji_{m}}-p_{n}w_{ji_{m}})^{\gamma_{v;m}}\right),\]
and \(\mathbb{E}[V_{rv}]=0\). Then \(Var(V_{rv})=\mathbb{E}[V_{rv}^{2}]\). For convenience, denote \(\bar{A}_{ij}=A_{ij}-p_{n}w_{ij}\). By (53), (54) and (55), we have
\[\mathbb{E}[V_{rv}^{2}] \tag{56}\]
\[= \sum_{\begin{subarray}{c}i\neq j\\ j_{1},\ldots,j_{r}\notin\{i,j\}\\ j_{1}\neq j_{2}\neq\ldots\neq j_{r}\\ i_{1},\ldots,i_{v}\notin\{i,j\}\\ i_{1}\neq i_{2}\neq\ldots\neq i_{v}\end{subarray}}\sum_{\begin{subarray}{c}i^{\prime}\neq j^{\prime}\\ j^{\prime}_{1},\ldots,j^{\prime}_{r}\notin\{i^{\prime},j^{\prime}\}\\ j^{\prime}_{1}\neq j^{\prime}_{2}\neq\ldots\neq j^{\prime}_{r}\\ i^{\prime}_{1},\ldots,i^{\prime}_{v}\notin\{i^{\prime},j^{\prime}\}\\ i^{\prime}_{1}\neq i^{\prime}_{2}\neq\ldots\neq i^{\prime}_{v}\end{subarray}}p_{n}w_{ij}f^{(s,t)}(w_{i(j)},w_{j(i)})\]
\[\times p_{n}w_{i^{\prime}j^{\prime}}f^{(s,t)}(w_{i^{\prime}(j^{\prime})},w_{j^{\prime}(i^{\prime})})\]
\[\times\mathbb{E}\Bigg{[}\left(\prod_{l=1}^{r_{0}}\bar{A}_{ij_{l}}\right)\left(\prod_{l=r_{0}+1}^{r}\bar{A}_{ij_{l}}^{\lambda_{r;l}}\right)\left(\prod_{l=1}^{r_{0}}\bar{A}_{i^{\prime}j^{\prime}_{l}}\right)\left(\prod_{l=r_{0}+1}^{r}\bar{A}_{i^{\prime}j^{\prime}_{l}}^{\lambda_{r;l}}\right)\]
\[\times\left(\prod_{m=1}^{v_{0}}\bar{A}_{ji_{m}}\right)\left(\prod_{m=v_{0}+1}^{v}\bar{A}_{ji_{m}}^{\gamma_{v;m}}\right)\left(\prod_{m=1}^{v_{0}}\bar{A}_{j^{\prime}i^{\prime}_{m}}\right)\left(\prod_{m=v_{0}+1}^{v}\bar{A}_{j^{\prime}i^{\prime}_{m}}^{\gamma_{v;m}}\right)\Bigg{]}.\]
Next we find an upper bound of (56). Recall that \(i\neq j\) and \(i^{\prime}\neq j^{\prime}\). We shall decompose the summation in (56) into six cases: \(i\neq i^{\prime}\) and \(j=j^{\prime}\); \(i=i^{\prime}\) and \(j\neq j^{\prime}\); \(i\neq j^{\prime}\) and \(j=i^{\prime}\); \(i=j^{\prime}\) and \(j\neq i^{\prime}\); \(\{i,j\}=\{i^{\prime},j^{\prime}\}\); \(\{i,j\}\cap\{i^{\prime},j^{\prime}\}=\emptyset\). For convenience, denote the expectation in (56) as \(E\).
Firstly, we consider the case \(\{i,j\}=\{i^{\prime},j^{\prime}\}\). There are two scenarios: (i) \(i=i^{\prime}\) and \(j=j^{\prime}\), (ii) \(i=j^{\prime}\) and \(j=i^{\prime}\).
Consider (i) first. In this case, \(A_{ij_{l}}\), \(A_{ij^{\prime}_{l}}\) are independent of \(A_{ji_{m}}\), \(A_{ji^{\prime}_{m}}\). Then
\[E = \mathbb{E}\Bigg{[}\left(\prod_{l=1}^{r_{0}}\bar{A}_{ij_{l}}\bar{A }_{ij^{\prime}_{l}}\right)\left(\prod_{l=r_{0}+1}^{r}\bar{A}_{ij_{l}}^{\lambda _{r;l}}\bar{A}_{ij^{\prime}_{l}}^{\lambda_{r;l}}\right)\Bigg{]} \tag{57}\] \[\times\mathbb{E}\Bigg{[}\left(\prod_{m=1}^{r_{0}}\bar{A}_{ji_{m}} \bar{A}_{ji^{\prime}_{m}}\right)\left(\prod_{m=v_{0}+1}^{v}\bar{A}_{ji_{m}}^{ \gamma_{v;m}}\bar{A}_{ji^{\prime}_{m}}^{\gamma_{v;m}}\right)\Bigg{]}.\]
Recall that \(j_{1},j_{2},\ldots,j_{r}\) are mutually distinct and \(j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{\prime}_{r}\) are mutually distinct. Moreover, \(\mathbb{E}[\bar{A}_{ij_{l}}]=0\) for all \(l=1,2,\ldots,r\). If there exists an index \(j_{l_{1}}\) with \(1\leq l_{1}\leq r_{0}\) such that \(j_{l_{1}}\notin\{j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{\prime}_{r}\}\), then
\[\mathbb{E}\Bigg{[}\left(\prod_{l=1}^{r_{0}}\bar{A}_{ij_{l}}\bar{A }_{ij^{\prime}_{l}}\right)\left(\prod_{l=r_{0}+1}^{r}\bar{A}_{ij_{l}}^{\lambda _{r;l}}\bar{A}_{ij^{\prime}_{l}}^{\lambda_{r;l}}\right)\Bigg{]} \tag{58}\] \[= \mathbb{E}[\bar{A}_{ij_{l_{1}}}]\mathbb{E}\Bigg{[}\left(\prod_{l=1,j_{l}\neq j_{l_{1}}}^{r_{0}}\bar{A}_{ij_{l}}\right)\left(\prod_{l=1}^{r_{0}} \bar{A}_{ij^{\prime}_{l}}\right)\left(\prod_{l=r_{0}+1}^{r}\bar{A}_{ij_{l}}^{ \lambda_{r;l}}\bar{A}_{ij^{\prime}_{l}}^{\lambda_{r;l}}\right)\Bigg{]}\] \[= 0.\]
Hence \(E=0\) by (57). Similarly, if there exists an index \(l_{1}\) with \(1\leq l_{1}\leq r_{0}\) such that \(j^{\prime}_{l_{1}}\notin\{j_{1},j_{2},\ldots,j_{r}\}\), then \(E=0\). In addition, if there is an index \(m_{1}\) with \(1\leq m_{1}\leq v_{0}\) such that \(i_{m_{1}}\notin\{i^{\prime}_{1},i^{\prime}_{2},\ldots,i^{\prime}_{v}\}\) or
\(\{i_{1},i_{2},\ldots,i_{v}\}\), then \(E=0\). Consequently, \(E\neq 0\) implies the following
\[\{j_{1},j_{2},\ldots,j_{r_{0}}\}\subset\{j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{ \prime}_{r}\}, \{j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{\prime}_{r_{0}}\}\subset\{j_{1}, j_{2},\ldots,j_{r}\},\]
\[\{i_{1},i_{2},\ldots,i_{v_{0}}\}\subset\{i^{\prime}_{1},i^{\prime}_{2},\ldots, i^{\prime}_{v}\}, \{i^{\prime}_{1},i^{\prime}_{2},\ldots,i^{\prime}_{v_{0}}\}\subset\{i_{1},i_{ 2},\ldots,i_{v}\}.\]
Without loss of generality, suppose
\[\{j_{1},j_{2},\ldots,j_{r_{1}}\}=\{j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{ \prime}_{r_{1}}\}, \{j_{r_{1}+1},\ldots,j_{r}\}\cap\{j^{\prime}_{r_{1}+1},\ldots,j^{ \prime}_{r}\}=\emptyset, \tag{59}\]
\[\{i_{1},i_{2},\ldots,i_{v_{1}}\}=\{i^{\prime}_{1},i^{\prime}_{2},\ldots,i^{ \prime}_{v_{1}}\}, \{i_{v_{1}+1},\ldots,i_{v}\}\cap\{i^{\prime}_{v_{1}+1},\ldots,i^{ \prime}_{v}\}=\emptyset, \tag{60}\]
for some \(r_{1}\) (\(r_{0}\leq r_{1}\leq r\)) and \(v_{1}\) (\(v_{0}\leq v_{1}\leq v\)). There are at most \(n^{2+2r-r_{1}+2v-v_{1}}\) possible choices for the indices \(i,j,i_{1},\ldots,i_{v},j_{1},\ldots,j_{r}\), \(i^{\prime}\), \(j^{\prime}\), \(i^{\prime}_{1}\), \(\ldots\), \(i^{\prime}_{v}\), \(j^{\prime}_{1}\),\(\ldots\),\(j^{\prime}_{r}\) satisfying \(i=i^{\prime}\), \(j=j^{\prime}\), (59) and (60). Let \(\sigma_{1}\) be a permutation of \(\{1,2,\ldots,r_{1}\}\) such that \(j_{l}=j^{\prime}_{\sigma_{1}(l)}\) and \(\sigma_{2}\) be a permutation of \(\{1,2,\ldots,v_{1}\}\) such that \(i_{m}=i^{\prime}_{\sigma_{2}(m)}\). The numbers of the permutations \(\sigma_{1}\) and \(\sigma_{2}\) are \(r_{1}!\) and \(v_{1}!\) respectively. Then
\[\mathbb{E}\Bigg{[}\left(\prod_{l=1}^{r_{0}}\bar{A}_{ij_{l}}\bar{ A}_{ij^{\prime}_{l}}\right)\left(\prod_{l=r_{0}+1}^{r}\bar{A}_{ij_{l}}^{ \lambda_{r;l}}\bar{A}_{ij^{\prime}_{l}}^{\lambda_{r;l}}\right)\Bigg{]} \tag{61}\] \[= \mathbb{E}\Bigg{[}\left(\prod_{l=1}^{r_{1}}\bar{A}_{ij_{l}}^{ \lambda_{r;l}+\lambda_{r;\sigma_{1}(l)}}\right)\Bigg{]}\left(\prod_{l=r_{1}+1 }^{r}\mathbb{E}[\bar{A}_{ij_{l}}^{\lambda_{r;l}}]\mathbb{E}[\bar{A}_{ij^{ \prime}_{l}}^{\lambda_{r;l}}]\right)\] \[= O\left(p_{n}^{2r-r_{1}}\right).\]
Similarly, we have
\[\mathbb{E}\Bigg{[}\left(\prod_{m=1}^{v_{0}}\bar{A}_{ji_{m}}\bar{A }_{ji^{\prime}_{m}}\right)\left(\prod_{m=v_{0}+1}^{v}\bar{A}_{ji_{m}}^{\gamma_ {v;m}}\bar{A}_{ji^{\prime}_{m}}^{\gamma_{v;m}}\right)\Bigg{]} \tag{62}\] \[= \mathbb{E}\Bigg{[}\left(\prod_{m=1}^{v_{1}}\bar{A}_{ji_{l}}^{ \gamma_{v;m}+\gamma_{v;\sigma_{2}(m)}}\right)\Bigg{]}\left(\prod_{m=v_{1}+1}^{ v}\mathbb{E}[\bar{A}_{ji_{m}}^{\gamma_{v;m}}]\mathbb{E}[\bar{A}_{ji^{\prime}_{m}}^{ \gamma_{v;m}}]\right)\] \[= O\left(p_{n}^{2v-v_{1}}\right).\]
Note that \(2(r+v)-(r_{1}+v_{1})\leq 2(s+t)-1\). By (56), (57), (61), (62) and \((C5)\) of Assumption 1, the sum in (56) over the indices \(i=i^{\prime}\), \(j=j^{\prime}\), (59)
and (60) is bounded by
\[(np_{n})^{2+2r-r_{1}+2v-v_{1}}f^{(s,t)}(np_{n},np_{n})^{2} \leq (np_{n})(np_{n})^{2(s+t)}f^{(s,t)}(np_{n},np_{n})^{2} \tag{63}\] \[= o\left(\sigma_{n}^{2}\right).\]
Consider case (ii) \(i=j^{\prime}\) and \(j=i^{\prime}\). If there is an index \(l_{1}\) with \(1\leq l_{1}\leq r_{0}\) such that \(j_{l_{1}}\notin\{i^{\prime}_{1},\ldots,i^{\prime}_{v}\}\), then \(E=0\) (the same argument as in (58)). If there is an index \(m_{1}\) with \(1\leq m_{1}\leq v_{0}\) such that \(i^{\prime}_{m_{1}}\notin\{j_{1},\ldots,j_{r}\}\), then \(E=0\) (the same argument as in (58)). Then \(E\neq 0\) implies the following
\[\{j_{1},j_{2},\ldots,j_{r_{0}}\}\subset\{i^{\prime}_{1},i^{\prime}_{2},\ldots,i^{\prime}_{v}\},\ \ \ \ \{i^{\prime}_{1},i^{\prime}_{2},\ldots,i^{\prime}_{v_{0}}\}\subset\{j_{1},j_ {2},\ldots,j_{r}\},\]
\[\{i_{1},i_{2},\ldots,i_{v_{0}}\}\subset\{j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{\prime}_{v}\},\ \ \ \ \{j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{\prime}_{r_{0}}\}\subset\{i_{1},i_{2 },\ldots,i_{v}\}.\]
Without loss of generality, suppose
\[\{j_{1},j_{2},\ldots,j_{r_{1}}\}=\{i^{\prime}_{1},i^{\prime}_{2},\ldots,i^{ \prime}_{r_{1}}\},\ \ \ \ \ \ \{j_{r_{1}+1},\ldots,j_{r}\}\cap\{i^{\prime}_{r_{1}+1},\ldots,i^{\prime}_{v}\}=\emptyset, \tag{64}\]
\[\{i_{1},i_{2},\ldots,i_{v_{1}}\}=\{j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{ \prime}_{v_{1}}\},\ \ \ \{i_{v_{1}+1},\ldots,i_{v}\}\cap\{j^{\prime}_{v_{1}+1},\ldots,j^{\prime}_{v}\}=\emptyset, \tag{65}\]
for some \(r_{1}\) with \(\max\{r_{0},v_{0}\}\leq r_{1}\leq\min\{r,v\}\) and \(v_{1}\) with \(\max\{r_{0},v_{0}\}\leq v_{1}\leq\min\{r,v\}\). There are at most \(n^{2+2r-r_{1}+2v-v_{1}}\) possible choices for indices \(i,j,i_{1},\ldots,i_{v},j_{1},\ldots,j_{r}\), \(i^{\prime},j^{\prime},i^{\prime}_{1},\ldots,i^{\prime}_{v}\), \(j^{\prime}_{1},\ldots,j^{\prime}_{r}\) satisfying \(i=j^{\prime}\), \(j=i^{\prime}\), (64) and (65). Let \(\sigma_{1}\) be a permutation of \(\{1,2,\ldots,r_{1}\}\) such that \(j_{l}=i^{\prime}_{\sigma_{1}(l)}\) and \(\sigma_{2}\) be a permutation of \(\{1,2,\ldots,v_{1}\}\) such that \(i_{m}=j^{\prime}_{\sigma_{2}(m)}\). Then
\[E = \mathbb{E}\Bigg{[}\left(\prod_{l=1}^{r_{1}}\bar{A}_{ij_{l}}^{\lambda_{r;l}+\gamma_{v;\sigma_{1}(l)}}\right)\left(\prod_{l=r_{1}+1}^{r}\bar{A}_{ij_{l}}^{\lambda_{r;l}}\right)\left(\prod_{m=r_{1}+1}^{v}\bar{A}_{ii^{\prime}_{m}}^{\gamma_{v;m}}\right)\] \[\times\left(\prod_{m=1}^{v_{1}}\bar{A}_{ji_{m}}^{\gamma_{v;m}+\lambda_{r;\sigma_{2}(m)}}\right)\left(\prod_{m=v_{1}+1}^{v}\bar{A}_{ji_{m}}^{\gamma_{v;m}}\right)\left(\prod_{l=v_{1}+1}^{r}\bar{A}_{jj^{\prime}_{l}}^{\lambda_{r;l}}\right)\Bigg{]}\] \[= \Theta\left(p_{n}^{2(r+v)-r_{1}-v_{1}}\right).\]
Then the sum in (56) over the indices \(i=j^{\prime}\), \(j=i^{\prime}\), (64) and (65) is
bounded by
\[(np_{n})^{2+2r-r_{1}+2v-v_{1}}f^{(s,t)}(np_{n},np_{n})^{2} \tag{66}\] \[\leq (np_{n})(np_{n})^{2(s+t)}f^{(s,t)}(np_{n},np_{n})^{2}=o\left(\sigma_{ n}^{2}\right).\]
Consider the case \(i\neq i^{\prime}\) and \(j=j^{\prime}\). For any \(l_{0}\in\{1,2,\ldots,r\}\), \(\{i,j_{l_{0}}\}\neq\{j,i_{l}\}\) and \(\{i,j_{l_{0}}\}\neq\{j,i^{\prime}_{l}\}\) for any \(1\leq l\leq v\). If \(r_{0}\geq 2\), it is not possible that \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{l_{1}}\}\) and \(\{i,j_{2}\}=\{i^{\prime},j^{\prime}_{l_{2}}\}\) for distinct \(l_{1}\) and \(l_{2}\). Then \(E=0\) (the same argument as in (58)). Let \(r_{0}=1\). In this case, \(i=j^{\prime}_{1}\) and \(i^{\prime}=j_{1}\). Otherwise \(E=0\). Suppose \(\{i_{1},\ldots,i_{v_{1}}\}=\{i^{\prime}_{1},\ldots,i^{\prime}_{v_{1}}\}\) and \(\{i_{v_{1}+1},\ldots,i_{v}\}\cap\{i^{\prime}_{v_{1}+1},\ldots,i^{\prime}_{v}\}=\emptyset\) for \(v_{0}\leq v_{1}\leq v\). Suppose \(\{j_{2},\ldots,j_{r_{1}}\}=\{j^{\prime}_{2},\ldots,j^{\prime}_{r_{1}}\}\) and \(\{j_{r_{1}+1},\ldots,j_{r}\}\cap\{j^{\prime}_{r_{1}+1},\ldots,j^{\prime}_{r}\}=\emptyset\) for \(1\leq r_{1}\leq r\). There are at most \(n^{2+2(r+v)-v_{1}-r_{1}}\) choices for the indices \(i,j,i_{1},\ldots,i_{v},j_{1},\ldots,j_{r},i^{\prime},j^{\prime},i^{\prime}_{1},\ldots,i^{\prime}_{v},j^{\prime}_{1},\ldots,j^{\prime}_{r}\) satisfying these conditions. Then
\[E = \mathbb{E}\Bigg{[}\bar{A}_{ij_{1}}^{2}\left(\prod_{l=2}^{r}\bar{A}_{ij_{l}}^{\lambda_{r;l}}\right)\left(\prod_{l=2}^{r_{1}}\bar{A}_{j_{1}j_{l}}^{\lambda_{r;l}}\right)\left(\prod_{l=r_{1}+1}^{r}\bar{A}_{j_{1}j^{\prime}_{l}}^{\lambda_{r;l}}\right)\Bigg{]}\] \[\times\mathbb{E}\Bigg{[}\left(\prod_{m=1}^{v_{1}}\bar{A}_{ji_{m}}^{2\gamma_{v;m}}\right)\left(\prod_{m=v_{1}+1}^{v}\bar{A}_{ji_{m}}^{\gamma_{v;m}}\bar{A}_{ji^{\prime}_{m}}^{\gamma_{v;m}}\right)\Bigg{]}\] \[= O\left(p_{n}^{1+2(r-1)+v_{1}+2(v-v_{1})}\right),\]
and hence the sum over \(i\neq i^{\prime}\) and \(j=j^{\prime}\) in (56) is bounded by
\[(np_{n})^{2(r+v)-v_{1}+1}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{67}\]
Consider the case \(i=i^{\prime}\) and \(j\neq j^{\prime}\). If \(v_{0}\geq 1\), the summation is similarly bounded by (67). Suppose \(v_{0}=0\). Suppose \(\{j_{1},\ldots,j_{r_{1}}\}=\{j^{\prime}_{1},\ldots,j^{\prime}_{r_{1}}\}\) and \(\{j_{r_{1}+1},\ldots,j_{r}\}\cap\{j^{\prime}_{r_{1}+1},\ldots,j^{\prime}_{r}\}=\emptyset\) for \(r_{0}\leq r_{1}\leq r\). There are at most \(n^{3+2r-r_{1}+2v}\) choices for the indices \(i,j,i_{1},\ldots,i_{v},j_{1},\ldots,\)\(j_{r}\), \(i^{\prime},j^{\prime},i^{\prime}_{1},\ldots,i^{\prime}_{v},j^{\prime}_{1}, \ldots,j^{\prime}_{r}\) satisfying these conditions. Then
\[E = \mathbb{E}\Bigg{[}\left(\prod_{l=1}^{r_{1}}\bar{A}_{ij_{l}}^{2 \lambda_{r;l}}\right)\left(\prod_{l=r_{1}+1}^{r}\bar{A}_{ij_{l}}^{\lambda_{r;l} }\right)\left(\prod_{l=r_{1}+1}^{r}\bar{A}_{ij^{\prime}_{l}}^{\lambda_{r;l}}\right)\]
\[\times\left(\prod_{m=1}^{v}\bar{A}_{ji_{m}}^{\gamma_{v;m}}\right) \left(\prod_{m=1}^{v}\bar{A}_{j^{\prime}i^{\prime}_{m}}^{\gamma_{v;m}}\right)\Bigg{]}\] \[= O\left(p_{n}^{2r-r_{1}+2v}\right).\]
Then the sum over \(i=i^{\prime}\) and \(j\neq j^{\prime}\) in (56) is bounded by
\[n(np_{n})^{2+2r-r_{1}+2v}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{68}\]
Suppose \(i=j^{\prime}\) and \(j\neq i^{\prime}\). If \(r_{0}\geq 2\) or \(v_{0}\geq 2\), then \(E=0\) (similar to the argument in the case \(i\neq i^{\prime}\) and \(j=j^{\prime}\)). Let \(r_{0}=1\). If \(v_{0}=1\), then \(j=j^{\prime}_{1}\) and \(i_{1}=i^{\prime}\). If \(v_{0}=0\), then \(j_{1}=i^{\prime}_{m_{1}}\) for some \(1\leq m_{1}\leq v\), \(j=j^{\prime}_{l_{1}}\) for some \(1\leq l_{1}\leq r\) and \(i_{1}=i^{\prime}\). Without loss of generality, let \(m_{1}=l_{1}=1\). Suppose
\[\{j_{1},\ldots,j_{r_{1}}\}=\{i^{\prime}_{1},\ldots,i^{\prime}_{r_{1}}\},\quad \{j_{r_{1}+1},\ldots,j_{r}\}\cap\{i^{\prime}_{r_{1}+1},\ldots,i^{\prime}_{v}\} =\emptyset.\]
\[\{i_{2},\ldots,i_{v_{1}}\}=\{j^{\prime}_{2},\ldots,j^{\prime}_{v_{1}}\},\quad \{i_{v_{1}+1},\ldots,i_{v}\}\cap\{j^{\prime}_{v_{1}+1},\ldots,j^{\prime}_{r}\} =\emptyset,\]
where \(1\leq r_{1},v_{1}\leq\min\{r,v\}\). There are at most \(n^{2(r+v)-r_{1}-v_{1}+2}\) choices for the indices \(i,j,i_{1},\ldots,i_{v},j_{1},\ldots,j_{r},i^{\prime},j^{\prime},i^{\prime}_{1 },\ldots,i^{\prime}_{v},j^{\prime}_{1},\ldots,j^{\prime}_{r}\) satisfying these conditions. In this case,
\[E = \mathbb{E}\Bigg{[}\bar{A}_{ji_{1}}^{2}\left(\prod_{l=1}^{r_{1}} \bar{A}_{ij_{l}}^{\lambda_{r;l}+\gamma_{v;l}}\right)\left(\prod_{l=r_{1}+1}^{ r}\bar{A}_{ij_{l}}^{\lambda_{r;l}}\right)\left(\prod_{l=r_{1}+1}^{v}\bar{A}_{ii^{ \prime}_{l}}^{\gamma_{v;l}}\right)\Bigg{]}\] \[\times\mathbb{E}\Bigg{[}\left(\prod_{m=2}^{v}\bar{A}_{ji_{m}}^{ \gamma_{v;m}}\right)\left(\prod_{l=2}^{v_{1}}\bar{A}_{i_{1}i_{l}}^{\lambda_{r; l}}\right)\left(\prod_{l=v_{1}+1}^{r}\bar{A}_{i_{1}j^{\prime}_{l}}^{\lambda_{r;l}} \right)\Bigg{]}\] \[= O\left(p_{n}^{1+r+(r-v_{1})+v-r_{1}+v-1+v_{1}-1}\right).\]
Then the sum over \(i=j^{\prime}\) and \(j\neq i^{\prime}\) in (56) is bounded by
\[(np_{n})^{2(r+v)-r_{1}-1}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{69}\]
The case \(i\neq j^{\prime}\) and \(j=i^{\prime}\) can be similarly bounded as in (69).
Now we consider \(\{i,j\}\cap\{i^{\prime},j^{\prime}\}=\emptyset\). If \(r_{0}\geq 3\), then at least one of \(\{i,j_{1}\}\), \(\{i,j_{2}\}\) and \(\{i,j_{3}\}\) is not in \(\{\{j^{\prime},i^{\prime}_{m_{1}}\},\{i^{\prime},j^{\prime}_{l}\}\}\) for any \(m_{1},l\), and hence \(E=0\). Similarly, if \(v_{0}\geq 3\), then \(E=0\).
Suppose \(r_{0}=v_{0}=2\). Then \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{l}\}\) for some \(1\leq l\leq r\) or \(\{i,j_{1}\}=\{j^{\prime},i^{\prime}_{m}\}\) for some \(1\leq m\leq v\). Otherwise \(E=0\). Without loss of generality, suppose \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{l}\}\). In this case, \(i=j^{\prime}_{l}\) and \(j_{1}=i^{\prime}\). If \(l\geq 3\), then \(\{i^{\prime},j^{\prime}_{1}\}=\{j,i_{m_{1}}\}\) and \(\{i^{\prime},j^{\prime}_{2}\}=\{j^{\prime},i^{\prime}_{m_{2}}\}\) or \(\{i^{\prime},j^{\prime}_{1}\}=\{j^{\prime},i^{\prime}_{m_{2}}\}\) and \(\{i^{\prime},j^{\prime}_{2}\}=\{j,i_{m_{1}}\}\) (otherwise \(E=0\)). Either case is impossible, due to the fact that \(j^{\prime}\neq i^{\prime}\), \(j^{\prime}\neq j^{\prime}_{l_{1}}\) for any \(l_{1}\). Hence \(l=1\) or \(l=2\). Without loss of generality, let \(l=1\). In this case, \(\{i^{\prime},j^{\prime}_{2}\}=\{j,i_{m_{3}}\}\) and \(i^{\prime}=i_{m_{3}}\) and \(j^{\prime}_{2}=j\). If \(m_{3}\geq 3\), then \(\{j,i_{1}\}=\{j^{\prime},i^{\prime}_{m_{4}}\}\) and \(\{j,i_{2}\}=\{j^{\prime},i^{\prime}_{m_{5}}\}\) (otherwise \(E=0\)), which is not possible. Hence \(m_{3}=1\) or \(m_{3}=2\). Let \(m_{3}=1\) (the argument for \(m_{3}=2\) is the same). Then \(\{j,i_{2}\}=\{j^{\prime},i^{\prime}_{m_{6}}\}\). If \(m_{6}\geq 3\), then \(\{j^{\prime},i^{\prime}_{1}\}=\{i,j_{l_{2}}\}\) and \(\{j^{\prime},i^{\prime}_{2}\}=\{i,j_{l_{3}}\}\) (otherwise \(E=0\)), which is not possible. Hence, \(m_{6}=1\) or \(m_{6}=2\). Let \(m_{6}=1\) (the argument for \(m_{6}=2\) is the same). Then \(\{j^{\prime},i^{\prime}_{2}\}=\{i,j_{2}\}\), \(i=i^{\prime}_{2}\) and \(j_{2}=j^{\prime}\). There are at most \(n^{2(r+v)-4}\) possible choices of the indices \(i,j,i_{1},\ldots,i_{v},j_{1},\ldots,j_{r},i^{\prime},j^{\prime},i^{\prime}_{1},\ldots,i^{\prime}_{v},j^{\prime}_{1},\ldots,j^{\prime}_{r}\) satisfying these conditions. Then the sum in (56) over these indices is bounded by
\[(np_{n})^{2(r+v)-4}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{70}\]
Suppose \(r_{0}=2\) and \(v_{0}=1\). Then \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{l}\}\) and \(\{i,j_{2}\}=\{j^{\prime},i^{\prime}_{m}\}\) or \(\{i,j_{1}\}=\{j^{\prime},i^{\prime}_{m}\}\) and \(\{i,j_{2}\}=\{i^{\prime},j^{\prime}_{l}\}\) for some \(l,m\). Otherwise \(E=0\). Without loss of generality, let \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{l}\}\) and \(\{i,j_{2}\}=\{j^{\prime},i^{\prime}_{m}\}\). By a similar argument as in the previous paragraph, \(l=1\) or \(l=2\). Let \(l=1\). Then \(\{i^{\prime},j^{\prime}_{2}\}=\{j,i_{m_{3}}\}\). If \(m_{3}=1\) and \(m=1\), the sum in (56) over these indices is bounded by
\[p_{n}(np_{n})^{2(r+v)-2}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{71}\]
If \(m_{3}\geq 2\) or \(m\geq 2\), the sum in (56) over these indices is bounded by
\[p_{n}^{2}(np_{n})^{2(r+v)-4}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{72}\]
Suppose \(r_{0}=2\) and \(v_{0}=0\). Then \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{l}\}\) and \(\{i,j_{2}\}=\{j^{\prime},i^{\prime}_{m}\}\) or \(\{i,j_{1}\}=\{j^{\prime},i^{\prime}_{m}\}\) and \(\{i,j_{2}\}=\{i^{\prime},j^{\prime}_{l}\}\) for some \(l,m\). Otherwise \(E=0\). In this case, the sum in (56) over these indices is bounded by
\[p_{n}(np_{n})^{2(r+v)-2}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{77}\]
Suppose \(r_{0}=1\) and \(v_{0}=0\). Then \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{l_{1}}\}\) or \(\{i,j_{1}\}=\{j^{\prime},i^{\prime}_{m_{1}}\}\). Suppose \(\{i,j_{1}\}=\{j^{\prime},i^{\prime}_{m_{1}}\}\). Then \(\{i^{\prime},j^{\prime}_{1}\}=\{j,i_{m_{2}}\}\). In this case, the sum in (56) over these indices is bounded by
\[(np_{n})^{2(r+v)}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{78}\]
Suppose \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{l_{1}}\}\). If \(l_{1}=1\), then the sum in (56) over these indices is bounded by
\[n(np_{n})^{2(r+v)+1}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{79}\]
If \(l_{1}\geq 2\), then \(\{i^{\prime},j^{\prime}_{1}\}=\{j,i_{m_{3}}\}\). In this case, the sum in (56) over these indices is bounded by
\[(np_{n})^{2(r+v)}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{80}\]
Now we consider the case (II): \(\lambda_{r;l}\geq 2\) for all \(l=1,2,\ldots,r\) and \(\gamma_{v;m}\geq 2\) for all \(m=1,2,\ldots,v\). In this case, \(r\leq\frac{s}{2}\) and \(v\leq\frac{t}{2}\). The expectation of \(V_{rv}\) is equal to
\[\mathbb{E}[V_{rv}] \tag{81}\] \[= \sum_{i\neq j}\sum_{\begin{subarray}{c}j_{1},j_{2},\ldots,j_{r} \notin\{i,j\}\\ j_{1}\neq j_{2}\neq\ldots\neq j_{r}\\ i_{1},i_{2},\ldots,i_{v}\notin\{i,j\}\\ i_{1}\neq i_{2}\neq\ldots\neq i_{v}\end{subarray}}p_{n}w_{ij}f^{(s,t)}(w_{i(j) },w_{j(i)})\prod_{l=1}^{r}\mathbb{E}[(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l }}]\] \[\times\prod_{m=1}^{v}\mathbb{E}[(A_{ji_{m}}-p_{n}w_{ji_{m}})^{ \gamma_{v;m}}].\]
Next we bound the variance of \(V_{rv}\), that is, \(Var(V_{rv})=\mathbb{E}\left[(V_{rv}-\mathbb{E}[V_{rv}])^{2}\right]\). Let \(\eta_{l}\in\{0,1\}\) for \(l=1,2,\ldots,r\) and \(\xi_{l}\in\{0,1\}\) for \(l=1,2,\ldots,v\). Then we have
\[\prod_{l=1}^{r}(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l}}\prod_ {m=1}^{v}(A_{ji_{m}}-p_{n}w_{ji_{m}})^{\gamma_{v;m}} \tag{82}\] \[= \prod_{l=1}^{r}\left[(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l}} -\mathbb{E}\left[(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l}}\right]\right.\] \[\left.+\mathbb{E}\left[(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l} }\right]\right]\] \[\times\prod_{l=1}^{v}\left[(A_{ji_{l}}-p_{n}w_{ji_{l}})^{\gamma_{ v;l}}-\mathbb{E}\left[(A_{ji_{l}}-p_{n}w_{ji_{l}})^{\gamma_{v;l}}\right]\right.\] \[\left.+\mathbb{E}\left[(A_{ji_{l}}-p_{n}w_{ji_{l}})^{\gamma_{v;l} }\right]\right]\]
\[= \sum_{\begin{subarray}{c}\eta_{1},\ldots,\eta_{r}\in\{0,1\}\\ \xi_{1},\ldots,\xi_{v}\in\{0,1\}\end{subarray}}\prod_{l=1}^{r}\left[\left(A_{ij_{l}}-p_{n}w_{ij_{l}}\right)^{\lambda_{r;l}}-\mathbb{E}\left[\left(A_{ij_{l}}-p_{n}w_{ij_{l}}\right)^{\lambda_{r;l}}\right]\right]^{\eta_{l}}\] \[\times\Big{[}\mathbb{E}\left[\left(A_{ij_{l}}-p_{n}w_{ij_{l}}\right)^{\lambda_{r;l}}\right]\Big{]}^{1-\eta_{l}}\] \[\times\prod_{l=1}^{v}\left[\left(A_{ji_{l}}-p_{n}w_{ji_{l}}\right)^{\gamma_{v;l}}-\mathbb{E}\left[\left(A_{ji_{l}}-p_{n}w_{ji_{l}}\right)^{\gamma_{v;l}}\right]\right]^{\xi_{l}}\] \[\times\Big{[}\mathbb{E}\left[\left(A_{ji_{l}}-p_{n}w_{ji_{l}}\right)^{\gamma_{v;l}}\right]\Big{]}^{1-\xi_{l}}.\]
For convenience, denote
\[X_{ij}(x)=(A_{ij}-p_{n}w_{ij})^{x}-\mathbb{E}[(A_{ij}-p_{n}w_{ij})^{x}]\]
and \(Y_{ij}(x)=\mathbb{E}[(A_{ij}-p_{n}w_{ij})^{x}]\).
Since \(i\neq j\), \(j_{l}\notin\{i,j\}\) for \(l=1,2,\ldots,r\) and \(i_{l}\notin\{i,j\}\) for \(l=1,2,\ldots,v\), then \(\{i,j_{l}\}\neq\{j,i_{m}\}\) for any \(l=1,2,\ldots,r\) and \(m=1,2,\ldots,v\). Consequently, \(A_{ij_{l}}\) (\(l=1,2,\ldots,r\)) and \(A_{ji_{m}}\) (\(m=1,2,\ldots,v\)) are independent.
By (53), (81) and (82), \(V_{rv}-\mathbb{E}[V_{rv}]\) does not contain the term corresponding to \(\eta_{l}=0\) for all \(l\in\{1,2,\ldots,r\}\) and \(\xi_{m}=0\) for all \(m\in\{1,2,\ldots,v\}\). Hence we assume \(\eta_{1}+\cdots+\eta_{r}\geq 1\) or \(\xi_{1}+\cdots+\xi_{v}\geq 1\). Without loss of generality, let \(\eta_{1}=\eta_{2}=\cdots=\eta_{l_{0}}=1\), \(\eta_{l_{0}+1}=\cdots=\eta_{r}=0\), \(\xi_{1}=\xi_{2}=\cdots=\xi_{m_{0}}=1\) and \(\xi_{m_{0}+1}=\cdots=\xi_{v}=0\), where \(1\leq l_{0}+m_{0}\leq r+v\). In this case,
\[\prod_{l=1}^{r}\left[X_{ij_{l}}(\lambda_{r;l})\right]^{\eta_{l}}\left[Y_{ij_{ l}}(\lambda_{r;l})\right]^{1-\eta_{l}}=\left(\prod_{l=1}^{l_{0}}X_{ij_{l}}( \lambda_{r;l})\right)\left(\prod_{l=l_{0}+1}^{r}Y_{ij_{l}}(\lambda_{r;l}) \right),\]
\[\prod_{l=1}^{v}\left[X_{ji_{l}}(\gamma_{v;l})\right]^{\xi_{l}}\left[Y_{ji_{l}} (\gamma_{v;l})\right]^{1-\xi_{l}}=\left(\prod_{l=1}^{m_{0}}X_{ji_{l}}(\gamma_{ v;l})\right)\left(\prod_{l=m_{0}+1}^{v}Y_{ji_{l}}(\gamma_{v;l})\right).\]
Denote
\[V_{rv}(l_{0},m_{0})\]
\[= \sum_{\begin{subarray}{c}i\neq j\\ j_{1},j_{2},\ldots,j_{r}\notin\{i,j\}\\ j_{1}\neq j_{2}\neq\ldots\neq j_{r}\\ i_{1},i_{2},\ldots,i_{v}\notin\{i,j\}\\ i_{1}\neq i_{2}\neq\ldots\neq i_{v}\end{subarray}}p_{n}w_{ij}f^{(s,t)}(w_{i (j)},w_{j(i)})\left(\prod_{l=1}^{l_{0}}X_{ij_{l}}(\lambda_{r;l})\right) \tag{83}\] \[\times\left(\prod_{l=l_{0}+1}^{r}Y_{ij_{l}}(\lambda_{r;l})\right) \left(\prod_{l=1}^{m_{0}}X_{ji_{l}}(\gamma_{v;l})\right)\left(\prod_{l=m_{0}+1} ^{v}Y_{ji_{l}}(\gamma_{v;l})\right).\]
We only need to consider the variance of \(V_{rv}(l_{0},m_{0})\). Since \(\lambda_{r;l}\geq 2\) and \(w_{ij}\in[\beta,1]\), then
\[\mathbb{E}[(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l}}]\] \[= (1-p_{n}w_{ij_{l}})^{\lambda_{r;l}}p_{n}w_{ij_{l}}+(-p_{n}w_{ij_{ l}})^{\lambda_{r;l}}(1-p_{n}w_{ij_{l}})\] \[= \Theta(p_{n}).\]
Hence one has
\[\prod_{l=l_{0}+1}^{r}Y_{ij_{l}}(\lambda_{r;l})=\Theta(p_{n}^{r-l_{0}}),\qquad \prod_{l=m_{0}+1}^{v}Y_{ji_{l}}(\gamma_{v;l})=\Theta(p_{n}^{v-m_{0}}).\]
For convenience, denote
\[a_{ij_{1},\ldots,j_{l_{0}}}=\sum_{j_{l_{0}+1},\ldots,j_{r}\notin\{j_{1}, \ldots,j_{l_{0}}\}}\prod_{l=l_{0}+1}^{r}Y_{ij_{l}}(\lambda_{r;l}),\]
\[b_{ji_{1},\ldots,i_{m_{0}}}=\sum_{i_{m_{0}+1},\ldots,i_{v}\notin\{i_{1}, \ldots,i_{m_{0}}\}}\prod_{l=m_{0}+1}^{v}Y_{ji_{l}}(\gamma_{v;l}).\]
Then
\[a_{ij_{1},\ldots,j_{l_{0}}}=\Theta((np_{n})^{r-l_{0}}),\qquad b_{ji_{1}, \ldots,i_{m_{0}}}=\Theta((np_{n})^{v-m_{0}}). \tag{84}\]
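Both estimates in (84) follow from \(Y_{ij_{l}}(\lambda_{r;l})=\Theta(p_{n})\) and \(Y_{ji_{l}}(\gamma_{v;l})=\Theta(p_{n})\): the defining sums run over \(r-l_{0}\) (respectively \(v-m_{0}\)) distinct indices, each with \(n-O(1)\) admissible values, so that, for instance,

\[a_{ij_{1},\ldots,j_{l_{0}}}=\sum_{j_{l_{0}+1},\ldots,j_{r}\notin\{j_{1},\ldots,j_{l_{0}}\}}\Theta\big(p_{n}^{r-l_{0}}\big)=\Theta\big(n^{r-l_{0}}p_{n}^{r-l_{0}}\big)=\Theta\big((np_{n})^{r-l_{0}}\big).\]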
In this case, \(V_{rv}(l_{0},m_{0})\) is written as
\[V_{rv}(l_{0},m_{0}) = \sum_{\begin{subarray}{c}i\neq j\\ j_{1},j_{2},\ldots,j_{r}\notin\{i,j\}\\ j_{1}\neq j_{2}\neq\ldots\neq j_{r}\\ i_{1},i_{2},\ldots,i_{r}\notin\{i,j\}\\ i_{1}\neq i_{2}\neq\ldots\neq i_{v}\end{subarray}}p_{n}w_{ij}f^{(s,t)}(w_{i (j)},w_{j(i)})a_{ij_{1},\ldots,j_{l_{0}}}b_{ji_{1},\ldots,i_{m_{0}}} \tag{85}\] \[\times\left(\prod_{l=1}^{l_{0}}X_{ij_{l}}(\lambda_{r;l})\right) \left(\prod_{l=1}^{m_{0}}X_{ji_{l}}(\gamma_{v;l})\right).\]
Next we bound the variance of \(V_{rv}(l_{0},m_{0})\). Since \(i\neq j\), \(j_{l}\notin\{i,j\}\) and \(i_{m}\notin\{i,j\}\) for all \(l,m\), \(X_{ij_{l}}(\lambda_{r;l})\) and \(X_{ji_{l}}(\gamma_{v;l})\) are independent. By definition, \(\mathbb{E}[X_{ij_{l}}(\lambda_{r;l})]=\mathbb{E}[X_{ji_{l}}(\gamma_{v;l})]=0\). Then \(\mathbb{E}[V_{rv}(l_{0},m_{0})]=0\) and \(Var\left(V_{rv}(l_{0},m_{0})\right)=\mathbb{E}[V_{rv}(l_{0},m_{0})^{2}]\), that is,
\[Var\left(V_{rv}(l_{0},m_{0})\right) \tag{86}\] \[= \mathbb{E}\Bigg{[}\sum_{\begin{subarray}{c}i,j,j_{1},\ldots,j_{l _{0}},i_{1},\ldots,i_{m_{0}}\\ i^{\prime},j^{\prime},j^{\prime}_{1},\ldots,j^{\prime}_{l_{0}},i^{\prime}_{1}, \ldots,i^{\prime}_{m_{0}}\end{subarray}}p_{n}w_{ij}f^{(s,t)}(w_{i(j)},w_{j(i) })a_{ij_{1},\ldots,j_{l_{0}}}b_{ji_{1},\ldots,i_{m_{0}}}\] \[\times\left(\prod_{l=1}^{l_{0}}X_{ij_{l}}(\lambda_{r;l})\right) \left(\prod_{l=1}^{m_{0}}X_{ji_{l}}(\gamma_{v;l})\right)\Bigg{]}^{2}\] \[= \sum_{\begin{subarray}{c}i,j,j_{1},\ldots,j_{l_{0}},i_{1},\ldots,i _{m_{0}}\\ i^{\prime},j^{\prime},j^{\prime}_{1},\ldots,j^{\prime}_{l_{0}},i^{\prime}_{1 },\ldots,i^{\prime}_{m_{0}}\end{subarray}}p_{n}w_{ij}f^{(s,t)}(w_{i(j)},w_{j(i) })p_{n}w_{i^{\prime}j^{\prime}}f^{(s,t)}(w_{i^{\prime}(j^{\prime})},w_{j^{ \prime}(i^{\prime})})\] \[\times a_{ij_{1},\ldots,j_{l_{0}}}b_{ji_{1},\ldots,i_{m_{0}}}a_{i^ {\prime}j^{\prime}_{1},\ldots,j^{\prime}_{l_{0}}}b_{j^{\prime}i^{\prime}_{1}, \ldots,i^{\prime}_{m_{0}}}\] \[\times\mathbb{E}\left(\prod_{l=1}^{l_{0}}X_{ij_{l}}(\lambda_{r;l} )X_{i^{\prime}j^{\prime}_{l}}(\lambda_{r;l})\prod_{l=1}^{m_{0}}X_{ji_{l}}( \gamma_{v;l})X_{j^{\prime}i^{\prime}_{l}}(\gamma_{v;l})\right).\]
For each \(j_{l_{1}}\) with \(1\leq l_{1}\leq l_{0}\), if \(\{i,j_{l_{1}}\}\neq\{i^{\prime},j^{\prime}_{l}\}\) and \(\{i,j_{l_{1}}\}\neq\{j^{\prime},i^{\prime}_{m}\}\) for all \(l\in\{1,2,\ldots,l_{0}\}\) and \(m\in\{1,2,\ldots,m_{0}\}\), then \(X_{ij_{l_{1}}}(\lambda_{r;l_{1}})\) is independent of \(X_{ji_{l}}(\gamma_{v;l})\), \(X_{i^{\prime}j^{\prime}_{l}}(\lambda_{r;l})\) and \(X_{j^{\prime}i^{\prime}_{m}}(\gamma_{v;m})\). In this case,
\[\mathbb{E}\left[\prod_{l=1}^{l_{0}}X_{ij_{l}}(\lambda_{r;l})X_{i^{\prime}j^{ \prime}_{l}}(\lambda_{r;l})\prod_{l=1}^{m_{0}}X_{ji_{l}}(\gamma_{v;l})X_{j^{ \prime}i^{\prime}_{l}}(\gamma_{v;l})\right]\]
\[= \mathbb{E}\left[X_{ij_{l_{1}}}(\lambda_{r;l_{1}})\right]\mathbb{E}\left[X_{i^{\prime}j^{\prime}_{l_{1}}}(\lambda_{r;l_{1}})\prod_{l=1,l\neq l_{1}}^{l_{0}}X_{ij_{l}}(\lambda_{r;l})X_{i^{\prime}j^{\prime}_{l}}(\lambda_{r;l})\prod_{l=1}^{m_{0}}X_{ji_{l}}(\gamma_{v;l})X_{j^{\prime}i^{\prime}_{l}}(\gamma_{v;l})\right] \tag{87}\] \[= 0.\]
Hence, for each \(j_{l_{1}}\) with \(1\leq l_{1}\leq l_{0}\), there exists \(l\) or \(m\) such that \(\{i,j_{l_{1}}\}=\{i^{\prime},j^{\prime}_{l}\}\) or \(\{i,j_{l_{1}}\}=\{j^{\prime},i^{\prime}_{m}\}\). Otherwise, (87) holds. Moreover, if \(\{i,j_{l_{1}}\}=\{i^{\prime},j^{\prime}_{l}\}\) and \(\{i,j_{l_{1}}\}=\{j^{\prime},i^{\prime}_{m}\}\), then \(\{i^{\prime},j^{\prime}_{l}\}=\{j^{\prime},i^{\prime}_{m}\}\). In this case, \(i^{\prime}=i^{\prime}_{m}\) and \(j^{\prime}=j^{\prime}_{l}\) (since \(i^{\prime}\neq j^{\prime}\)). This is not possible, due to the fact that \(i^{\prime}_{m}\notin\{i^{\prime},j^{\prime}\}\) and \(j^{\prime}_{l}\notin\{i^{\prime},j^{\prime}\}\) for all \(m,l\). Therefore, \(\{i,j_{l_{1}}\}\) can only be equal to one of \(\{i^{\prime},j^{\prime}_{l}\}\) and \(\{j^{\prime},i^{\prime}_{m}\}\), but not both. Similarly, \(\{j,i_{m}\}\) can only be equal to one of \(\{i^{\prime},j^{\prime}_{l}\}\) and \(\{j^{\prime},i^{\prime}_{m}\}\).
Let \(m_{0}=0\). If \(l_{0}\geq 2\), then \(i=i^{\prime}\), \(\{j_{1},j_{2},\ldots,j_{l_{0}}\}=\{j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{ \prime}_{l_{0}}\}\). Then
\[\mathbb{E}\left[\prod_{l=1}^{l_{0}}X_{ij_{l}}(\lambda_{r;l})X_{i^ {\prime}j^{\prime}_{l}}(\lambda_{r;l})\prod_{l=1}^{m_{0}}X_{ji_{l}}(\gamma_{v ;l})X_{j^{\prime}i^{\prime}_{l}}(\gamma_{v;l})\right] \tag{88}\] \[= \mathbb{E}\left[\prod_{l=1}^{l_{0}}X_{ij_{l}}(\lambda_{r;l})^{2} \right]=O(p^{l_{0}}_{n}).\]
Then the sum over these indices in (86) is bounded by
\[O\left(n(np_{n})^{2(r+v)-l_{0}+2}f^{(s,t)}(np_{n},np_{n})^{2}\right)=o(\sigma _{n}^{2}). \tag{89}\]
If \(l_{0}=1\), then \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{1}\}\). In this case, (89) still holds.
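To verify (89) in the case \(m_{0}=0\), \(l_{0}\geq 2\): there are at most \(n^{3+l_{0}}\) choices of \(i,j,j^{\prime},j_{1},\ldots,j_{l_{0}}\), the weight factors in (86) contribute \(\Theta((np_{n})^{2(r-l_{0})+2v})\) by (84), the expectation (88) contributes \(O(p_{n}^{l_{0}})\), and each summand carries the prefactor \(p_{n}w_{ij}\,p_{n}w_{i^{\prime}j^{\prime}}\). Altogether,

\[n^{3+l_{0}}\,p_{n}^{2+l_{0}}\,(np_{n})^{2(r+v)-2l_{0}}f^{(s,t)}(np_{n},np_{n})^{2}=n(np_{n})^{2(r+v)-l_{0}+2}f^{(s,t)}(np_{n},np_{n})^{2}.\]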
Let \(m_{0}=1\). If \(l_{0}\geq 3\), then \(i=i^{\prime}\), \(\{j_{1},j_{2},\ldots,j_{l_{0}}\}=\{j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{ \prime}_{l_{0}}\}\) and \(\{j,i_{1}\}=\{j^{\prime},i^{\prime}_{1}\}\). Then the sum over these indices in (86) is bounded by
\[O\left((np_{n})^{2(r+v)-l_{0}+1}f^{(s,t)}(np_{n},np_{n})^{2}\right)=o(\sigma_{ n}^{2}). \tag{90}\]
If \(l_{0}=2\), there are two situations: (i) \(i=i^{\prime}\), \(\{j_{1},j_{2}\}=\{j^{\prime}_{1},j^{\prime}_{2}\}\) and \(\{j,i_{1}\}=\{j^{\prime},i^{\prime}_{1}\}\); (ii) \(\{i,j_{1}\}=\{j^{\prime}_{l_{1}},i^{\prime}\}\), \(\{i^{\prime},j^{\prime}_{l_{2}}\}=\{j,i_{1}\}\), \(\{i,j_{2}\}=\{j^{\prime},i^{\prime}_{1}\}\) with \(l_{1}\neq l_{2}\). For the case (i), the sum over these indices in (86) is bounded by (90) with \(l_{0}=2\). For case (ii), the sum over these indices in (86) is
bounded by
\[O\left((np_{n})^{2(r+v)-2}f^{(s,t)}(np_{n},np_{n})^{2}\right)=o(\sigma_{n}^{2}). \tag{91}\]
If \(l_{0}=1\), then \(\{i,j_{1}\}=\{i^{\prime},j_{1}^{\prime}\}\) and \(\{j,i_{1}\}=\{j^{\prime},i_{1}^{\prime}\}\) or \(\{i,j_{1}\}=\{j^{\prime},i_{1}^{\prime}\}\) and \(\{j,i_{1}\}=\{i^{\prime},j_{1}^{\prime}\}\). In either case, the sum over these indices in (86) is bounded by
\[O\left((np_{n})^{2(r+v)}f^{(s,t)}(np_{n},np_{n})^{2}\right)=o(\sigma_{n}^{2}). \tag{92}\]
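As a check of (92) in the case \(l_{0}=m_{0}=1\): the matching constraints \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{1}\}\) and \(\{j,i_{1}\}=\{j^{\prime},i^{\prime}_{1}\}\) leave at most \(O(n^{4})\) free index tuples, the weight factors contribute \(\Theta((np_{n})^{2(r-1)+2(v-1)})\) by (84), and \(\mathbb{E}[X_{ij_{1}}(\lambda_{r;1})^{2}]\,\mathbb{E}[X_{ji_{1}}(\gamma_{v;1})^{2}]=O(p_{n}^{2})\). Together with the prefactor \(p_{n}w_{ij}\,p_{n}w_{i^{\prime}j^{\prime}}\) this gives

\[n^{4}\,p_{n}^{4}\,(np_{n})^{2(r+v)-4}f^{(s,t)}(np_{n},np_{n})^{2}=(np_{n})^{2(r+v)}f^{(s,t)}(np_{n},np_{n})^{2}.\]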
If \(l_{0}=0\), then (89) holds with \(l_{0}\) replaced by \(m_{0}=1\).
Let \(m_{0}=2\). If \(l_{0}\geq 3\), then \(i=i^{\prime}\), \(\{j_{1},j_{2},\ldots,j_{l_{0}}\}=\{j_{1}^{\prime},j_{2}^{\prime},\ldots,j_{l_{ 0}}^{\prime}\}\), \(j=j^{\prime}\) and \(\{i_{1},i_{2}\}=\{i_{1}^{\prime},i_{2}^{\prime}\}\). In this case, the sum over these indices in (86) is bounded by
\[O\left((np_{n})^{2(r+v)-l_{0}}f^{(s,t)}(np_{n},np_{n})^{2}\right)=o(\sigma_{n}^ {2}). \tag{93}\]
Let \(l_{0}=2\). There are two cases (i) \(i=i^{\prime}\), \(\{j_{1},j_{2}\}=\{j_{1}^{\prime},j_{2}^{\prime}\}\), \(j=j^{\prime}\) and \(\{i_{1},i_{2}\}=\{i_{1}^{\prime},i_{2}^{\prime}\}\); (ii) \(\{i,j_{1}\}=\{j_{l_{1}}^{\prime},i^{\prime}\}\), \(\{j_{l_{2}}^{\prime},i^{\prime}\}=\{j,i_{m_{1}}\}\), \(\{j,i_{m_{2}}\}=\{j^{\prime},i_{m_{3}}^{\prime}\}\), \(\{j^{\prime},i_{m_{4}}^{\prime}\}=\{i,j_{2}\}\). For case (i), (93) holds. For case (ii), the sum over these indices in (86) is bounded by
\[O\left((np_{n})^{2(r+v)-2}f^{(s,t)}(np_{n},np_{n})^{2}\right)=o(\sigma_{n}^{2}). \tag{94}\]
The cases \(l_{0}=0,1\) are similar to the cases \(m_{0}=0,1\) and \(l_{0}=2\) discussed earlier.
The case \(m_{0}\geq 3\) is similar to the case \(l_{0}\geq 3\).
Now we study the second moment of (50). Note that
\[\sum_{i\neq j}f^{(s,t)}(w_{i(j)},w_{j(i)})(d_{i(j)}-w_{i(j)})^{s}( d_{j(i)}-w_{j(i)})^{t}(A_{ij}-p_{n}w_{ij})\] \[= \sum_{r=1}^{s}\sum_{v=1}^{t}U_{rv},\]
where
\[U_{rv} = \sum_{i\neq j}\sum_{\begin{subarray}{c}j_{1},j_{2},\ldots,j_{r} \notin\{i,j\}\\ j_{1}\neq j_{2}\neq\ldots\neq j_{r}\\ i_{1},i_{2},\ldots,i_{v}\notin\{i,j\}\\ i_{1}\neq i_{2}\neq\ldots\neq i_{v}\end{subarray}}f^{(s,t)}(w_{i(j)},w_{j(i)}) (A_{ij}-p_{n}w_{ij})\] \[\times\prod_{l=1}^{r}(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l}} \prod_{m=1}^{v}(A_{ji_{m}}-p_{n}w_{ji_{m}})^{\gamma_{v;m}}.\]
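For orientation, in the simplest case \(s=t=1\) the only admissible choice is \(r=v=1\) with \(\lambda_{1;1}=\gamma_{1;1}=1\), so that

\[U_{11}=\sum_{i\neq j}\sum_{j_{1},i_{1}\notin\{i,j\}}f^{(1,1)}(w_{i(j)},w_{j(i)})(A_{ij}-p_{n}w_{ij})(A_{ij_{1}}-p_{n}w_{ij_{1}})(A_{ji_{1}}-p_{n}w_{ji_{1}}).\]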
Since \(s,t\) are fixed finite integers less than \(k_{0}\), we only need to bound the variance of \(U_{rv}\). Obviously, \(\mathbb{E}[U_{rv}]=0\). Then \(Var(U_{rv})\) is equal to
\[\mathbb{E}[U_{rv}^{2}] \tag{95}\] \[= \sum_{\begin{subarray}{c}i\neq j\\ i^{\prime}\neq j^{\prime}\end{subarray}}\sum_{\begin{subarray}{c}j_{1},j_{2}, \ldots,j_{r}\notin\{i,j\}\\ j_{1}\neq j_{2}\neq\ldots\neq j_{r}\\ i_{1},i_{2},\ldots,i_{v}\notin\{i,j\}\\ i_{1}\neq i_{2}\neq\ldots\neq i_{v}\end{subarray}}\sum_{\begin{subarray}{c}i_{ 1}^{\prime},i_{2}^{\prime},\ldots,i_{r}^{\prime}\notin\{i^{\prime},j^{\prime} \}\\ i_{1}^{\prime}\neq i_{2}^{\prime}\neq\ldots\neq i_{r}^{\prime}\\ j_{1}^{\prime},j_{2}^{\prime},\ldots,j_{v}^{\prime}\notin\{i^{\prime},j^{ \prime}\}\\ j_{1}^{\prime}\neq j_{2}^{\prime}\neq\ldots\neq j_{v}^{\prime}\end{subarray}}f^{(s,t)}(w_{i(j)},w_{i(j)})\] \[\times f^{(s,t)}(w_{i^{\prime}(j^{\prime})},w_{j^{\prime}(i^{ \prime})})\mathbb{E}\Bigg{[}(A_{ij}-p_{n}w_{ij})(A_{i^{\prime}j^{\prime}}-p_{n }w_{i^{\prime}j^{\prime}})\left(\prod_{l=1}^{r}\bar{A}_{ij_{l}}^{\lambda_{r;l }}\right)\] \[\times\left(\prod_{l=1}^{r}\bar{A}_{i^{\prime}j^{\prime}_{l}}^{ \lambda_{r;l}}\right)\left(\prod_{m=1}^{v}\bar{A}_{ji_{m}}^{\gamma_{v;m}} \right)\left(\prod_{m=1}^{v}\bar{A}_{ji^{\prime}i^{\prime}_{m}}^{\gamma_{v;m} }\right)\Bigg{]}.\]
Denote the expectation in (95) as \(E_{1}\). Fix \(r\in\{1,2,\ldots,s\}\) and \(v\in\{1,2,\ldots,t\}\). There are two cases: (a) some of the exponents \(\lambda_{r;l}\) or \(\gamma_{v;m}\) are equal to one; (b) \(\lambda_{r;l}\geq 2\) and \(\gamma_{v;m}\geq 2\) for all \(l\in\{1,2,\ldots,r\}\) and \(m\in\{1,2,\ldots,v\}\).
Consider case (a) first. Suppose there are some indices \(\lambda_{r;l}\) or \(\gamma_{v;m}\) which are equal to one. Without loss of generality, let \(\lambda_{r;1}=\lambda_{r;2}=\cdots=\lambda_{r;r_{0}}=1\) and \(\lambda_{r;l}\geq 2\) for \(l\in\{r_{0}+1,\ldots,r\}\). Let \(\gamma_{v;1}=\gamma_{v;2}=\cdots=\gamma_{v;v_{0}}=1\) and \(\gamma_{v;l}\geq 2\) for \(l\in\{v_{0}+1,\ldots,v\}\). Here \(r_{0}+v_{0}\geq 1\). Then
\[E_{1} = \mathbb{E}\Bigg{[}(A_{ij}-p_{n}w_{ij})(A_{i^{\prime}j^{\prime}}-p_{n}w_{i^{\prime}j^{\prime}})\left(\prod_{l=1}^{r_{0}}\bar{A}_{ij_{l}}\right)\left(\prod_{l=r_{0}+1}^{r}\bar{A}_{ij_{l}}^{\lambda_{r;l}}\right)\] \[\times\left(\prod_{l=1}^{r_{0}}\bar{A}_{i^{\prime}j^{\prime}_{l}}\right)\left(\prod_{l=r_{0}+1}^{r}\bar{A}_{i^{\prime}j^{\prime}_{l}}^{\lambda_{r;l}}\right)\left(\prod_{m=1}^{v_{0}}\bar{A}_{ji_{m}}\right)\left(\prod_{m=v_{0}+1}^{v}\bar{A}_{ji_{m}}^{\gamma_{v;m}}\right)\] \[\times\left(\prod_{m=1}^{v_{0}}\bar{A}_{j^{\prime}i^{\prime}_{m}}\right)\left(\prod_{m=v_{0}+1}^{v}\bar{A}_{j^{\prime}i^{\prime}_{m}}^{\gamma_{v;m}}\right)\Bigg{]}.\]
We split the sum of (95) into two cases: \(\{i,j\}\neq\{i^{\prime},j^{\prime}\}\) and \(\{i,j\}=\{i^{\prime},j^{\prime}\}\).
Suppose \(\{i,j\}\neq\{i^{\prime},j^{\prime}\}\). Let \(v_{0}\geq 2\) and \(r_{0}\geq 2\). For \(1\leq m_{1}\leq v_{0}\), if \(\{j,i_{m_{1}}\}\neq\{i^{\prime},j^{\prime}_{l}\}\) for any \(l\), \(\{j,i_{m_{1}}\}\neq\{j^{\prime},i^{\prime}_{m}\}\) for any \(m\) and \(\{j,i_{m_{1}}\}\neq\{i^{\prime},j^{\prime}\}\), then \(A_{ji_{m_{1}}}\) is independent of \(A_{i^{\prime}j^{\prime}_{l}}\) for all \(l\), \(A_{j^{\prime}i^{\prime}_{m}}\) for all \(m\) and \(A_{i^{\prime}j^{\prime}}\). Then
\[E_{1} = \mathbb{E}\Bigg{[}(A_{ij}-p_{n}w_{ij})(A_{i^{\prime}j^{\prime}}-p _{n}w_{i^{\prime}j^{\prime}})\left(\prod_{l=1}^{r_{0}}\bar{A}_{ij_{l}}\right) \left(\prod_{l=r_{0}+1}^{r}\bar{A}_{ij_{l}}^{\lambda_{r;l}}\right) \tag{96}\] \[\times\left(\prod_{l=1}^{r_{0}}\bar{A}_{i^{\prime}j^{\prime}_{l} }\right)\left(\prod_{l=r_{0}+1}^{r}\bar{A}_{i^{\prime}j^{\prime}_{l}}^{\lambda _{r;l}}\right)\left(\prod_{m=1,m\neq m_{1}}^{v_{0}}\bar{A}_{ji_{m}}\right) \left(\prod_{m=v_{0}+1}^{v}\bar{A}_{ji_{m}}^{\gamma_{v;m}}\right)\] \[\times\left(\prod_{m=1}^{v_{0}}\bar{A}_{j^{\prime}i^{\prime}_{m}} \right)\left(\prod_{m=v_{0}+1}^{v}\bar{A}_{j^{\prime}i^{\prime}_{m}}^{\gamma_ {v;m}}\right)\Bigg{]}\mathbb{E}\left[\bar{A}_{ji_{m_{1}}}\right]=0.\]
Hence, \(\{j,i_{m_{1}}\}=\{i^{\prime},j^{\prime}_{l}\}\) for some \(l\) or \(\{j,i_{m_{1}}\}=\{j^{\prime},i^{\prime}_{m}\}\) for some \(m\) or \(\{j,i_{m_{1}}\}=\{i^{\prime},j^{\prime}\}\). Similar results hold for \(\{i,j\}\), \(\{i,j_{l}\}\) with \(1\leq l\leq r_{0}\), \(\{j^{\prime},i^{\prime}_{m}\}\) with \(1\leq m\leq v_{0}\), \(\{i^{\prime},j^{\prime}\}\), and \(\{i^{\prime},j^{\prime}_{l}\}\) with \(1\leq l\leq r_{0}\). (i) Suppose \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}_{l_{1}}\}\). Then \(\{j,i_{2}\}=\{i^{\prime},j^{\prime}_{l_{2}}\}\) or \(\{j,i_{2}\}=\{i^{\prime},j^{\prime}\}\). If \(\{j,i_{2}\}=\{i^{\prime},j^{\prime}_{l_{2}}\}\), then \(\{i,j\}=\{i^{\prime},j^{\prime}_{l_{3}}\}\). Since \(j^{\prime}_{l}\neq j^{\prime}\) for all \(l\), then either \(\{j^{\prime},i^{\prime}_{1}\}\neq\{j^{\prime}_{l_{3}},j_{l}\}\) for all \(l\) or \(\{j^{\prime},i^{\prime}_{2}\}\neq\{j^{\prime}_{l_{3}},j_{l}\}\) for all \(l\). By a similar argument as in (96), \(E_{1}=0\). The same result holds if \(\{j,i_{2}\}=\{i^{\prime},j^{\prime}\}\). (ii) Suppose \(\{j,i_{1}\}=\{j^{\prime},i^{\prime}_{m_{1}}\}\) or \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}\}\). By a similar argument as in the case \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}_{l_{1}}\}\), it is easy to get \(E_{1}=0\).
Let \(v_{0}\geq 2,r_{0}\leq 1\). Given \(1\leq m\leq v_{0}\), if \(\{j,i_{m}\}=\{i^{\prime},j^{\prime}_{l}\}\) for some \(l\), then \(E_{1}=0\). Hence \(\{j,i_{m}\}=\{j^{\prime},i^{\prime}_{l}\}\) for some \(l\). Then \(j=j^{\prime}\) and
\[\{\{j,i\},\{j,i_{1}\},\ldots,\{j,i_{v_{0}}\}\}\subset\{\{j^{\prime},i^{\prime} \},\{j^{\prime},i^{\prime}_{1}\},\ldots,\{j^{\prime},i^{\prime}_{v}\}\}. \tag{97}\]
Similarly,
\[\{\{j^{\prime},i^{\prime}\},\{j^{\prime},i^{\prime}_{1}\},\ldots,\{j^{\prime},i^{ \prime}_{v_{0}}\}\}\subset\{\{j,i\},\{j,i_{1}\},\ldots,\{j,i_{v}\}\}. \tag{98}\]
Without loss of generality, let \(i=i^{\prime}_{1}\), \(i_{1}=i^{\prime}\), \(i_{2}=i^{\prime}_{2}\),..., \(i_{v_{1}}=i^{\prime}_{v_{1}}\) for \(v_{0}\leq v_{1}\leq v\). There are at most \(n^{2(r+v)+2-v_{1}}\) choices of these indices. If \(r_{0}=0\), \(E_{1}\) is bounded by \(O\left(p_{n}^{2(r+v)+1-v_{1}}\right)\). Then the sum of (95) over these indices is bounded by
\[n(np_{n})^{2(r+v)+1-v_{1}}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{99}\]
If \(r_{0}=1\), then \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}_{1}\}\). In this case, the sum of (95) over these indices is bounded by
\[(np_{n})^{2(r+v)-v_{1}}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{100}\]
Let \(v_{0}=r_{0}=1\). In this case, \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}_{l}\}\) for some \(l\) or \(\{j,i_{1}\}=\{j^{\prime},i^{\prime}_{m}\}\) for some \(m\) or \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}\}\). Let \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}_{l_{1}}\}\). If \(j=i^{\prime}\), then \(\{i,j\}=\{i^{\prime},j^{\prime}_{l_{2}}\}\). In this case, \(\{i^{\prime},j^{\prime}\}=\{j,i_{m_{1}}\}\) for some \(m_{1}\geq 2\), \(\{i,j_{1}\}=\{j^{\prime},i^{\prime}_{1}\}\) and \(l_{1}=1\). Otherwise \(E_{1}=0\). There are at most \(n^{2(r+v)-2}\) choices of these indices. In this case, the sum of (95) over these indices is bounded by
\[(np_{n})^{2(r+v)-2}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{101}\]
If \(j=j^{\prime}_{1}\), then \(\{i,j\}=\{j^{\prime},i^{\prime}_{1}\}\), \(\{i,j_{1}\}=\{i^{\prime},j^{\prime}\}\) and \(l_{1}=1\). Otherwise \(E_{1}=0\). There are at most \(n^{2(r+v)-1}\) choices of these indices. In this case, the sum of (95) over these indices is bounded by
\[(np_{n})^{2(r+v)-1}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{102}\]
The cases \(\{j,i_{1}\}=\{j^{\prime},i^{\prime}_{m}\}\) for some \(m\) and \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}\}\) can be similarly bounded as in (100) and (102).
Let \(v_{0}=1\) and \(r_{0}=0\). In this case, \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}_{l}\}\) for some \(l\) or \(\{j,i_{1}\}=\{j^{\prime},i^{\prime}_{m}\}\) for some \(m\) or \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}\}\). Let \(\{j,i_{1}\}=\{j^{\prime},i^{\prime}_{m_{1}}\}\). If \(m_{1}=1\), then \(\{i,j\}=\{j^{\prime},i^{\prime}_{m_{2}}\}\) and \(\{i^{\prime},j^{\prime}\}=\{j,i_{m_{3}}\}\). There are at
most \(n^{2(r+v)-1}\) choices of these indices. Then the sum of (95) over these indices is bounded by (102). If \(m_{1}\geq 2\) and \(m_{3}=1\), then the sum of (95) over these indices is bounded by (102). If \(m_{1}\geq 2\) and \(m_{3}\geq 2\), then \(\{j^{\prime},i^{\prime}_{1}\}=\{j,i_{m_{3}}\}\). There are at most \(n^{2(r+v)-1}\) choices of these indices. Then the sum of (95) over these indices is bounded by
\[n(np_{n})^{2(r+v)-2}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{103}\]
The case \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}_{l}\}\) for some \(l\) and \(\{j,i_{1}\}=\{i^{\prime},j^{\prime}\}\) can be similarly studied.
Suppose \(\{i,j\}=\{i^{\prime},j^{\prime}\}\). Consider \(i=i^{\prime}\) and \(j=j^{\prime}\) first. For any \(m\) with \(1\leq m\leq v_{0}\), if \(i_{m}\neq i^{\prime}_{m_{1}}\) for all \(1\leq m_{1}\leq v\), then the expectation \(E_{1}=0\). Hence, \(\{i_{1},\ldots,i_{v_{0}}\}\subset\{i^{\prime}_{1},\ldots,i^{\prime}_{v}\}\). Similarly, \(\{i^{\prime}_{1},\ldots,i^{\prime}_{v_{0}}\}\subset\{i_{1},\ldots,i_{v}\}\), \(\{j^{\prime}_{1},\ldots,j^{\prime}_{r_{0}}\}\subset\{j_{1},\ldots,j_{r}\}\), \(\{j_{1},\ldots,j_{r_{0}}\}\subset\{j^{\prime}_{1},\ldots,j^{\prime}_{r}\}\). Without loss of generality, let
\[\{i^{\prime}_{1},\ldots,i^{\prime}_{v_{1}}\}=\{i_{1},\ldots,i_{v_{1}}\},\hskip 14.226378pt\{i^{\prime}_{v_{1}+1},\ldots,i^{\prime}_{v}\}\cap\{i_{v_{1}+1},\ldots,i_{v}\}=\emptyset,\]
\[\{j^{\prime}_{1},\ldots,j^{\prime}_{r_{1}}\}=\{j_{1},\ldots,j_{r_{1}}\}, \hskip 14.226378pt\{j^{\prime}_{r_{1}+1},\ldots,j^{\prime}_{r}\}\cap\{j_{r_{1} +1},\ldots,j_{r}\}=\emptyset,\]
where \(v_{0}\leq v_{1}\leq v\), \(r_{0}\leq r_{1}\leq r\). There are at most \(n^{2+r_{1}+2(r-r_{1})+v_{1}+2(v-v_{1})}\) indices. Let \(\sigma_{1}\) be a one-to-one map from \(\{i_{1},\ldots,i_{v_{1}}\}\) to \(\{i^{\prime}_{1},\ldots,i^{\prime}_{v_{1}}\}\) and \(\sigma_{2}\) be a one-to-one map from \(\{j_{1},\ldots,j_{r_{1}}\}\) to \(\{j^{\prime}_{1},\ldots,j^{\prime}_{r_{1}}\}\). Then
\[E_{1} = \mathbb{E}\Bigg{[}\bar{A}_{ij}^{2}\left(\prod_{l=1}^{r_{1}}\bar {A}_{ij_{l}}^{\lambda_{r;l}+\lambda_{r;\sigma_{2}(l)}}\right)\left(\prod_{l=r _{1}+1}^{r}\bar{A}_{ij_{l}}^{\lambda_{r;l}}\bar{A}_{ij^{\prime}_{l}}^{\lambda_ {r;l}}\right) \tag{104}\] \[\times\left(\prod_{m=1}^{v_{1}}\bar{A}_{ji_{m}}^{\gamma_{v;m}+ \gamma_{v;\sigma_{1}(m)}}\right)\left(\prod_{m=v_{1}+1}^{v}\bar{A}_{ji_{m}}^{ \gamma_{v;m}}\bar{A}_{ji^{\prime}_{m}}^{\gamma_{v;m}}\right)\Bigg{]}\] \[= \Theta\left(p_{n}^{1+r_{1}+2(r-r_{1})+v_{1}+2(v-v_{1})}\right).\]
Then the sum over indices \(\{i,j\}=\{i^{\prime},j^{\prime}\}\) in (95) is bounded by
\[n(np_{n})^{2(r+v)-(r_{1}+v_{1})+1}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}). \tag{105}\]
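This matches the general bookkeeping: writing \(K=r_{1}+2(r-r_{1})+v_{1}+2(v-v_{1})=2(r+v)-r_{1}-v_{1}\), there are at most \(n^{2+K}\) admissible index tuples and \(E_{1}=\Theta(p_{n}^{1+K})\) by (104), so the corresponding contribution to (95) is of order at most

\[n^{2+K}\,p_{n}^{1+K}\,f^{(s,t)}(np_{n},np_{n})^{2}=n(np_{n})^{2(r+v)-(r_{1}+v_{1})+1}f^{(s,t)}(np_{n},np_{n})^{2}.\]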
Similarly, (105) holds for the case \(i=j^{\prime}\) and \(j=i^{\prime}\) with \(\max\{r_{0},v_{0}\}\leq r_{1},v_{1}\leq\min\{r,v\}\).
Now we consider case (b). Suppose \(\lambda_{r;l}\geq 2\) for \(l\in\{1,\ldots,r\}\) and \(\gamma_{v;m}\geq 2\) for \(m\in\{1,\ldots,v\}\). In this case \(r\leq\frac{s}{2}\) and \(v\leq\frac{t}{2}\). If \(\{i,j\}\neq\{i^{\prime},j^{\prime}\}\), \(\{i,j\}\neq\{i^{\prime},j^{\prime}_{l}\}\) for all \(l\), \(\{i,j\}\neq\{j^{\prime},i^{\prime}_{m}\}\) for all \(m\), then \(E_{1}=0\).
Suppose \(\{i,j\}=\{i^{\prime},j^{\prime}\}\). There are two cases: \(i=i^{\prime}\) and \(j=j^{\prime}\) or \(i=j^{\prime}\) and \(j=i^{\prime}\). Let \(i=i^{\prime}\) and \(j=j^{\prime}\). Suppose \(|\{j_{1},\ldots,j_{r}\}\cap\{j^{\prime}_{1},\ldots,j^{\prime}_{r}\}|=r_{1}\) and \(|\{i_{1},\ldots,i_{v}\}\cap\{i^{\prime}_{1},\ldots,i^{\prime}_{v}\}|=v_{1}\). There are at most \(n^{2+r_{1}+2(r-r_{1})+v_{1}+2(v-v_{1})}\) possible indices. Without loss of generality, assume \(j_{l}=j^{\prime}_{l}\) for \(1\leq l\leq r_{1}\) and \(i_{l}=i^{\prime}_{l}\) for \(1\leq l\leq v_{1}\). Then
\[E_{1} = \mathbb{E}\Bigg{[}(A_{ij}-p_{n}w_{ij})^{2}\left(\prod_{l=1}^{r_{ 1}}\bar{A}_{ij_{l}}^{2\lambda_{r;l}}\right)\left(\prod_{l=r_{1}+1}^{r}\bar{A}_ {ij_{l}}^{\lambda_{r;l}}\right)\left(\prod_{l=r_{1}+1}^{r}\bar{A}_{ij^{\prime }_{l}}^{\lambda_{r;l}}\right) \tag{106}\] \[\times\left(\prod_{m=1}^{v_{1}}\bar{A}_{ji_{m}}^{2\gamma_{v;m}} \right)\left(\prod_{m=v_{1}+1}^{v}\bar{A}_{ji_{m}}^{\gamma_{v;m}}\right) \left(\prod_{m=v_{1}+1}^{v}\bar{A}_{ji^{\prime}_{m}}^{\gamma_{v;m}}\right) \Bigg{]}\] \[= O\left(p_{n}^{1+r_{1}+2(r-r_{1})+v_{1}+2(v-v_{1})}\right).\]
Then the sum over indices \(\{i,j\}=\{i^{\prime},j^{\prime}\}\) with \(i=i^{\prime}\) and \(j=j^{\prime}\) in (95) is bounded by
\[n(np_{n})^{2(r+v)-r_{1}-v_{1}+1}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}).\]
Let \(i=j^{\prime}\) and \(j=i^{\prime}\). Suppose \(|\{j_{1},\ldots,j_{r}\}\cap\{i^{\prime}_{1},\ldots,i^{\prime}_{v}\}|=r_{1}\) and \(|\{i_{1},\ldots,i_{v}\}\cap\{j^{\prime}_{1},\ldots,j^{\prime}_{r}\}|=v_{1}\), where \(0\leq r_{1}\leq\min\{r,v\}\) and \(0\leq v_{1}\leq\min\{r,v\}\). There are at most \(n^{2+r_{1}+(r-r_{1})+(r-v_{1})+v_{1}+(v-v_{1})+(v-r_{1})}\) such indices. Without loss of generality, let \(j_{l}=i^{\prime}_{l}\) for \(l\leq r_{1}\) and \(i_{l}=j^{\prime}_{l}\) for \(l\leq v_{1}\). Then
\[E_{1} = \mathbb{E}\Bigg{[}(A_{ij}-p_{n}w_{ij})^{2}\left(\prod_{l=1}^{r_{ 1}}\bar{A}_{ij_{l}}^{\lambda_{r;l}+\gamma_{v;l}}\right)\left(\prod_{l=r_{1}+1}^ {r}\bar{A}_{ij_{l}}^{\lambda_{r;l}}\right)\left(\prod_{l=v_{1}+1}^{r}\bar{A}_ {jj^{\prime}_{l}}^{\lambda_{r;l}}\right) \tag{107}\] \[\times\left(\prod_{m=1}^{v_{1}}\bar{A}_{ji_{m}}^{\gamma_{v;m}+ \lambda_{r;m}}\right)\left(\prod_{m=v_{1}+1}^{v}\bar{A}_{ji_{m}}^{\gamma_{v;m}} \right)\left(\prod_{m=r_{1}+1}^{v}\bar{A}_{ii^{\prime}_{m}}^{\gamma_{v;m}} \right)\Bigg{]}\] \[= \Theta\left(p_{n}^{1+r_{1}+(r-r_{1})+(r-v_{1})+v_{1}+(v-v_{1})+( v-r_{1})}\right).\]
Then the sum over indices \(\{i,j\}=\{i^{\prime},j^{\prime}\}\) with \(i=j^{\prime}\) and \(j=i^{\prime}\) in (95) is bounded by
\[n(np_{n})^{2(r+v)-r_{1}-v_{1}+1}f^{(s,t)}(np_{n},np_{n})^{2}=o(\sigma_{n}^{2}).\]
Suppose \(\{i,j\}=\{i^{\prime},j^{\prime}_{l_{1}}\}\) for some \(l_{1}\). If \(i=j^{\prime}_{l_{1}}\) and \(j=i^{\prime}\), then \(j^{\prime}=i_{m_{1}}\) for some \(m_{1}\), otherwise \(E_{1}=0\). There are at most \(n^{3+r+(r-1)+v+(v-1)}\) possible nodes. In this case,
\[E_{1} = \mathbb{E}\Bigg{[}(A_{jj^{\prime}_{l_{1}}}-p_{n}w_{jj^{\prime}_{l _{1}}})^{1+\lambda_{r;l_{1}}}\left(\prod_{l=1}^{r}\bar{A}^{\lambda_{r;l}}_{ij_ {l}}\right)\left(\prod_{l=1,l\neq l_{1}}^{r}\bar{A}^{\lambda_{r;l}}_{jj^{ \prime}_{l}}\right) \tag{108}\] \[\times(A_{ji_{m_{1}}}-p_{n}w_{ji_{m_{1}}})^{1+\gamma_{v;m_{1}}} \left(\prod_{m=1,m\neq m_{1}}^{v}\bar{A}^{\gamma_{v;m}}_{ji_{m}}\right)\left( \prod_{m=1}^{v}\bar{A}^{\gamma_{v;m}}_{i_{m_{1}}i^{\prime}_{m}}\right)\Bigg{]}\] \[= \Theta\left(p_{n}^{2+r+(r-1)+v+(v-1)}\right).\]
Then the sum over indices \(\{i,j\}=\{i^{\prime},j^{\prime}_{l_{1}}\}\) in (95) is bounded by
\[O\left(n(np_{n})^{2(r+v)}f^{(s,t)}(np_{n},np_{n})^{2}\right)=o(\sigma_{n}^{2}). \tag{109}\]
Similarly, (109) holds for \(i=i^{\prime}\) and \(j=j^{\prime}_{l_{1}}\) or \(\{i,j\}=\{j^{\prime},i^{\prime}_{m_{1}}\}\) for some \(m_{1}\).
#### 4.2.4 Bound the last term of (26)
Now we prove the last term of (26) converges in probability to zero. To this end, we will show that
\[\mathbb{E}\left[\left|\sum_{i\neq j}R_{ij}A_{ij}\right|\right]=o\left(\sigma_{ n}\right). \tag{110}\]
Let \(s,t\) satisfy \(s+t=k_{0}\). By \((C3)\) of Assumption 1, \(|f^{(s,t)}(x,y)|\) is monotone in \(x\) and \(y\). There are four cases: \(|f^{(s,t)}(x,y)|\) is decreasing in \(x\) and \(y\), \(|f^{(s,t)}(x,y)|\) is increasing in \(x\) and \(y\), \(|f^{(s,t)}(x,y)|\) is increasing in \(x\) and decreasing in \(y\), \(|f^{(s,t)}(x,y)|\) is decreasing in \(x\) and increasing in \(y\).
Suppose \(|f^{(s,t)}(x,y)|\) is decreasing in \(x\) and \(y\). Let \(\delta_{n}=\left[\log(np_{n})\right]^{-2}\).
Then
\[\mathbb{E}[|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j (i)}-w_{j(i)}|^{t}] \tag{111}\] \[= \mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}| ^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},X_{j(i)}\geq\delta_{n}w_{ j(i)}]\Bigg{]}\] \[+\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j) }|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}<\delta_{n}w_{i(j)},X_{j(i)}\geq\delta_{n}w_{j(i) }]\Bigg{]}\] \[+\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i( j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}<\delta_{n}w_{i(j)},X_{j(i)}<\delta_{n}w_{j(i)} ]\Bigg{]}.\]
By the Cauchy-Schwarz inequality and (52), we have
\[\mathbb{E}\left[|d_{i(j)}-w_{i(j)}|^{s}\right] \leq \sqrt{\mathbb{E}\left[(d_{i(j)}-w_{i(j)})^{2s}\right]}\] \[= \sqrt{\sum_{r=1}^{2s}\sum_{\begin{subarray}{c}j_{1},j_{2},\ldots,j_{r}\notin\{i,j\}\\ j_{1}\neq j_{2}\neq\ldots\neq j_{r}\end{subarray}}\prod_{l=1}^{r}\mathbb{E}\left[(A_{ij_{l}}-p_{n}w_{ij_{l}})^{\lambda_{r;l}}\right]}\] \[= O\left(\sqrt{\sum_{r=1}^{s}(np_{n})^{r}}\right)=O\left((np_{n})^{\frac{s}{2}}\right).\]
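The last equality holds because the term \(r=s\) dominates: since \(np_{n}\to\infty\),

\[\sqrt{\sum_{r=1}^{s}(np_{n})^{r}}=\sqrt{(np_{n})^{s}(1+o(1))}=O\big((np_{n})^{\frac{s}{2}}\big).\]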
Then
\[\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{ s}|d_{j(i)}-w_{j(i)}|^{t} \tag{112}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},X_{j(i)}\geq\delta_{n}w_{j (i)}]\Bigg{]}\] \[\leq \mathbb{E}\Bigg{[}|f^{(s,t)}(\delta_{n}w_{i(j)},\delta_{n}w_{j(i)} )|d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},X_{j(i)}\geq\delta_{n}w_{j (i)}]\Bigg{]}\] \[\leq |f^{(s,t)}(\delta_{n}w_{i(j)},\delta_{n}w_{j(i)})|\mathbb{E} \left[|d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\right]\] \[= |f^{(s,t)}(\delta_{n}w_{i(j)},\delta_{n}w_{j(i)})|\mathbb{E} \left[|d_{i(j)}-w_{i(j)}|^{s}\right]\mathbb{E}\left[|d_{j(i)}-w_{j(i)}|^{t}\right]\] \[\leq (np_{n})^{\frac{k_{0}}{2}}|f^{(s,t)}(\delta_{n}w_{i(j)},\delta_{ n}w_{j(i)})|.\]
On the event \(\{X_{i(j)}<\delta_{n}w_{i(j)}\}\), if \(X_{i(j)}<d_{i(j)}\), then \(X_{i(j)}\) cannot be between \(d_{i(j)}\) and \(w_{i(j)}\). Hence \(X_{i(j)}<\delta_{n}w_{i(j)}\) implies \(d_{i(j)}\leq X_{i(j)}<\delta_{n}w_{i(j)}\). Similar result holds for \(X_{j(i)}\). By definition, \(d_{i(j)}\) and \(d_{j(i)}\) are independent if \(i\neq j\). By Lemma 1 and \((C2)\) of Assumption 1, one has
\[\mathbb{E}\Big{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t} \tag{113}\] \[\times I[X_{i(j)}<\delta_{n}w_{i(j)},X_{j(i)}<\delta_{n}w_{j(i)}]\Big{]}\] \[\leq \mathbb{E}\Big{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[d_{i(j)}\leq X_{i(j)}<\delta_{n}w_{i(j)},d_{j(i)}\leq X_{j(i)}<\delta_{n}w_{j(i)}]\Big{]}\] \[\leq \mathbb{E}\Big{[}|f^{(s,t)}(d_{i(j)},d_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[d_{i(j)}<\delta_{n}w_{i(j)},d_{j(i)}<\delta_{n}w_{j(i)}]\Big{]}\] \[= \sum_{k=1}^{\delta_{n}w_{i(j)}}\sum_{l=1}^{\delta_{n}w_{j(i)}}|f^{(s,t)}(k,l)||k-w_{i(j)}|^{s}|l-w_{j(i)}|^{t}\mathbb{P}(d_{i(j)}=k)\mathbb{P}(d_{j(i)}=l)\] \[= O\left((\delta_{n}np_{n})^{M}\exp(-2np_{n}\beta(1+o(1)))\right)\] \[= \exp(-2np_{n}\beta(1+o(1))),\]
where \(M\) is a positive constant.
Similarly, the second term of (111) is bounded as follows.
\[\mathbb{E}\Big{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t} \tag{114}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},X_{j(i)}<\delta_{n}w_{j(i)}]\Big{]}\] \[\leq \mathbb{E}\Big{[}|f^{(s,t)}(\delta_{n}w_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},d_{j(i)}\leq X_{j(i)}<\delta_{n}w_{j(i)}]\Big{]}\] \[\leq \mathbb{E}\Big{[}|f^{(s,t)}(\delta_{n}w_{i(j)},d_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[d_{j(i)}<\delta_{n}w_{j(i)}]\Big{]}\] \[= \sum_{k=1}^{\delta_{n}w_{j(i)}}|f^{(s,t)}(\delta_{n}w_{i(j)},k)||k-w_{j(i)}|^{t}\mathbb{P}(d_{j(i)}=k)\mathbb{E}[|d_{i(j)}-w_{i(j)}|^{s}]\] \[= O\left((\delta_{n}np_{n})^{M}\exp(-np_{n}\beta(1+o(1)))\right)\] \[= \exp(-np_{n}\beta(1+o(1))),\]
The third term of (111) can be similarly bounded.
Combining (111)-(114) yields
\[\mathbb{E}\left[|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\right]\] \[= O\left((np_{n})^{\frac{k_{0}}{2}}|f^{(s,t)}(\delta_{n}w_{i(j)},\delta_{n}w_{j(i)})|+\exp(-np_{n}\beta(1+o(1)))\right).\]
By \((C4)\) of Assumption 1, we have
\[\mathbb{E}\left[\left|\sum_{i\neq j}R_{ij}A_{ij}\right|\right]=O\left(n(np_{n} )^{\frac{k_{0}}{2}+1}|f^{(s,t)}(\delta_{n}np_{n},\delta_{n}np_{n})|\right)=o \left(\sigma_{n}\right).\]
Then (110) holds.
Suppose \(|f^{(s,t)}(x,y)|\) is increasing in \(x\) and \(y\). Let \(M\) be a large positive constant. Then
\[\mathbb{E}\left[|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\right] \tag{115}\]
\[= \mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{ s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq Mw_{i(j)},X_{j(i)}\geq Mw_{j(i)}]\Bigg{]}\] \[+\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j) }|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq Mw_{i(j)},X_{j(i)}<Mw_{j(i)}]\Bigg{]}\] \[+\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j) }|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}<Mw_{i(j)},X_{j(i)}\geq Mw_{j(i)}]\Bigg{]}\] \[+\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j )}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}<Mw_{i(j)},X_{j(i)}<Mw_{j(i)}]\Bigg{]}.\]
On the event \(\{X_{i(j)}\geq Mw_{i(j)}\}\), if \(X_{i(j)}>d_{i(j)}\), then \(X_{i(j)}\) cannot be between \(d_{i(j)}\) and \(w_{i(j)}\). Hence \(X_{i(j)}\geq Mw_{i(j)}\) implies \(Mw_{i(j)}\leq X_{i(j)}\leq d_{i(j)}\). Similar result holds for \(X_{j(i)}\). By definition, \(d_{i(j)}\) and \(d_{j(i)}\) are independent if \(i\neq j\). Suppose \(np_{n}=\omega(\log n)\). By Lemma 1 and \((C2)\) of Assumption 1, one has
\[\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j) }|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq Mw_{i(j)},X_{j(i)}\geq Mw_{j(i)}]\Bigg{]}\] \[\leq \mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j) }|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[d_{i(j)}\geq X_{i(j)}\geq Mw_{i(j)},d_{j(i)}\geq X_{j(i) }\geq Mw_{j(i)}]\Bigg{]}\]
\[\leq \mathbb{E}\Bigg{[}|f^{(s,t)}(d_{i(j)},d_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[d_{i(j)}-1\geq Mw_{i(j)}-1,d_{j(i)}-1\geq Mw_{j(i)}-1]\Bigg{]}\] \[= \sum_{k=Mw_{i(j)}-1}^{n-2}\sum_{l=Mw_{j(i)}-1}^{n-2}|f^{(s,t)}(k+1,l+1)||k+1-w_{i(j)}|^{s}|l+1-w_{j(i)}|^{t}\] \[\times\mathbb{P}(d_{i(j)}-1=k)\mathbb{P}(d_{j(i)}-1=l)\] \[= O\left(n^{C_{s,t,f}}e^{-np_{n}\beta(1+o(1))}\right)\] \[= O\left(e^{-np_{n}\beta(1+o(1))}\right), \tag{116}\]
where \(C_{s,t,f}\) is some constant dependent on \(s,t\) and \(f\). Similarly,
\[\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq Mw_{i(j)},X_{j(i)}<Mw_{j(i)}]\Bigg{]}\] \[\leq \mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[d_{i(j)}\geq X_{i(j)}\geq Mw_{i(j)},X_{j(i)}<Mw_{j(i)}]\Bigg{]}\] \[\leq \mathbb{E}\Big{[}|f^{(s,t)}(d_{i(j)},Mw_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[d_{i(j)}\geq Mw_{i(j)}]\Big{]}\] \[= \mathbb{E}|d_{j(i)}-w_{j(i)}|^{t}\sum_{k=Mw_{i(j)}-1}^{n-2}|f^{(s,t)}(k+1,Mw_{j(i)})||k+1-w_{i(j)}|^{s}\] \[\times\mathbb{P}(d_{i(j)}-1=k)\] \[= O\left(e^{-np_{n}\beta(1+o(1))}\right), \tag{117}\]
and

\[\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\]
\[\times I[X_{i(j)}<Mw_{i(j)},X_{j(i)}<Mw_{j(i)}]\] \[\leq \mathbb{E}\left[|f^{(s,t)}(Mw_{i(j)},Mw_{j(i)})||d_{i(j)}-w_{i(j)}| ^{s}|d_{j(i)}-w_{j(i)}|^{t}\right]\] \[= \mathbb{E}\left[|f^{(s,t)}(Mw_{i(j)},Mw_{j(i)})|\right]\mathbb{E} \left[|d_{i(j)}-w_{i(j)}|^{s}\right]\mathbb{E}\left[|d_{j(i)}-w_{j(i)}|^{t}\right]\] \[= O\left((np_{n})^{\frac{k_{0}}{2}}|f^{(s,t)}(Mnp_{n},Mnp_{n})| \right), \tag{118}\]
By (115), (116), (117) and (118), it follows that
\[\mathbb{E}\left[\left|\sum_{i\neq j}R_{ij}A_{ij}\right|\right]=O\left(n(np_{n} )^{\frac{k_{0}}{2}+1}|f^{(s,t)}(Mnp_{n},Mnp_{n})|\right)=o\left(\sigma_{n} \right).\]
Then (110) holds.
Suppose \(|f^{(s,t)}(x,y)|\) is decreasing in \(x\) and increasing in \(y\). Let \(\delta_{n}=\left[\log(np_{n})\right]^{-2}\) and \(M\) be a large positive constant. Then
\[\mathbb{E}\left[|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}| ^{s}|d_{j(i)}-w_{j(i)}|^{t}\right]\] \[= \mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j) }|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},X_{j(i)}\geq Mw_{j(i)}] \Bigg{]}\] \[+\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j )}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},X_{j(i)}<Mw_{j(i)}] \Bigg{]}\] \[+\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j )}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}<\delta_{n}w_{i(j)},X_{j(i)}\geq Mw_{j(i)}] \Bigg{]}\] \[+\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j )}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\]
\[\times I[X_{i(j)}<\delta_{n}w_{i(j)},X_{j(i)}<Mw_{j(i)}]\Bigg{]}. \tag{119}\]
The first term of (119) can be bounded by
\[\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t} \tag{120}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},X_{j(i)}\geq Mw_{j(i)}]\Bigg{]}\] \[\leq \mathbb{E}\Bigg{[}|f^{(s,t)}(\delta_{n}w_{i(j)},d_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},d_{j(i)}\geq Mw_{j(i)}]\Bigg{]}\] \[= (np_{n})^{\frac{s}{2}}\sum_{k=Mw_{j(i)}}^{n-2}|f^{(s,t)}(\delta_{n}w_{i(j)},k)||k-w_{j(i)}|^{t}\mathbb{P}(d_{j(i)}=k)\] \[= \exp(-np_{n}\beta(1+o(1))).\]
The second term of (119) can be bounded by
\[\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j) }|^{s}|d_{j(i)}-w_{j(i)}|^{t} \tag{121}\] \[\times I[X_{i(j)}\geq\delta_{n}w_{i(j)},X_{j(i)}<Mw_{j(i)}]\Bigg{]}\] \[\leq |f^{(s,t)}(\delta_{n}w_{i(j)},Mw_{j(i)})|(np_{n})^{\frac{k_{0}}{2}}.\]
The third term of (119) can be bounded by
\[\mathbb{E}\Bigg{[}|f^{(s,t)}(X_{i(j)},X_{j(i)})||d_{i(j)}-w_{i(j) }|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[X_{i(j)}<\delta_{n}w_{i(j)},X_{j(i)}\geq Mw_{j(i)}]\Bigg{]}\]
\[\leq \mathbb{E}\Bigg{[}|f^{(s,t)}(d_{i(j)},d_{j(i)})||d_{i(j)}-w_{i(j)}|^{s}|d_{j(i)}-w_{j(i)}|^{t}\] \[\times I[d_{i(j)}<\delta_{n}w_{i(j)},d_{j(i)}\geq Mw_{j(i)}]\Bigg{]}\] \[= \sum_{k=0}^{\delta_{n}w_{i(j)}}\sum_{l=Mw_{j(i)}}^{n-2}|f^{(s,t)}(k,l)||k-w_{i(j)}|^{s}|l-w_{j(i)}|^{t}\] \[\times\mathbb{P}(d_{i(j)}=k)\mathbb{P}(d_{j(i)}=l)\] \[= \exp(-np_{n}\beta(1+o(1))). \tag{122}\]
By (119)-(122), (110) holds.
The case that \(|f^{(s,t)}(x,y)|\) is increasing in \(x\) and decreasing in \(y\) can be handled similarly, and we omit the details. The proof is complete.
### Proof of Theorem 2 in Subsection 3.1
To prove Theorem 2 in Subsection 3.1, we only need to derive the asymptotic distribution of the Randić index of the Erdős–Rényi random graph \(\mathcal{G}_{n}(\alpha)\), that is, the case \(\tau=-\frac{1}{2}\). Recall that for the Erdős–Rényi random graph \(\mathcal{G}_{n}(\alpha)\), \(w_{i(j)}=1+(n-2)p_{n}\). Then
\[f_{x}(w_{i(j)},w_{j(i)})=f_{y}(w_{i(j)},w_{j(i)})=-\frac{1}{2(1+(n-2)p_{n})^{2}},\]

\[f_{xx}(w_{i(j)},w_{j(i)})=f_{yy}(w_{i(j)},w_{j(i)})=\frac{3}{4(1+(n-2)p_{n})^{3}},\]

\[f_{xy}(w_{i(j)},w_{j(i)})=\frac{1}{4(1+(n-2)p_{n})^{3}},\]
\[|f^{(s,t)}(np_{n},np_{n})|=O\left(\frac{1}{(np_{n})^{1+(s+t)}}\right).\]
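These estimates follow from the explicit derivatives of \(f(x,y)=(xy)^{-1/2}\) in the Randić case: for all integers \(s,t\geq 0\),

\[f^{(s,t)}(x,y)=c_{s,t}\,x^{-\frac{1}{2}-s}\,y^{-\frac{1}{2}-t},\qquad c_{s,t}=\prod_{a=0}^{s-1}\Big(-\frac{1}{2}-a\Big)\prod_{b=0}^{t-1}\Big(-\frac{1}{2}-b\Big),\]

so that \(|f^{(s,t)}(np_{n},np_{n})|=|c_{s,t}|\,(np_{n})^{-(1+s+t)}\).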
Let \(k_{0}=\max\left\{\lfloor 2+\frac{1}{1-\alpha}\rfloor+1,3\right\}\). By (25) and the proof of Theorem 1, we have
\[\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}] = \frac{1}{2}\sum_{i\neq j}(M_{ij}A_{ij}-\mathbb{E}[M_{ij}A_{ij}])+ \frac{1}{2}\sum_{i\neq j}(S_{ij}A_{ij}-\mathbb{E}[S_{ij}A_{ij}])\] \[+O_{P}\left(\sqrt{\frac{n}{(np_{n})^{3}}}\right).\]
By the calculations in Section 3.1, \(a_{ij}=\Theta\left(\frac{1}{(np_{n})^{2}}\right)\) for \(\tau=-\frac{1}{2}\). Then
\[Var\left(\sum_{i\neq j}a_{ij}(A_{ij}-p_{n}w_{ij})\right)=O\left(\frac{n}{(np_{ n})^{3}}\right).\]
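This follows from the independence of the edge variables: since \(a_{ij}=\Theta((np_{n})^{-2})\) and \(Var(A_{ij})=p_{n}w_{ij}(1-p_{n}w_{ij})=O(p_{n})\),

\[Var\left(\sum_{i\neq j}a_{ij}(A_{ij}-p_{n}w_{ij})\right)=O\left(\sum_{i\neq j}a_{ij}^{2}\,Var(A_{ij})\right)=O\left(n^{2}\cdot\frac{p_{n}}{(np_{n})^{4}}\right)=O\left(\frac{n}{(np_{n})^{3}}\right).\]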
By equations (27)- (32), we have
\[\frac{1}{2}\sum_{i\neq j}\left(M_{ij}A_{ij}-\mathbb{E}[M_{ij}A_{ij}]\right)\] \[= -\frac{\sum_{i\neq j\neq l}(A_{il}-p_{n})(A_{ij}-p_{n})}{2(1+(n-2)p_{n})^{2}}+O_{P}\left(\sqrt{\frac{n}{(np_{n})^{3}}}\right).\]
By equations (33)-(47), we have
\[\frac{1}{2}\sum_{i\neq j}\left(S_{ij}A_{ij}-\mathbb{E}[S_{ij}A_{ij}]\right)\] \[= \frac{1}{4}\sum_{\begin{subarray}{c}i\neq j,s\neq t\\ s,t\notin\{i,j\}\end{subarray}}p_{n}\left(f_{xx}(w_{i(j)},w_{j(i)})+f_{yy}(w_{i(j)},w_{j(i)})\right)\] \[\times(A_{is}-p_{n})(A_{it}-p_{n})+O_{P}\left(\sqrt{\frac{n}{(np_{n})^{3}}}\right)\] \[= \frac{3(n-3)p_{n}}{8(1+(n-2)p_{n})^{3}}\sum_{s\neq t\neq i}(A_{is}-p_{n})(A_{it}-p_{n})+O_{P}\left(\sqrt{\frac{n}{(np_{n})^{3}}}\right).\]
Hence, we get
\[\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}] = \mathcal{X}_{n}+O_{P}\left(\sqrt{\frac{n}{(np_{n})^{3}}}\right), \tag{123}\]
where
\[\mathcal{X}_{n}=-\frac{1}{8(1+(n-2)p_{n})^{2}}\sum_{s\neq t\neq i}(A_{is}-p_{n})(A_{it}-p_{n}).\]
Note that
\[-\mathcal{X}_{n} = \sum_{i<j<k}\frac{(A_{ij}-p_{n})(A_{ik}-p_{n})}{4(1+(n-2)p_{n})^{2}}+\sum_{i<j<k}\frac{(A_{ji}-p_{n})(A_{jk}-p_{n})}{4(1+(n-2)p_{n})^{2}}\] \[+\sum_{i<j<k}\frac{(A_{ki}-p_{n})(A_{kj}-p_{n})}{4(1+(n-2)p_{n})^{2}},\]
and the variance \(\sigma_{n}^{2}\) of \(\mathcal{X}_{n}\) is equal to
\[\sigma_{n}^{2}=\frac{n(n-1)(n-2)p_{n}^{2}(1-p_{n})^{2}}{32(1+(n-2)p_{n})^{4}}=\Theta\left(\frac{n}{(np_{n})^{2}}\right).\]
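Indeed, \(-\mathcal{X}_{n}\) is a sum of \(3\binom{n}{3}=\frac{n(n-1)(n-2)}{2}\) pairwise uncorrelated terms (two distinct terms share at most one edge, so their covariance vanishes), each of variance \(\frac{p_{n}^{2}(1-p_{n})^{2}}{16(1+(n-2)p_{n})^{4}}\), whence

\[\sigma_{n}^{2}=\frac{n(n-1)(n-2)}{2}\cdot\frac{p_{n}^{2}(1-p_{n})^{2}}{16(1+(n-2)p_{n})^{4}}=\frac{n(n-1)(n-2)p_{n}^{2}(1-p_{n})^{2}}{32(1+(n-2)p_{n})^{4}}.\]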
By Theorem 6.1 in [13], we have
\[\frac{\mathcal{X}_{n}}{\sigma_{n}}\Rightarrow\mathcal{N}(0,1),\]
from which and (123) it follows that
\[\frac{\mathcal{I}_{n}-\mathbb{E}[\mathcal{I}_{n}]}{\sigma_{n}}\Rightarrow \mathcal{N}(0,1).\]
Then the proof is complete.
_Acknowledgment:_ The author is grateful to the anonymous reviewers for valuable comments.
|
2303.05219 | Designing Dynamic Robot Characters to Improve Robot-Human Communications | Socially Assistive Robots navigate highly sensitive environments, which place high demands on safety and communication with users. The reasoning behind an SAR's actions must be transparent at any time to earn users' trust and acceptance. Although different communication modalities have been extensively studied, there is a lack of long-term studies investigating changes in users' communication needs over time. Considering two decades of research in Human-Robot Communication, we formulate the need to design dynamic robot personalities to unveil the full potential of SARs. | Carl Oechsner, Daniel Ullrich | 2023-03-09T12:44:29Z | http://arxiv.org/abs/2303.05219v1 | # Designing Dynamic Robot Characters to Improve Robot-Human Communications
###### Abstract.
Socially Assistive Robots navigate highly sensitive environments, which place high demands on safety and communication with users. The reasoning behind an SAR's actions must be transparent at any time to earn users' trust and acceptance. Although different communication modalities have been extensively studied, there is a lack of long-term studies investigating changes in users' communication needs over time. Considering two decades of research in Human-Robot Communication, we formulate the need to design dynamic robot personalities to unveil the full potential of SARs.
In sum, to clarify how trust and acceptance of an SAR can be achieved, we have to look at the combination of communication, personality, and relationship with the robot.
## 2. Related Work
In the following, we will briefly summarize prior research on robot communication and transparency, robot embodiment, and types of robot-human relationships.
### Reasoning and Communication
For users to accept and trust robotic systems, they must be able to understand the reasoning behind the robot's actions. Therefore, the robot must be able to communicate its internal state and intentions to the user (Sundar et al., 2017). Especially in collaborative tasks, non-verbal communication can remove the ambiguities of verbal exchange and increase task performance (Beng et al., 2017). Furthermore, interactive social cues can help to achieve more social user responses (Han et al., 2018), improve user experience (Sundar et al., 2017) and also help shape the perception of robot personality and emotion (Sundar et al., 2018).
While the right choice of words, voice, pitch, and volume is crucial for verbal interaction, other audible cues can be used to indicate and support the robot's reasoning (Sundar et al., 2017; Sundar et al., 2018; Sundar et al., 2018). In the following, we will briefly touch on further non-verbal communication modalities.
_Movement._ During collaboration, especially object handovers, humans communicate intent and timing mainly through posture and limb movement. Based on these observations, Strabala et al. (Strabala et al., 2018) derived crucial elements for robot handovers. The robot should have a "carrying posture" that is highly distinguishable from other poses, so the willingness to hand over an object is clearly recognizable even if the user is not currently focusing on the robot. In this pose, object and limbs are held close to the robot. To signal the handover intent, the robot should move the object towards the torso of the user, ideally holding it sideways and tilting it towards the user.
Even when the robot is inactive, the user needs to know whether it is operable. When the robot is not moving, there is no way to tell whether it is switched off or merely idle. Breazeal et al. (Beza et al., 2017) introduced an idle movement to their robot to signal "aliveness", and Terzioglu et al. (Terzioglu et al., 2018) found that a "breathing" motion of their robotic arm is suitable to display its internal state and intent. In general, motions that are "human-like" are reported to have a positive connotation and help users predict robot movements faster and more accurately compared to more direct or abstract movements (Sundar et al., 2018; Strabala et al., 2018).
_Gestures._ Even robots with few movable extremities can achieve interpretable gestures (see R2-D2), like nodding or shaking for approval and refusal. Imitating the user's head movements can lead to more acceptance (Han et al., 2018), and a "shrugging" gesture can signal to the user that an input could not be interpreted (Beng et al., 2017). Gestures accompanying verbal output by the robot can determine the level of its perceived extraversion (Beza et al., 2017) and therefore help shape its personality. While head gestures seem to have an engaging effect on users (Beza et al., 2017) and can convey emotional states like anger to the user (Beza et al., 2017), simply turning towards the user can signal attention (Sundar et al., 2018).
_Gaze._ Gaze cues help communicate the robot's internal state and intent (Terzioglu et al., 2018). In collaborative tasks, gaze can help establish grounding, disambiguation of spoken information, joint attention that signals understanding, and turn-taking (Han et al., 2018). Moon et al. (Moon et al., 2018) found that handovers are significantly faster if the robot gazes toward the anticipated handover location. One might think this only applies to robots with face-like or even just eye-like features (like, e.g., (Beng et al., 2017)). However, in their work, Terzioglu et al. (Terzioglu et al., 2018) demonstrate how gaze and posture cues can be easily achieved, even with a non-humanoid robot. In their studies, they used a robotic arm with a two-finger end effector and achieved sufficient cues by attaching a pair of glasses on top of it while pointing the fingers at the object in question.
### Personality and Embodiment
According to Deng et al. (Deng et al., 2017), the physical embodiment of robots "includes the internal and external mechanical structures, embedded sensors, and motors that allow them to interact with the world around them". Compared to virtual representations, embodied robots affect user performance and the perception of an interaction (Deng et al., 2018): embodiment increases compliance (Beng et al., 2017), social engagement and enjoyment (Beza et al., 2017; Sundar et al., 2018; Sundar et al., 2018), improves cognitive learning (Sundar et al., 2018) and motor skills (Beng et al., 2017), and increases user engagement in social (Beng et al., 2017), educational (Sundar et al., 2018) and clinical (Beng et al., 2017) contexts.
Designing a robotic assistant does not stop at the visual appearance, number and functionality of extremities, level of human-likeness, size, color, and shape. Considering the strong impact assistant embodiment has, profound thought has to go into the "how" of the robot's actions: How and when does the robot move? How fast should it move, and how close should it approach the human? Are the movements abstract or more human-like? How are movements linked to other communication channels?
Deng et al. (Deng et al., 2017) propose a process for designing robot embodiment that considers the desired context. They suggest starting from the task a robotic assistant is to fulfill. According to McGrath (McGrath, 2018), collaborative tasks can be classified by four task natures: Generate, Choose, Negotiate, and Execute. Based on the task, designers decide which relation (or role) the assistant should have to the user (see Section 2.3). The assistant's role falls between abstract (metaphorical) and literal (or realistic). The levels of abstractness, the task nature, and the chosen role later influence the level of autonomy and intelligence the users expect from the assistant.
### Relation and Habituation
An assistant's relation with the user falls between subordinate and superior (Sundar et al., 2018; Deng et al., 2017). A _subordinate_ role can signal that the assistant wants to learn from or be instructed by the user and is the least complex to implement. It can encourage empathy (Sundar et al., 2018) and self-efficacy (Beza et al., 2017; Deng et al., 2017). The _peer_ meets the user on equal footing. It can learn from and correct the users and successfully engage them in cognitive competition (Deng et al., 2017). The role which is most difficult to implement is the _superior_ (Beng et al., 2017). It can be used to increase user compliance and achieves higher reliability and competence (Deng et al., 2017) and therefore is suitable, e.g., for coaching purposes.
When the user first uses the system, there is, of course, a novelty effect. The user is yet to learn, understand, and trust the robot. In this phase, transparency has to be high, and parameters such as action speed have to be kept low. After a while, when the user has built up trust in the system and gained knowledge about its capabilities, reasoning
can be dialed down, and speeds can be increased. Nevertheless, these are not the only parameters that have to be adapted over time. A study by Salter et al. (Salter et al., 2017) has shown that the engaging functions of a robot can deteriorate over time, especially when used in the wild.
## 3. Creating Dynamic Robot Personalities
We learned that adequate communication is necessary for a robotic system and that the robot's personality can shape the quality of communication. A well-defined robot personality can help users understand the robot's reasoning. Mimicking human behaviors helps engagement and trust but does not have to be exact (Salter et al., 2017). Even abstract behavioral cues are sufficient to distinguish between different robot personalities (Krishnan et al., 2017). What personality should a robot have? Studies indicate that the preferred personality amplifies the user's traits (Salter et al., 2017). Extraverts, for example, who use more vivid and more frequent gestures during a conversation, also accept robots approaching closer during interaction (Selmer et al., 2017). However, other studies have found participants to prefer a character opposite to theirs (Selmer et al., 2017). In their study, Mileounis et al. (Mileounis et al., 2017) confirm that a robot's personality design directly affects its perceived intelligence and, more importantly, its social intelligence. Conveying social intelligence is crucial for users to believe the robot is capable of making reasonable decisions.
How should robot personalities be designed? Whittaker et al. (Whittaker et al., 2017) suggest using classic persona design (Masetty et al., 2017). Starting from a persona, designers can combine personality traits that make robot behavior more predictable. In general, users seem to react better to extravert robot personalities (Salter et al., 2017) and perceive them as more socially intelligent (Mileounis et al., 2017). However, as mentioned earlier, most robot studies are short-term and are conducted in controlled environments. Given a short time frame, an extraverted character leaves a better and more memorable impression than an introverted one could. We assume that a robot companion with exclusively extraverted behavior would be draining over a more extended period.
This is where the dynamic aspect of robot personality comes into play: We propose a more open personality emphasizing invitation and transparency for the first interaction phase ("getting-to-know"). After that, facets of extraversion, such as communication frequency and an excessive amount of gestures, should be toned down, along with explanations that serve transparency (e.g., explaining on every repetition of a task why the SAR reaches a particular stance in its decision-making process). The result should be a smooth transition from the novelty phase, shaped by amazement at the unknown functionalities, to the phase of habituation, in which the novelty effect is depleted and users value a robust, reliable system.
In our view, one big challenge that this approach of dynamic personality poses is again rooted in anthropomorphism: We, as humans, value consistent personalities in other humans and are deterred by personality fluctuations. Changes in behavior or personality traits can hint at impostors - a link we do not want in the context of trust-building. Therefore, personality changes must be fine-tuned to fly under the radar - otherwise, we would trade one drawback for another.
## 4. Conclusion
We argue that during the design of SARs, the following vital factors must be taken into account to achieve transparent and trustworthy SARs: a coherent robot personality that is reflected in coherent behavior, movement, and verbal and non-verbal communication, as well as the changing dynamics of the human-robot relationship. More long-term studies focusing on these changing requirements have to be conducted to derive best practices for how the user can implicitly or explicitly control the amount of reasoning communicated by the robot, which, given the rapid development of AI techniques over the last years, is now more likely to happen than ever.
###### Acknowledgements.
This project is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 425412993 and is part of Priority Program SPP2199 Scalable Interaction Paradigms for Pervasive Computing Environments.
|
2303.06710 | Decision Making for Human-in-the-loop Robotic Agents via
Uncertainty-Aware Reinforcement Learning | In a Human-in-the-Loop paradigm, a robotic agent is able to act mostly
autonomously in solving a task, but can request help from an external expert
when needed. However, knowing when to request such assistance is critical: too
few requests can lead to the robot making mistakes, but too many requests can
overload the expert. In this paper, we present a Reinforcement Learning based
approach to this problem, where a semi-autonomous agent asks for external
assistance when it has low confidence in the eventual success of the task. The
confidence level is computed by estimating the variance of the return from the
current state. We show that this estimate can be iteratively improved during
training using a Bellman-like recursion. On discrete navigation problems with
both fully- and partially-observable state information, we show that our method
makes effective use of a limited budget of expert calls at run-time, despite
having no access to the expert at training time. | Siddharth Singi, Zhanpeng He, Alvin Pan, Sandip Patel, Gunnar A. Sigurdsson, Robinson Piramuthu, Shuran Song, Matei Ciocarlie | 2023-03-12T17:22:54Z | http://arxiv.org/abs/2303.06710v2 | # Decision Making for Human-in-the-loop Robotic Agents via Uncertainty-Aware Reinforcement Learning
###### Abstract
In a Human-in-the-Loop paradigm, a robotic agent is able to act mostly autonomously in solving a task, but can request help from an external expert when needed. However, knowing when to request such assistance is critical: too few requests can lead to the robot making mistakes, but too many requests can overload the expert. In this paper, we present a Reinforcement Learning based approach to this problem, where a semi-autonomous agent asks for external assistance when it has low confidence in the eventual success of the task. The confidence level is computed by estimating the variance of the return from the current state. We show that this estimate can be iteratively improved during training using a Bellman-like recursion. On discrete navigation problems with both fully- and partially-observable state information, we show that our method makes effective use of a limited budget of expert calls at run-time, despite having no access to the expert at training time.
## I Introduction
Deep Reinforcement Learning (DRL) has shown great progress in learning decision-making for complex robotic skills [1, 2, 3, 4] using experiences collected by a robotic agent exploring an environment and receiving reward signals. Traditional RL agents use a policy learned during training in order to act autonomously at deployment. However, even a well-trained agent can encounter situations when deployed that are hard to make decisions for, for reasons such as partial state observability, uncertain dynamics, changes in state distributions between training and testing, etc.
The Human-in-the-Loop (HitL) paradigm has been developed in robotics precisely for situations where an agent can act autonomously most of the time, but would still benefit from receiving assistance from an available (tele-)operator, usually assumed to be a human expert. This paradigm is particularly powerful if the agent itself makes the decision of when to request assistance, thus freeing the operator from having to monitor task progress. However, this approach gives rise to a critical decision-making problem on the agent's part: when to request help? Too few such requests can lead to the robot making mistakes, but too many requests will overload the expert and lose the benefit of semi-autonomous operation.
In this paper, we propose a DRL-based method for a HitL agent to make the critical determination of when to request expert assistance. We posit that the best moment to request such assistance is when the agent is highly uncertain in the successful outcome of the task. From an RL perspective, _we relate this uncertainty to the variance of the return from the current state, as perceived by the agent_. Numerous RL algorithms provide methods for the agent to estimate the expected return from a given state. We show that similar methods can be used to estimate the variance of return as well during the training process. At deployment time, the agent can then request expert assistance when its estimate of the return variance from the current state falls below a given threshold. We dub our method HULA, for Human-in-the-loop Uncertainty-aware Learning Agent, and illustrate its operation in Fig. 1.
Critically, HULA does not need to make any calls to the expert during training. This stands in contrast to a standard RL approach, where an agent could learn how to use an expert simply by making numerous such calls at train time. Nevertheless, our method is able to make effective use of a limited budget of expert calls in order to improve task performance. We summarize our key contributions as follows:
* To the best of our knowledge, we are the first to propose a method for an HitL RL agent to learn how to effectively budget its interactions with a human expert at deployment time, without needing any expert calls during training.
Fig. 1: **An illustration of HULA, the method we propose in this paper. In a partially-observable environment, an agent without the help of an expert (A) cannot localize itself accurately due to partial observability, goes down the wrong passage, and fails to reach the target. A HULA agent (B) decides to request assistance from an available external expert in the states marked with a red E and achieves the goal (denoted by a green star). In both cases, the agent can only observe a 5x5 grid around its current location; shaded areas represent unobserved regions throughout the navigation.**
* We show that the variance of return, which can be estimated during training using Bellman-like equations, is an effective measure for agent uncertainty and can be used at deployment to determine when to request assistance.
* Our experiments on discrete navigation problems with both fully- and partially-observable state information show that our method is as effective or better in managing its budget of expert calls compared to a standard learning approach that also makes thousands of expert calls during training.
## II Related Work
Our work extends standard reinforcement learning algorithms to human-in-the-loop policies that solve robotics tasks with the help of humans. Researchers have investigated how to learn such policies. For example, Arakawa et al. propose DQN-TAMER [5], a method that incorporates a human observer model via real-time human feedback during training. Expected Local Improvement (ELI) [6] trains a state selector that suggests states in which new actions should be queried from human experts. PAINT [7] learns a classifier to identify irreversible states and queries the expert in case entering such states cannot be avoided. Hug-DRL [8] leverages a control transfer mechanism between humans and automation that corrects the agent's unreasonable actions during training through human intervention, and has demonstrated significant potential in autonomous driving applications. While these methods exploit human intervention during training, our work relies on the internal uncertainty of the agent and only uses an expert at test time.
HULA explicitly estimates the uncertainty of an RL agent and uses it to make decisions about requesting an expert's assistance. Prior work has explored representing the uncertainty of outcomes during RL training [9, 10, 11]. For instance, Kahn et al. use the variance of the dynamics model to represent the uncertainty of collision to avoid damage from high-speed collisions [12]. Ensemble Quantile Networks (EQN) [13] estimate both aleatoric and epistemic uncertainties via the combination of implicit quantile networks (IQN) [14] and randomized prior functions (RPF) [15]. These works use the uncertainty of an agent to perform task-specific decision-making, while our work differs in using it to collaborate with a human. Moreover, our work represents an agent's uncertainty using the variance of returns, which is a direct indicator of task performance.
The closest work to ours is RCMP, which learns a HitL policy that queries the expert when the epistemic uncertainty, estimated by the variance of multiple value functions, is high [16]. However, RCMP requires expert queries during training, whereas our work learns an uncertainty model based on the variance of return and does not query an expert at train time.
## III Method
Consider a problem for an autonomous agent formulated as a standard Markov decision process (MDP). A general MDP is defined as a tuple \((S,A,r,p)\), where \(S\) represents the state space, \(A\) represents the action space, \(r(s,a)\) is the reward function that evaluates the immediate reward of action \(a\in A\) in a state \(s\in S\), and \(p\) represents the transition distribution \(p(s_{t+1}|s_{t},a_{t})\). The goal of solving an MDP is to learn a policy \(\pi(a|s)\) that maximizes the expected return \(\mathbb{E}_{\pi}[R]\), where \(R=\sum\gamma^{t}r(s_{t},a_{t})\) and \(\gamma\) is a discount factor.
Going beyond the standard MDP formulation, we now assume the availability of an expert that can give instructions to the robotic agent. When queried from a given state \(s_{t}\), this expert can directly provide an action \(a_{exp}(s_{t})\) for the robot to execute. We assume that the expert always provides high-quality advice, i.e. a policy of always following expert-provided actions from all states will achieve a satisfactorily high return on the MDP problem above. However, such a policy would be impractical: we assume that the expert has limited bandwidth or availability, thus it is desirable for the agent to balance the goal of achieving high returns with the goal of not overloading the expert with requests. Such a scenario is typical in HitL robotics applications.
Our method aims to determine when the agent should request expert assistance while making the most of a limited number of such calls available during deployment. Furthermore, we would like to limit (or eliminate) the number of calls made to the expert during the training phase.
To achieve this goal, our method leverages the uncertainty of the outcomes of the agent during exploration. Specifically, we use the variance of the return from a given state as a measure of uncertainty. Intuitively, a state with high variance of the return is one from which the agent, acting alone, could achieve a range of outcomes, ranging from very poor to very effective. We posit that these are the best states in which to request assistance. In contrast, states where the variance is low are those in which the agent is certain of the outcome of the task, and there are fewer ways in which external assistance can be of help.
Most RL algorithms work by estimating the expected return from a given state; this estimate is continuously refined during training time. We modify this training procedure to also maintain and refine an estimate of \(\texttt{Var}_{\pi}(R)\), the variance of the returns under the learned policy, as an indicator of the agent's uncertainty. At test time, our agent can use this estimate of variance to decide when to request help. We detail these processes next.
### _Estimation for the variance of return_
The expected return from a given state is encapsulated by the state value functions, which are explicitly relied on by most RL algorithms. However, while most RL algorithms are not concerned with the variance of the return, we would like to compute and maintain similar estimates for it during training. By definition, the variance of the return is defined by:
\[\texttt{Var}(R)=\mathbb{E}[R^{2}]-\mathbb{E}[R]^{2}\]
The expected return of taking action \(a_{t}\) in state \(s_{t}\) is given by the Q-function under policy \(\pi\):
\[\mathbb{E}[R]=Q^{\pi}(s_{t},a_{t}) \tag{1}\]
Since the transition function is not available, the Q-function is usually approximated via the Bellman update:
\[Q^{\pi}(s_{t},a_{t})\leftarrow(1-\alpha)Q^{\pi}(s_{t},a_{t})+\alpha\Big(r(s_{t},a_{t})+\gamma\max_{a_{t+1}\in A}Q(s_{t+1},a_{t+1})\Big) \tag{2}\]
The second moment of return \(\mathbb{E}[R^{2}]\), which we call \(M^{\pi}(s_{t},a_{t})\), can be approximated using a sampling-based method by:
\[M^{\pi}(s_{t},a_{t})=\mathbb{E}[R^{2}]\] \[\quad=\mathbb{E}\Big{[}\big{(}r(s_{t},a_{t})+\gamma\sum_{k=t+1}^{N}r(s_{k},a_{k})\big{)}^{2}\Big{]}\] \[\quad=\mathbb{E}\Big{[}r^{2}(s_{t},a_{t})+2\gamma r(s_{t},a_{t})\sum_{k=t+1}^{N}r(s_{k},a_{k})+\gamma^{2}\big{(}\sum_{k=t+1}^{N}r(s_{k},a_{k})\big{)}^{2}\Big{]}\] \[\quad=r^{2}(s_{t},a_{t})+2\gamma r(s_{t},a_{t})\sum_{s_{t+1}\in S}p(s_{t+1}|s_{t},a_{t})Q^{\pi}(s_{t+1},a_{t+1})\] \[\quad\quad+\gamma^{2}\sum_{s_{t+1}\in S}p(s_{t+1}|s_{t},a_{t})M^{\pi}(s_{t+1},a_{t+1}) \tag{3}\]
Since the transition function is not available for a model-free agent, we can estimate \(M^{\pi}\) using a Bellman-like formula with sampled data:
\[M^{\pi}(s_{t},a_{t})\leftarrow(1-\alpha)M^{\pi}(s_{t},a_{t})+ \alpha M^{\prime}(s_{t},a_{t}) \tag{4}\]
where
\[M^{\prime}(s_{t},a_{t})=r^{2}(s_{t},a_{t})+2\gamma r(s_{t},a_{t})Q^{\pi}(s_{t+1},a_{t+1})+\gamma^{2}M(s_{t+1},a_{t+1}) \tag{5}\]
Here, \(a_{t+1}=\operatorname*{argmax}_{a_{t+1}\in A}Q(s_{t+1},a_{t+1})\), since we assume a greedy agent that takes the action with maximal expected return during deployment.
Finally, combining \(M\) and \(Q\), we can approximate the variance of returns in a state by:
\[\texttt{Var}(R)=M(s_{t},a_{t})-Q^{2}(s_{t},a_{t}) \tag{6}\]
Hence, the variance of the return given a state and action pair only depends on the \(Q\) function and the second-moment function \(M\), which in turn depends on \(Q\) and can be updated via a Bellman-like recursion.
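To make these updates concrete, the following minimal sketch (an illustration; only the update equations above are prescribed, the rest is our assumption) implements Eqs. (2) and (4)-(6) for the tabular case, where `Q` and `M` are arrays indexed by state and action:

```python
import numpy as np

def update_tabular(Q, M, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular update of Q (Eq. 2) and the second moment M (Eqs. 4-5)."""
    a_next = np.argmax(Q[s_next])  # greedy successor action
    # Eq. (2): standard Q-learning update
    q_target = r + gamma * Q[s_next, a_next]
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * q_target
    # Eq. (5): Bellman-like target for the second moment of the return
    m_target = r ** 2 + 2 * gamma * r * Q[s_next, a_next] + gamma ** 2 * M[s_next, a_next]
    # Eq. (4): same exponential-averaging scheme as for Q
    M[s, a] = (1 - alpha) * M[s, a] + alpha * m_target

def return_variance(Q, M, s, a):
    """Eq. (6): estimated variance of the return for a state-action pair."""
    return M[s, a] - Q[s, a] ** 2
```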
```
1:Initialize estimators \(Q_{\theta_{1}}\) for \(Q\) and \(Q_{\theta_{2}}\) for \(M\)
2:while not converged do
3: Collect a transition \((s_{t},a_{t},r_{t},s_{t+1})\) with the exploration policy derived from \(Q_{\theta_{1}}\)
4: Update \(Q_{\theta_{1}}\) via the Bellman update in Eq. (2)
5: Update \(Q_{\theta_{2}}\) via Eqs. (4)-(5)
6:endwhile
```
**Algorithm 1** HULA Training
### _HULA: Complete method_
We are now ready to integrate the variance estimation into a complete learning algorithm. We use algorithms from the Q-learning family, as these already build an explicit estimate of the \(Q\) function at train time, and also use a greedy policy w.r.t. \(Q\) at run-time.
During training, in addition to the function estimator for the \(Q\) function (referred to as \(Q_{\theta_{1}}\)), we build and update an estimator for \(M\) (referred to as \(Q_{\theta_{2}}\)). This estimator can be of the same type as used for \(Q\): for tabular Q-learning, we can use a table, while for Deep Q-Networks (DQN) [17] we use a deep neural network. At every iteration, we use Eqs. (4-5) to update the estimator for \(M\). In the case of tabular Q-learning, Eq. (4) can be used directly, while in the case of DQN it can be transformed into a loss function analogous to the one used for \(Q\). The complete procedure is shown in Alg. 1.
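A minimal sketch of what such a loss could look like in the DQN case (the network names and the use of a separate target network for \(M\) are assumptions, not details prescribed above):

```python
import torch
import torch.nn.functional as F

def m_loss(q_net, m_net, m_target_net, s, a, r, s_next, gamma=0.99):
    """Regression loss for the second-moment estimator Q_{theta_2}.

    The regression target follows Eq. (5); terminal-state handling is
    omitted for brevity.
    """
    with torch.no_grad():
        q_next = q_net(s_next)                       # (batch, |A|)
        a_next = q_next.argmax(dim=1, keepdim=True)  # greedy action a_{t+1}
        q_sa_next = q_next.gather(1, a_next).squeeze(1)
        m_sa_next = m_target_net(s_next).gather(1, a_next).squeeze(1)
        target = r ** 2 + 2 * gamma * r * q_sa_next + gamma ** 2 * m_sa_next
    m_sa = m_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(m_sa, target)
```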
Finally, during deployment, the agent can use the trained estimators for the \(Q\) and \(M\) functions to compute the variance of the return for the current state. If this variance exceeds a threshold, the agent will request assistance and execute the action provided by the expert. Otherwise, the agent will execute the action prescribed by its own policy (in this case, a greedy selection based on the trained \(Q\)-function).
```
1:while not done do
2: Compute variance \(v_{t}\) in current state using Eq. (6)
3:if\(v_{t}\geq\epsilon\)then
4: Execute action \(a_{exp}(s_{t})\) provided by expert
5:else
6: Execute action from own policy \(\pi(a_{t}|s_{t})\)
7:endif
8:endwhile
```
**Algorithm 2** HULA Deployment
We note a key feature of our method: _it does not require the presence of an expert during train time_. The only change at train time compared to traditional, expert-less Q-learning is the addition of the function estimator for \(M\), which will be used at run-time to help provide a variance estimator. In turn, the variance estimator is used to decide when to request assistance from the expert, which is only needed at deployment.
Fig. 2: **Fully-observable grid worlds. The trap world (A) requires the agent to navigate around traps. In the shortcut world (B), the agent can leverage the expert to use the shortcut alley to achieve higher returns.**
## IV Evaluation
### _Environments_
We evaluate HULA in a discrete navigation scenario, where the agent must reach a goal while avoiding obstacles and/or traps. Furthermore, we test our approach both on problems where the agent has exact knowledge of its current state (which is tractable via tabular Q-learning) and problems where the agent only has access to sensor data of limited range, creating ambiguity (which requires more powerful function estimators, and the use of DQN).
#### IV-A1 Fully-observable MDPs
These are discrete navigation environments where the agent is provided its exact location as observation. In addition to the goal cell, the environment also contains obstacles and traps: colliding with a trap terminates the episode, while colliding with an obstacle results in a failed action, and the agent stays in its original cell. The agent also receives a small penalty for each step it takes. Available actions for an agent are moving up, down, left, and right, and the observations are the coordinates \((x,y)\) of the agent's location. We use the maps shown in Fig. 2, and train and test individually on each map.
For this simple class of problems, an unsophisticated agent can easily learn to act optimally on its own. However, we introduce uncertainty in the form of a stochastic transition function: at every step, after selecting an action, the agent moves in the desired direction with probability \(\psi\), and moves in a random direction with probability \(1-\psi\) (in practice, we use \(\psi=0.45\)). We think of this setting as a "slippery world". Thus, even with an optimal policy, the robot may not reach its goal.
To help, we introduce an expert: when called, the expert provides the optimal action in the direction of the goal, and the action provided by the expert is not subject to transition function stochasticity. In this setting, the expert can thus be thought of as an "action corrector" with a better understanding of the environment dynamics, and whose actions always produce the expected results. Intuitively, we would expect an agent to call the expert in high-risk situations where it needs to avoid a wrong move, such as very close to traps. In this experiment, we test HULA on two maps: (1) the trap world, which requires the agent to navigate around traps with the expert's help; and (2) the shortcut world, where the agent can leverage the expert to take the shortcut for higher returns.
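A minimal sketch of this slippery transition and the expert override (the grid representation and function names are illustrative, not from the original implementation):

```python
import random

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(pos, action, psi=0.45, from_expert=False):
    """Slippery transition: the chosen move is kept with probability psi,
    otherwise a uniformly random direction is taken. Expert-provided
    actions bypass this stochasticity entirely."""
    if not from_expert and random.random() > psi:
        action = random.choice(list(MOVES))
    dx, dy = MOVES[action]
    return pos[0] + dx, pos[1] + dy  # obstacle/trap checks omitted
```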
#### IV-A2 Partially-observable MDPs
To test how our algorithm performs in more complex environments, we extend to the case of partially observable MDPs, where the agent does not have complete information about the system. Instead of observing state information as the \((x,y)\) position in the grid, the observation now only includes a finite patch of grid cells around the agent's current location (we use 5x5 grids in our implementation).
As shown in Figure 3, due to partial observability, it is not always possible for the agent to uniquely identify its own location in the map from observations. In such ambiguous regions, the same action taken based on identical observations can lead to utterly different results. However, other parts of the map are uniquely identifiable based on sensor observations, and thus an autonomous agent can always select the optimal action.
In this experiment, the expert observes the full state and always provides optimal actions to the agent. Intuitively, we would expect the agent to make use of the expert when traversing ambiguous areas of the map.
### _Evaluation Approach_
The main evaluation metric for our approach focuses on its ability to make effective use of the expert: we would like to see how performance (measured as average episodic return) changes as a function of the number of expert calls made during deployment. We can measure this performance by varying the value of the variance threshold \(\epsilon\) used in Alg. 2: for large values of \(\epsilon\), the agent will never make use of the expert; conversely, if \(\epsilon\) is very small, the agent will call the expert in every state. By sweeping the value of \(\epsilon\) between these extremes, we obtain more or less autonomous agents, and can plot performance as a function of the number of expert calls that result in each case.
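This sweep can be organized roughly as follows (a sketch under our assumptions about the evaluation interface; `run_episode` is a hypothetical helper returning the number of expert calls and the episodic return):

```python
def sweep_thresholds(agent, env, thresholds, episodes=100):
    """For each variance threshold, record (mean expert calls, mean return)."""
    curve = []
    for eps in thresholds:
        agent.variance_threshold = eps  # epsilon in Alg. 2
        calls, returns = [], []
        for _ in range(episodes):
            n_calls, ep_return = agent.run_episode(env)
            calls.append(n_calls)
            returns.append(ep_return)
        curve.append((sum(calls) / episodes, sum(returns) / episodes))
    return curve
```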
As a baseline, we use a standard RL approach that simply integrates the expert into the training procedure. We refer to this baseline as the _penalty-based agent_. Specifically, the
Fig. 4: **A learned variance map for the trap world environment.** (A) shows the return variance computed by Monte Carlo sampling under the same policy. (B) shows the variance estimated by our method (darker colors represent higher variances).
Fig. 3: **Partially-observable grid world.** In this environment, an agent may not localize itself by observing a surrounding 5x5 region. For example, the blue state and the red state result in identical observations.
penalty-based agent treats calling the expert as an extra action \(a_{call}\) that can be chosen alongside the other actions at both training and deployment time.
However, since the expert is optimal, the penalty-based agent will learn to always call for help. To learn a nontrivial policy that does not call the expert excessively, we add a penalty for calling the expert to the reward function at train time: \(r^{\prime}(s,a)=r(s,a)+c\), where \(c<0\) is a penalty assessed only when calling the expert. By training this method with varying values of \(c\), we again obtain agents that are more or less autonomous: the penalty-based agents trained with a high penalty \(c\) will call the expert less often, while those trained with a low penalty will call more often. Again, we sweep the value of \(c\) between these extremes, and plot performance as a function of the number of expert calls that result in each case. However, for a fair comparison against HULA, we do not assess the expert penalty at deployment time.
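In code, this baseline amounts to reward shaping applied only at train time (an illustrative sketch; the penalty value is hypothetical):

```python
def shaped_reward(r, action, expert_call_action, c=-0.5, training=True):
    """r'(s, a) = r(s, a) + c when the expert-call action is chosen at
    train time; at deployment the penalty is not assessed."""
    if training and action == expert_call_action:
        return r + c
    return r
```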
## V Results
### _Uncertainty Estimation_
The first question we would like to answer is whether HULA successfully captures the uncertainty of the agent and allocates the expert calls efficiently. We compare the variances learned by our method with the ground-truth variance of returns obtained by Monte Carlo sampling of the environment under the same policy, without the assistance of an expert.
As shown in Figure 4, in the fully-observable experiment, HULA successfully estimates a variance map that is similar to the ground truth. Specifically, it is uncertain about its outcome in states close to traps and confident when it is far from traps.
In the partially observable case, HULA identifies ambiguous states where the agent cannot localize itself in the map. In these states, the agent cannot act reliably, since the same action can lead to different results (e.g., hitting the boundary or reaching the goal). Our results align with the ground-truth variance map, which also has high uncertainty in a similar region.
To further investigate HULA's efficiency in expert allocation, we compare the states with the highest \(N\) variances with those in the ground truth variance map. These states are usually where the agent calls the expert, especially when the agent has a budget of \(N\) expert calls.
Our results show that HULA can estimate the uncertainty of an RL agent accurately and allocate its expert calls based on the return variance efficiently. As shown in Table I, in the trap world, \(80\%\) of the top-\(5\) and top-\(10\) variance states estimated by HULA are also among the top-\(5\) and top-\(10\) in the ground-truth variance, respectively. In the shortcut world, although our method only correctly estimates \(1\) of the \(5\) highest variance states, its accuracy rises to \(60\%\) in the top-\(10\) case. This indicates that it allocates expert calls efficiently if given more budget. In the partially-observable experiment, our agent also recovers most of the high-variance states in both the top-\(5\) and top-\(10\) evaluations.
### _Task Performance_
In all experiments, our results show that HULA agents achieve higher performance when requesting help from experts compared to agents that do not use an expert. In the trap world, as shown in Fig. 7, HULA achieves similar performance as the penalty-based agent while using the same number of expert calls. Interestingly, in the shortcut world, we find that the penalty-based method has a higher average return than HULA. This is because our method does not evaluate the effect of calling an expert during training. As shown in Fig. 8, the penalty-based agent learns to move towards states
| Experiment | Trap world | Shortcut world | PO |
| --- | --- | --- | --- |
| Top-5 | 0.8 | 0.2 | 0.8 |
| Top-10 | 0.8 | 0.6 | 0.7 |

TABLE I: **Quantitative results of expert call efficiency: accuracy of the top-\(N\) variance state estimation. PO denotes the partially-observable experiment.**
Fig. 5: Comparisons between a variance map learned by HULA and the ground truth for the shortcut world.
Fig. 6: Comparisons between a variance map learned by HULA and the ground truth for the partially-observable environment.
Fig. 7: **# of expert calls vs Average returns for the fully-observable experiments. Curves are generated with the rolling mean method (window size of 4) for visualization purposes.**
in which it can ask an expert for help, since it knows it will call the expert in a future state, whereas HULA navigates around the uncertain states to avoid them. However, in the highest-variance states (e.g., states between traps), both agents are able to call the expert and complete the task.
In this experiment, we find that HULA is robust to different expert call budgets compared to the penalty-based agent. For example, in the shortcut world, the penalty-based agent cannot learn a policy that uses 2 expert calls to complete the task even if we perform hyper-parameter sweeping with the expert penalty \(c\). This can be caused by the high expert penalty that modifies the original reward structure and hinders the learning of the task. In contrast, our method can flexibly incorporate different expert call budgets and maintain reasonable performance.
In the partially-observed grid world, the HULA agent outperforms the penalty-based agent when the allowed number of expert calls is limited to the range \((2,8]\). When both agents are given enough expert assistance, they achieve similar performance. This indicates that HULA efficiently utilizes the expert to localize itself and improve task performance.
An important feature of our method is that it does not introduce extra complexity in training an RL agent. Practically, we stop training when the \(Q\) function converges. Therefore, the training efficiency is similar to that of the underlying RL algorithm. In contrast, the penalty-based agent requires access to an expert during training time. Our experiments show that training each penalty-based agent for the fully-observable environment results in about \(70000\) expert calls. The requirement of an expert during train time limits its applicability to learning policies that interact with humans.
## VI Conclusion
We introduced HULA, a method to learn human-in-the-loop policies by estimating an RL agent's uncertainty. We proposed a return variance estimation method that captures an RL agent's uncertainty. Our experimental results demonstrate that HULA can capture an RL agent's uncertainty and use it to request assistance from experts to achieve high task performance. An important feature of our method is that it does not require the presence of an expert during training. We envision that our approach can be applied to more complex problems. For example, we plan to extend HULA to continuous RL algorithms (e.g., DDPG [18]) and learn HitL policies that solve continuous control problems. We also would like to explore the direction of learning uncertainty-aware agents in other domains, such as language-guided navigation.
|
2302.11145 | Para-Kähler and pseudo-Kähler structures on Lie-Yamaguti algebras | For a pre-Lie-Yamaguti algebra $A$, by using its sub-adjacent Lie-Yamaguti
algebra $A^c$, we are able to construct a semidirect product Lie-Yamaguti
algebra via a representation of $A^c$. The investigation of such semidirect
Lie-Yamaguti algebras leads us to the notions of para-K\"ahler structures and
pseudo-K\"ahler structures on Lie-Yamaguti algebras, and also gives the
definition of complex product structures on Lie-Yamaguti algebras which was
first introduced in [25]. Furthermore, a Levi-Civita product with respect to a
pseudo-Riemannian \Lie-Yamaguti algebra is introduced and we explore its
relation with pre-Lie-Yamaguti algebras. | Jia Zhao, Yuqin Feng, Yu Qiao | 2023-02-22T04:53:21Z | http://arxiv.org/abs/2302.11145v2 | # Para-Kahler and pseudo-Kahler structures on Lie-Yamaguti algebras
###### Abstract.
For a pre-Lie-Yamaguti algebra \(A\), by using its sub-adjacent Lie-Yamaguti algebra \(A^{c}\), we are able to construct a semidirect product Lie-Yamaguti algebra via a representation of \(A^{c}\). The investigation of such semidirect Lie-Yamaguti algebras leads us to the notions of para-Kahler structures and pseudo-Kahler structures on Lie-Yamaguti algebras, and also gives the definition of complex product structures on Lie-Yamaguti algebras which was first introduced in [25]. Furthermore, a Levi-Civita product with respect to a pseudo-Riemannian Lie-Yamaguti algebra is introduced and we explore its relation with pre-Lie-Yamaguti algebras.
_Mathematics Subject Classification_ (2020): 17A30, 17A60, 17B99 _keywords_: Lie-Yamaguti algebra, complex product structure, para-Kahler structure, pseudo-Kahler structure.
1: _School of Sciences, Nantong University, Nantong 226019, Jiangsu, China._
2: _School of Mathematics and Statistics, Shaanxi Normal University, Xi'an 710119, Shaanxi, China._
*: Corresponding author.
_Emails_: [email protected], [email protected], [email protected]
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Complex product structures on Lie-Yamaguti algebras
* 4 Para-Kahler structures on Lie-Yamaguti algebras
* 5 Pseudo-Kahler structures on Lie-Yamaguti algebras
## 1. Introduction
An almost product structure on a Lie algebra \(\mathfrak{g}\) is a linear map \(E:\mathfrak{g}\longrightarrow\mathfrak{g}\) such that \(E^{2}=\mathrm{Id}\). If in addition, \(E\) also satisfies the following integrability condition:
\[[Ex,Ey]=E\Big{(}[Ex,y]+[x,Ey]-E[x,y]\Big{)},\quad\forall x,y\in\mathfrak{g},\]
then \(E\) is a product structure on \(\mathfrak{g}\). Note that there exists a product structure on a Lie algebra \(\mathfrak{g}\) if and only if \(\mathfrak{g}\) can be decomposed as a direct sum of two subalgebras: \(\mathfrak{g}=\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}\). Parallel to product structures, an almost complex structure is a linear map \(J:\mathfrak{g}\longrightarrow\mathfrak{g}\) such that \(J^{2}=-\mathrm{Id}\). A complex structure on \(\mathfrak{g}\) is also an almost complex structure such that the integrability condition is satisfied. A complex product structure is a pair \((J,E)\), where \(J\) is a complex structure and \(E\) is a product structure such that a certain condition is satisfied. One can read [2, 3, 23] for more details about complex product structures on Lie or 3-Lie algebras. A symplectic structure on \(\mathfrak{g}\) is a nondegenerate 2-cocycle \(\omega\in\wedge^{2}\mathfrak{g}^{*}\)[9]. A para-Kahler structure is a pair \((\omega,E)\), where \(\omega\) is a symplectic structure and \(E\) is a paracomplex structure (a product structure such that the decomposed two subalgebras have the same dimension) such that a certain condition is satisfied.
A _pseudo-Kahler structure_ is a pair \((\omega,J)\), where \(\omega\) is a symplectic structure and \(J\) is a complex structure such that a certain condition is satisfied. See [1, 10, 23] for more details about pseudo-Kahler structures on Lie algebras or on 3-Lie algebras. Moreover, pseudo-metric Riemannian Lie algebras were investigated in [17]. These structures have many applications in mathematics, geometry, and mathematical physics. It is well known that complex product structures, para-Kahler structures, and pseudo-Kahler structures are closely related with pre-Lie algebras, which are the underlying algebra structures of relative Rota-Baxter operators (also called \(\mathcal{O}\)-operators or Kupershmidt operators). Bai and his collaborators explored several properties of pre-Lie algebras (also called left-symmetric algebras) and studied their relation with the classical Yang-Baxter equation. See [4, 5, 6, 7] for more details about pre-Lie algebras and 3-pre-Lie algebras and see [18, 21] for relative Rota-Baxter operators and Rota-Baxter algebras.
A Lie-Yamaguti algebra is a generalization of Lie algebras and Lie triple systems, and dates back to Nomizu's work on invariant affine connections on homogeneous spaces in the 1950's ([22]) and Yamaguti's work on general Lie triple systems and Lie triple algebras ([26]). Its representation and cohomology theory were constructed in [27, 28] during the 1950's to 1960's. Later, in the 21st century, Kinyon and Weinstein named this object a Lie-Yamaguti algebra in their study of Courant algebroids in [20]. Lie-Yamaguti algebras have attracted much attention in recent years. For instance, Benito, Draper, and Elduque investigated Lie-Yamaguti algebras related to simple Lie algebras of type \(G_{2}\)[13]. Afterwards, Benito, Elduque, and Martín-Herce explored irreducible Lie-Yamaguti algebras in [14, 15]. Furthermore, Benito, Bremner, and Madariaga examined orthogonal Lie-Yamaguti algebras in [12]. Recently, we studied cohomology and deformations of relative Rota-Baxter operators on Lie-Yamaguti algebras [31], relative Rota-Baxter-Nijenhuis structures on a Lie-Yamaguti algebra with a representation [32], and bialgebra theory of Lie-Yamaguti algebras [33].
Moreover, Sheng and the first author explored product structures and complex structures on Lie-Yamaguti algebras in [25] and deeply examined relative Rota-Baxter operators, symplectic structures, and pre-Lie-Yamaguti algebras in [24]. This motivates us to consider compatibility conditions between a product structure and a symplectic structure, and between a complex structure and a symplectic structure, which in turn lead us to introduce the notions of para-Kahler structure and pseudo-Kahler structure on a Lie-Yamaguti algebra. Thus this paper can be regarded as a sequel to [24] and [25].
Parallel to the context of Lie algebras, the notion of a para-Kahler structure on a Lie-Yamaguti algebra is obtained via a paracomplex structure and a symplectic structure. An equivalent description of a para-Kahler structure is given by the decomposition of the original Lie-Yamaguti algebra. With respect to a para-Kahler structure, there exists a pseudo-Riemannian structure. We define the Levi-Civita product with respect to a pseudo-Riemannian Lie-Yamaguti algebra, and give its precise formula. Finally, we add a compatibility condition between a complex structure and a symplectic structure to introduce the notion of pseudo-Kahler structures on Lie-Yamaguti algebras. The relation between a para-Kahler structure and a pseudo-Kahler structure is investigated. Furthermore, for a pre-Lie-Yamaguti algebra \(A\), by using its sub-adjacent Lie-Yamaguti algebra \(A^{c}\), we obtain the semidirect product Lie-Yamaguti algebra \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\) via the dual representation \((A^{*};L^{*},-\mathcal{R}^{*}\tau)\) of \(A^{c}\).
Following [25], we construct a perfect complex product structure on the semidirect product Lie-Yamaguti algebras \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\) and \(A^{c}\ltimes_{L,\mathcal{R}}A\) respectively, and further build a para-Kahler structure and a pseudo-Kahler structure on \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\).
This paper is structured as follows. In Section 2, we recall some basic definitions. In Section 3, we construct a complex product structure on a larger Lie-Yamaguti algebra from a pre-Lie-Yamaguti algebra. In Section 4, we introduce the notion of para-Kahler structures and give equivalent descriptions of them. Furthermore, the notion of Levi-Civita products with respect to a pseudo-Riemannian Lie-Yamaguti algebra is introduced and we show that the Levi-Civita product coincides with the pre-Lie-Yamaguti algebra structure. In Section 5, we introduce the notion of pseudo-Kahler structures and study their relation with para-Kahler structures.
In this paper, all the vector spaces are over \(\mathbb{K}\), a field of characteristic \(0\).
**Acknowledgements:** We would like to thank Professor Yunhe Sheng of Jilin University for useful discussions. Qiao was partially supported by NSFC grant 11971282.
## 2. Preliminaries
In this section, we first recall some basic notions such as Lie-Yamaguti algebras and their representations, which are the main objects throughout this paper.
**Definition 2.1**.: [26] A **Lie-Yamaguti algebra** is a vector space \(\mathfrak{g}\), together with a bilinear bracket \([\cdot,\cdot]:\wedge^{2}\mathfrak{g}\to\mathfrak{g}\) and a trilinear bracket \([\![\cdot,\cdot,\cdot]\!]:\wedge^{2}\mathfrak{g}\otimes\mathfrak{g}\to\mathfrak{g}\) such that the following equations are satisfied for all \(x,y,z,w,t\in\mathfrak{g}\),
\[[[x,y],z]+[[y,z],x]+[[z,x],y]+[\![x,y,z]\!]+[\![y,z,x]\!]+[\![z,x,y]\!]=0, \tag{1}\] \[[\![[x,y],z,w]\!]+[\![[y,z],x,w]\!]+[\![[z,x],y,w]\!]=0, \tag{2}\] \[[\![x,y,[z,w]]\!]=[[\![x,y,z]\!],w]+[z,[\![x,y,w]\!]], \tag{3}\] \[[\![x,y,[\![z,w,t]\!]]\!]=[\![[\![x,y,z]\!],w,t]\!]+[\![z,[\![x,y,w]\!],t]\!]+[\![z,w,[\![x,y,t]\!]]\!]. \tag{4}\]
**Remark 2.2**.: If the binary bracket \([\cdot,\cdot]=0\), then a Lie-Yamaguti algebra reduces to a Lie triple system; If the ternary bracket \([\![\cdot,\cdot,\cdot]\!]=0\), then a Lie-Yamaguti algebra reduces to a Lie algebra.
**Example 2.3**.: _Let \((\mathfrak{g},[\cdot,\cdot]\!)\) be a Lie algebra. We define a trilinear bracket \([\![\cdot,\cdot,\cdot]\!]:\wedge^{2}\mathfrak{g}\otimes\mathfrak{g}\to\mathfrak{g}\) by_
\[[\![x,y,z]\!]:=[[x,y],z],\quad\forall x,y,z\in\mathfrak{g}.\]
_Then \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) becomes a Lie-Yamaguti algebra naturally._
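For instance, one can check axiom (1) directly (a verification we include; axioms (2)-(4) follow similarly from the Jacobi identity):

```latex
% Checking axiom (1) for Example 2.3 (our computation): with
% [[x,y,z]] := [[x,y],z], the ternary cyclic sum duplicates the binary one, so
\begin{aligned}
&[[x,y],z]+[[y,z],x]+[[z,x],y]+[\![x,y,z]\!]+[\![y,z,x]\!]+[\![z,x,y]\!]\\
&\qquad=2\big([[x,y],z]+[[y,z],x]+[[z,x],y]\big)=0
\end{aligned}
% by the Jacobi identity.
```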
More examples can be found in [13].
**Definition 2.4**.: [27] Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra and \(V\) a vector space. A **representation** of \(\mathfrak{g}\) on \(V\) consists of a linear map \(\rho:\mathfrak{g}\to\mathfrak{gl}(V)\) and a bilinear map \(\mu:\otimes^{2}\mathfrak{g}\to\mathfrak{gl}(V)\) such that for all \(x,y,z,w\in\mathfrak{g}\)
\[\mu([x,y],z)-\mu(x,z)\rho(y)+\mu(y,z)\rho(x)=0,\] \[\mu(x,[y,z])-\rho(y)\mu(x,z)+\rho(z)\mu(x,y)=0,\] \[\rho([\![x,y,z]\!])=[D_{\rho,\mu}(x,y),\rho(z)],\] \[\mu(z,w)\mu(x,y)-\mu(y,w)\mu(x,z)-\mu(x,[\![y,z,w]\!])+D_{\rho,\mu} (y,z)\mu(x,w)=0,\] \[\mu([\![x,y,z]\!],w)+\mu(z,[\![x,y,w]\!])=[D_{\rho,\mu}(x,y),\mu(z,w)],\]
where \(D_{\rho,\mu}:\otimes^{2}\mathfrak{g}\to\mathfrak{gl}(V)\) is defined to be
\[D_{\rho,\mu}(x,y):=\mu(y,x)-\mu(x,y)+[\rho(x),\rho(y)]-\rho([x,y]),\quad\forall x,y\in\mathfrak{g}. \tag{5}\]
It is obvious that \(D_{\rho,\mu}\) is skew-symmetric. We denote a representation of \(\mathfrak{g}\) on \(V\) by \((V;\rho,\mu)\).
**Remark 2.5**.: If a Lie-Yamaguti algebra reduces to a Lie triple system (then \([\cdot,\cdot]=0\)), we get the notion of representation of the Lie triple system \((\mathfrak{g},[\![\cdot,\cdot,\cdot]\!])\) on \(V\) (then \(\rho=0\)).
**Proposition 2.6**.: _Let \((V;\rho,\mu)\) be a representation of a Lie-Yamaguti algebra \((\mathfrak{g},[\![\cdot,\cdot]\!],[\![\cdot,\cdot,\cdot,\cdot]\!])\), then the following equalities hold for all \(x,y,z,w\in\mathfrak{g}\)_
\[D_{\rho,\mu}([x,y],z)+D_{\rho,\mu}([y,z],x)+D_{\rho,\mu}([z,x],y)=0,\] \[D_{\rho,\mu}([\![x,y,z]\!],w)+D_{\rho,\mu}(z,[\![x,y,w]\!])=[D_{\rho,\mu}(x,y),D_{\rho,\mu}(z,w)],\] \[\mu([\![x,y,z]\!],w)=\mu(x,w)\mu(z,y)-\mu(y,w)\mu(z,x)-\mu(z,w)D_{\rho,\mu}(x,y).\]
**Example 2.7**.: _Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra. We define \(\mathrm{ad}:\mathfrak{g}\to\mathfrak{gl}(\mathfrak{g})\) and \(\mathfrak{R}:\otimes^{2}\mathfrak{g}\to\mathfrak{gl}(\mathfrak{g})\) by \(x\mapsto\mathrm{ad}_{x}\) and \((x,y)\mapsto\mathfrak{R}_{x,y}\) respectively, where \(\mathrm{ad}_{x}z=[x,z]\) and \(\mathfrak{R}_{x,y}z=[\![z,x,y]\!]\) for all \(z\in\mathfrak{g}\). Then \((\mathrm{ad},\mathfrak{R})\) forms a representation of \(\mathfrak{g}\) on itself, called the_ **adjoint representation**_. In this case, by (5), \(\mathfrak{L}\triangleq D_{\mathrm{ad},\mathfrak{R}}\) is given by_
\[\mathfrak{L}_{x,y}=\mathfrak{R}_{y,x}-\mathfrak{R}_{x,y}-\mathrm{ad}_{[x,y]} +[\mathrm{ad}_{x},\mathrm{ad}_{y}].\]
_Moreover, by (1) and (5), we have_
\[\mathfrak{L}_{x,y}z=[\![x,y,z]\!]\,,\quad\forall z\in\mathfrak{g}. \tag{6}\]
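Equation (6) can be verified directly from (1) and (5) (a short computation we include for completeness):

```latex
% Expanding L_{x,y} = R_{y,x} - R_{x,y} - ad_{[x,y]} + [ad_x, ad_y] on z:
\begin{aligned}
\mathfrak{L}_{x,y}z
 &=[\![z,y,x]\!]-[\![z,x,y]\!]-[[x,y],z]+[x,[y,z]]-[y,[x,z]]\\
 % the binary terms equal -([[x,y],z]+[[y,z],x]+[[z,x],y]),
 % which by axiom (1) equals [[x,y,z]]+[[y,z,x]]+[[z,x,y]]:
 &=[\![z,y,x]\!]-[\![z,x,y]\!]+[\![x,y,z]\!]+[\![y,z,x]\!]+[\![z,x,y]\!]\\
 % skew-symmetry in the first two arguments cancels all but one term:
 &=[\![x,y,z]\!].
\end{aligned}
```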
Representations of a Lie-Yamaguti algebra can be characterized by the semidirect product Lie-Yamaguti algebras.
**Proposition 2.8**.: _[_29, 30_]_ _Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra and \(V\) a vector space. If \(\rho:\mathfrak{g}\to\mathfrak{gl}(V)\) and \(\mu:\otimes^{2}\mathfrak{g}\to\mathfrak{gl}(V)\) are a linear and a bilinear map respectively, then \((V;\rho,\mu)\) is a representation of \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) if and only if \(\mathfrak{g}\oplus V\) carries a Lie-Yamaguti algebra structure given by, for all \(x,y,z\in\mathfrak{g},u,v,w\in V\),_
\[[x+u,y+v]_{\rho,\mu} = [x,y]+\rho(x)v-\rho(y)u,\] \[[\![x+u,y+v,z+w]\!]_{\rho,\mu} = [\![x,y,z]\!]+D_{\rho,\mu}(x,y)w+\mu(y,z)u-\mu(x,z)v,\]
_where \(D_{\rho,\mu}\) is given by (5). The Lie-Yamaguti algebra \((\mathfrak{g}\oplus V,[\![\cdot,\cdot]\!]_{\rho,\mu},[\![\![\cdot,\cdot,\cdot ]\!]_{\rho,\mu})\) is called the_ **semidirect product Lie-Yamaguti algebra**_, denoted by \(\mathfrak{g}\ltimes_{\rho,\mu}V\)._
Let \((V;\rho,\mu)\) be a representation of a Lie-Yamaguti algebra \((\mathfrak{g},[\![\cdot,\cdot]\!],[\![\![\cdot,\cdot,\cdot]\!])\). We define linear maps \(\rho^{*}:\mathfrak{g}\to\mathfrak{gl}(V^{*})\) and \(\mu^{*}:\otimes^{2}\mathfrak{g}\to\mathfrak{gl}(V^{*})\) to be
\[\langle\rho^{*}(x)\alpha,v\rangle = -\langle\alpha,\rho(x)v\rangle,\] \[\langle\mu^{*}(x,y)\alpha,v\rangle = -\langle\alpha,\mu(x,y)v\rangle,\]
for all \(x,y\in\mathfrak{g},\ \alpha\in V^{*},v\in V\).
Let \(V\) be a vector space. Define the switching operator \(\tau:\otimes^{2}V\to\otimes^{2}V\) by
\[\tau(x\otimes y)=y\otimes x,\quad\forall x\otimes y\in\otimes^{2}V.\]
In [24], we have constructed the dual representation of a Lie-Yamaguti algebra.
**Proposition 2.9**.: _Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra with a representation \((V;\rho,\mu)\). Then_
\[(V^{*};\rho^{*},-\mu^{*}\tau)\]
_is a representation of Lie-Yamaguti algebra \((\mathfrak{g},[\![\cdot,\cdot]\!],[\![\![\cdot,\cdot,\cdot]\!])\) on \(V^{*}\), which is called the_ **dual representation** _of \(\mathfrak{g}\). Here \(D^{*}_{\rho,\mu}=D_{\rho^{*},-\mu^{*}\tau}\)._
**Example 2.10**.: _Let \((\mathfrak{g};\mathrm{ad},\mathfrak{R})\) be the adjoint representation of a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\), where \(\mathrm{ad}\) and \(\mathfrak{R}\) are given in Example 2.7. Then \((\mathfrak{g}^{*};\mathrm{ad}^{*},-\mathfrak{R}^{*}\tau)\) is the dual representation of the adjoint representation, called the_ **coadjoint representation**_. Note that \(\mathfrak{L}^{*}\) is the dual of \(\mathfrak{L}\)._
Next, let us recall the notion of pre-Lie-Yamaguti algebras introduced in [24].
**Definition 2.11**.: ([24]) A **pre-Lie-Yamaguti algebra** is a vector space \(A\) with a bilinear operation \(*:\otimes^{2}\!A\to A\) and a trilinear operation \(\{\cdot,\cdot,\cdot\}:\otimes^{3}\!A\to A\) such that for all \(x,y,z,w,t\in A\)
\[\{z,[x,y]_{C},w\}-\{y*z,x,w\}+\{x*z,y,w\}=0,\] \[\{x,y,[z,w]_{C}\}=z*\{x,y,w\}-w*\{x,y,z\},\] \[\{\{x,y,z\},w,t\}-\{\{x,y,w\},z,t\}-\{x,y,\{z,w,t\}_{D}\}-\{x,y, \{z,w,t\}\}\] \[+\{x,y,\{w,z,t\}\}+\{z,w,\{x,y,t\}\}_{D}=0,\] \[\{z,\{x,y,w\}_{D},t\}+\{z,\{x,y,w\},t\}-\{z,\{y,x,w\},t\}+\{z,w, \{x,y,t\}_{D}\}\] \[+\{z,w,\{x,y,t\}\}-\{z,w,\{y,x,t\}\}=\{x,y,\{z,w,t\}\}_{D}-\{\{x,y,z \}_{D},w,t\},\] \[\{x,y,z\}_{D}*w+\{x,y,z\}*w-\{y,x,z\}*w=\{x,y,z*w\}_{D}-z*\{x,y,w \}_{D},\]
where the commutator \([\cdot,\cdot]_{C}:\wedge^{2}A\to A\) and \(\{\cdot,\cdot,\cdot\}_{D}:\otimes^{3}A\to A\) are defined for all \(x,y,z\in A\) by
\[[x,y]_{C}:=x*y-y*x,\quad\forall x,y\in A, \tag{7}\]
and
\[\{x,y,z\}_{D}:=\{z,y,x\}-\{z,x,y\}+(y,x,z)-(x,y,z), \tag{8}\]
respectively. Here \((\cdot,\cdot,\cdot)\) denotes the associator: \((x,y,z):=(x*y)*z-x*(y*z)\). It is obvious that \(\{\cdot,\cdot,\cdot\}_{D}\) is skew-symmetric with respect to the first two variables. We denote a pre-Lie-Yamaguti algebra by \((A,*,\{\cdot,\cdot,\cdot\})\).
Let \((A,*,\{\cdot,\cdot,\cdot\})\) be a pre-Lie-Yamaguti algebra. Define
* a ternary operation \([\![\cdot,\cdot,\cdot]\!]_{C}\) to be (9) \[[\![x,y,z]\!]_{C} = \{x,y,z\}_{D}+\{x,y,z\}-\{y,x,z\},\quad\forall x,y,z\in A,\] where \(\{\cdot,\cdot,\cdot\}_{D}\) is given by (8).
* linear maps \[L:A\to\mathfrak{gl}(A),\quad\mathcal{R}:\otimes^{2}A\to\mathfrak{gl}(A)\] to be \[x\mapsto L_{x},\quad(x,y)\mapsto\mathcal{R}(x,y)\] respectively, where \(L_{x}z=x*z\) and \(\mathcal{R}(x,y)z=\{z,x,y\}\) for all \(z\in A\).
**Proposition 2.12**.: ([24]) _With the above notations, then we have_
* _the operation_ \(([\cdot,\cdot]_{C},[\![\cdot,\cdot,\cdot]\!]_{C})\) _defines a Lie-Yamaguti algebra structure on_ \(A\)_, where_ \([\cdot,\cdot]_{C}\) _and_ \([\![\cdot,\cdot,\cdot]\!]_{C}\) _are given by Eqs. (_7_) and (_9_) respectively. This Lie-Yamaguti algebra_ \((A,[\cdot,\cdot]_{C},\)__\([\![\cdot,\cdot,\cdot]\!]_{C})\) _is called the_ **sub-adjacent Lie-Yamaguti algebra** _and is denoted by_ \(A^{c}\)_;_
* _the triple_ \((A;L,\mathcal{R})\) _is a representation of the sub-adjacent Lie-Yamaguti algebra_ \(A^{c}\) _on_ \(A\)_. Furthermore, the identity map_ \(\mathrm{Id}:A\longrightarrow A\) _is a relative Rota-Baxter operator on_ \(A^{c}\) _with respect to the representation_ \((A;L,\mathcal{R})\)_, where_ \[\mathcal{L}:=D_{L,\mathcal{R}}:\wedge^{2}A\longrightarrow\mathfrak{gl}(A),\quad(x,y)\mapsto\mathcal{L}(x,y)\] _is given by_ \[\mathcal{L}(x,y)z=\{x,y,z\}_{D},\quad\forall z\in A.\]
A **quadratic Lie-Yamaguti algebra**[19] is a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) equipped with a nondegenerate symmetric bilinear form \(\mathcal{B}\in\otimes^{2}\mathfrak{g}^{*}\) satisfying the following invariant conditions for all \(x,y,z,w\in\mathfrak{g}\)
\[\mathcal{B}([x,y],z) = -\mathcal{B}(y,[x,z]),\] \[\mathcal{B}([\![x,y,z]\!]\!]\,,w) = \mathcal{B}(x,[\![w,z,y]\!]).\]
Then \(\mathcal{B}\) induces an isomorphism
\[\mathcal{B}^{\sharp}:\mathfrak{g}\to\mathfrak{g}^{*}\]
defined by
\[\langle\mathcal{B}^{\sharp}(x),y\rangle=\mathcal{B}(x,y),\quad\forall x,y\in \mathfrak{g}. \tag{10}\]
**Remark 2.13**.: Note that
\[\mathcal{B}(x,[\![w,z,y]\!])=-\mathcal{B}(x,[\![z,w,y]\!])=-\mathcal{B}([\![ z,w,y]\!]\!]\,,x)=-\mathcal{B}(z,[\![x,y,w]\!]).\]
Thus we obtain
\[\mathcal{B}([\![x,y,z]\!]\,,w)=-\mathcal{B}(z,[\![x,y,w]\!]). \tag{11}\]
**Proposition 2.14**.: ([24]) _With the above notations, \((\mathrm{Id},\mathcal{B}^{\sharp})\) forms an isomorphism from the adjoint representation \((\mathfrak{g};\mathrm{ad},\mathfrak{R})\) to the coadjoint representation \((\mathfrak{g}^{*};\mathrm{ad}^{*},-\mathfrak{R}^{*}\tau)\)._
**Remark 2.15**.: Similarly, by (6), (10) and (11), we have
\[\langle\mathcal{B}^{\sharp}(\mathfrak{L}_{x,y}z),w\rangle=\mathcal{B}([\![x,y,z]\!]\,,w)=-\mathcal{B}(z,[\![x,y,w]\!])=-\langle\mathcal{B}^{\sharp}(z),[\![x,y,w]\!]\rangle=\langle(\mathfrak{L}_{x,y})^{*}\mathcal{B}^{\sharp}(z),w\rangle.\]
Thus we obtain that
\[\mathcal{B}^{\sharp}(\mathfrak{L}_{x,y}z)=(\mathfrak{L}_{x,y})^{*}\mathcal{B} ^{\sharp}(z).\]
In the sequel, we recall the notions of symplectic structures, product structures, and complex structures on Lie-Yamaguti algebras.
**Definition 2.16**.: ([24]) Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra. A **symplectic structure** on \(\mathfrak{g}\) is a nondegenerate, skew-symmetric bilinear form \(\omega\in\wedge^{2}\mathfrak{g}^{*}\) such that for all \(x,y,z,w\in\mathfrak{g}\),
\[\omega(x,[y,z])+\omega(y,[z,x])+\omega(z,[x,y])=0,\] \[\omega(z,[\![x,y,w]\!])-\omega(x,[\![w,z,y]\!])+\omega(y,[\![w,z, x]\!])-\omega(w,[\![x,y,z]\!])=0.\]
A Lie-Yamaguti algebra \(\mathfrak{g}\) with a symplectic structure \(\omega\) is called a **symplectic Lie-Yamaguti algebra**, denoted by \(\big{(}(\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!]),\omega\big{)}\) or \((\mathfrak{g},\omega)\) for short.
**Definition 2.17**.: ([25]) Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra. An **almost product structure** on \(\mathfrak{g}\) is a linear map \(E:\mathfrak{g}\longrightarrow\mathfrak{g}\) satisfying \(E^{2}=\mathrm{Id}\) (\(E\neq\pm\mathrm{Id}\)). An almost product structure is called a **product structure** if the following integrability conditions are satisfied:
\[[Ex,Ey] = E[Ex,y]+E[x,Ey]-[x,y],\] \[[\![Ex,Ey,Ez]\!] = E[\![Ex,Ey,z]\!]+E[\![x,Ey,Ez]\!]+E[\![Ex,y,Ez]\!]\] \[-[\![Ex,y,z]\!]-[\![x,Ey,z]\!]-[\![x,y,Ez]\!]+E[\![x,y,z]\!]\,,\quad\forall x,y,z\in\mathfrak{g}.\]
A product structure gives rise to a decomposition of a Lie-Yamaguti algebra.
**Proposition 2.18**.: ([25]) _Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra. Then there is a product structure on \(\mathfrak{g}\) if and only if \(\mathfrak{g}\) admits a decomposition:_
\[\mathfrak{g}=\mathfrak{g}_{+}\oplus\mathfrak{g}_{-},\]
_where \(\mathfrak{g}_{+}\) and \(\mathfrak{g}_{-}\) are subalgebras of \(\mathfrak{g}\)._
**Definition 2.19**.: If the two subalgebras \(\mathfrak{g}_{+}\) and \(\mathfrak{g}_{-}\) in such a decomposition have the same dimension, we call the product structure \(E\) a **paracomplex structure**. If, moreover, the paracomplex structure \(E\) is perfect 1, then \(E\) is called a **perfect paracomplex structure**.
Footnote 1: For the notion of perfect product structures, one can see Definition 4.12 in [25]
We are able to construct a perfect paracomplex structure on a semidirect product Lie-Yamaguti algebra from a pre-Lie-Yamaguti algebra.
**Proposition 2.20**.: _Let \((A,*,\{\cdot,\cdot,\cdot\})\) be a pre-Lie-Yamaguti algebra. Then on the semidirect product Lie-Yamaguti algebra \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\), there is a perfect paracomplex structure \(E:A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\to A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\) given by_
\[E(x+\alpha)=x-\alpha,\quad\forall x\in A^{c},\alpha\in A^{*}. \tag{12}\]
Proof.: It is obvious that \(E^{2}=\mathrm{Id}\). Moreover, we have that \((A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*})_{+}=A\) and \((A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*})_{-}=A^{*}\) and that they are two subalgebras of the semidirect product Lie-Yamaguti algebra \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\). Thus \(E\) is a product structure on \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\). Since \(A\) and \(A^{*}\) have the same dimension, \(E\) is a paracomplex structure on \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\). It is not hard to deduce that \(E\) is perfect.
**Definition 2.21**.: ([25]) Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a real Lie-Yamaguti algebra. A linear map \(J:\mathfrak{g}\longrightarrow\mathfrak{g}\) is called **an almost complex structure** if \(J^{2}=-\mathrm{Id}\). An almost complex structure is called a **complex structure** if the following integrability conditions hold:
\[[Jx,Jy] = J[Jx,y]+J[x,Jy]+[x,y], \tag{13}\] \[J\,[\![x,y,z]\!] = -\,[\![Jx,Jy,Jz]\!]+[\![Jx,y,z]\!]+[\![x,Jy,z]\!]+[\![x,y,Jz]\!]\] \[+J\,[\![Jx,Jy,z]\!]+J\,[\![Jx,y,Jz]\!]+J\,[\![x,Jy,Jz]\!]\,,\quad \forall x,y,z\in\mathfrak{g}. \tag{14}\]
**Proposition 2.22**.: ([25]) _Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a real Lie-Yamaguti algebra. Then there is a complex structure on \(\mathfrak{g}\) if and only if there is a decomposition of \(\mathfrak{g}_{\mathbb{C}}\):_
\[\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}_{i}\oplus\mathfrak{g}_{-i},\]
_where \(\mathfrak{g}_{i}=\{x-iJx:x\in\mathfrak{g}\}\) and \(\mathfrak{g}_{-i}=\{x+iJx:x\in\mathfrak{g}\}\) are subalgebras of \(\mathfrak{g}_{\mathbb{C}}\). Observe that \(\mathfrak{g}_{-i}=\sigma(\mathfrak{g}_{i})\), where \(\mathfrak{g}_{\mathbb{C}}\) means the complexification of \(\mathfrak{g}\), i.e.,_
\[\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}\otimes_{\mathbb{R}}\mathbb{C}\cong\{x +iy:x,y\in\mathfrak{g}\},\]
_and \(\sigma\) means the conjugation map in \(\mathfrak{g}_{\mathbb{C}}\), i.e.,_
\[\sigma(x+iy)=x-iy,\quad\forall x,y\in\mathfrak{g}.\]
## 3. Complex product structures on Lie-Yamaguti algebras
The notion of complex product structures of Lie-Yamaguti algebras was introduced in [25], where we gave an equivalent description of it (see Proposition 3.4). In this section, we construct a perfect complex product structure on a larger Lie-Yamaguti algebra from a pre-Lie-Yamaguti algebra.
**Definition 3.1**.:
1. Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a real Lie-Yamaguti algebra. A **complex product structure** on \(\mathfrak{g}\) is a pair \((J,E)\) consisting of a product structure \(E\) and a complex structure \(J\) such that \[E\circ J=-J\circ E.\]
2. If \(E\) is perfect, we call \((J,E)\) a **perfect complex product structure**.
**Proposition 3.2**.: _[_25_]_ _Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a complex Lie-Yamaguti algebra. Then \(E\) is a product structure on \(\mathfrak{g}\) if and only if \(J=-iE\) is a complex structure on \(\mathfrak{g}\)._
The following corollary is direct.
**Corollary 3.3**.: _Let \(J\) be a complex structure on a real Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\). Then \(-iJ_{\mathbb{C}}\) is a paracomplex structure on the complex Lie-Yamaguti algebra \((\mathfrak{g}_{\mathbb{C}},[\cdot,\cdot]_{\mathbb{C}},[\![\cdot,\cdot,\cdot]\!]_{\mathbb{C}})\), where \(J_{\mathbb{C}}:\mathfrak{g}_{\mathbb{C}}\to\mathfrak{g}_{\mathbb{C}}\) is given by_
\[J_{\mathbb{C}}(x+iy)\triangleq Jx+iJy,\quad\forall x,y\in\mathfrak{g}.\]
**Proposition 3.4**.: _[_25_]_ _Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a real Lie-Yamaguti algebra. Then \(\mathfrak{g}\) admits a complex product structure \((J,E)\) if and only if \(\mathfrak{g}\) has a complex structure \(J\) and can be decomposed as \(\mathfrak{g}=\mathfrak{g}_{+}\oplus\mathfrak{g}_{-}\) such that_
\[\mathfrak{g}_{-}=J\mathfrak{g}_{+},\]
_where \(\mathfrak{g}_{\pm}\) are eigenspaces corresponding to the eigenvalues \(\pm 1\) of \(E\)._
Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra and \(E\) a paracomplex structure on \(\mathfrak{g}\). We define an endomorphism \(J\) of \(\mathfrak{g}\) via an isomorphism \(\phi:\mathfrak{g}_{+}\to\mathfrak{g}_{-}\) by
\[J(x+\alpha)=-\phi^{-1}(\alpha)+\phi(x),\quad\forall x\in\mathfrak{g}_{+}, \alpha\in\mathfrak{g}_{-}. \tag{15}\]
It is not hard to deduce that \(J\) is an almost complex structure on \(\mathfrak{g}\) and \(E\circ J=-J\circ E\). Moreover, we have the following proposition.
**Proposition 3.5**.: _Let \(E\) be a perfect paracomplex structure on a real Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\). Then there is a perfect complex product structure on \(\mathfrak{g}\) if and only if there exists a linear isomorphism \(\phi:\mathfrak{g}_{+}\to\mathfrak{g}_{-}\) satisfying the following equations_
\[[\phi(x),\phi(y)] = -\phi^{-1}[\phi(x),y]-\phi^{-1}[x,\phi(y)]+[x,y], \tag{16}\] \[\phi\,[\![x,y,z]\!] = -[\![\phi(x),\phi(y),\phi(z)]\!]+[\![\phi(x),y,z]\!]+[\![x,\phi(y),z]\!]+[\![x,y,\phi(z)]\!]\] \[+\phi\,[\![\phi(x),\phi(y),z]\!]+\phi\,[\![\phi(x),y,\phi(z)]\!]+\phi\,[\![x,\phi(y),\phi(z)]\!]. \tag{17}\]
Proof.: Let \((J,E)\) be a perfect complex product structure on \(\mathfrak{g}\). Define a linear isomorphism \(\phi:\mathfrak{g}_{+}\longrightarrow\mathfrak{g}_{-}\) by \(\phi\triangleq J_{|\mathfrak{g}_{+}}:\mathfrak{g}_{+}\longrightarrow\mathfrak{g}_ {-}\). Then by Definition 2.21 and definition of perfect product structures in [25], we deduce that Eqs. (16) and (17) hold.
Conversely, define a linear map \(J:\mathfrak{g}\longrightarrow\mathfrak{g}\) as in (15). Then it is obvious that \(J\) is an almost complex structure on \(\mathfrak{g}\) and that \(E\circ J=-J\circ E\). For all \(\alpha,\beta,\gamma\in\mathfrak{g}_{-}\), there exist \(x,y,z\in\mathfrak{g}_{+}\), such that \(\phi(x)=\alpha,\phi(y)=\beta\) and \(\phi(z)=\gamma\). By Eq. (17), we have
\[-\llbracket J\alpha,J\beta,J\gamma\rrbracket+\llbracket J\alpha, \beta,\gamma\rrbracket+\llbracket\alpha,J\beta,\gamma\rrbracket+\llbracket \alpha,\beta,J\gamma\rrbracket\] \[+J\llbracket J\alpha,J\beta,\gamma\rrbracket+J\llbracket \alpha,J\beta,J\gamma\rrbracket+J\llbracket J\alpha,\beta,J\gamma\rrbracket\] \[= \llbracket x,y,z\rrbracket-\llbracket x,\phi(y),\phi(z) \rrbracket-\llbracket\phi(x),y,\phi(z)\rrbracket-\llbracket\phi(x),\phi(y),z\rrbracket\] \[-\phi^{-1}\llbracket x,y,\phi(z)\rrbracket-\phi^{-1}\llbracket \phi(x),y,z\rrbracket-\phi^{-1}\llbracket x,\phi(y),z\rrbracket\] \[= -\phi^{-1}\llbracket\phi(x),\phi(y),\phi(z)\rrbracket\] \[= J\llbracket\alpha,\beta,\gamma\rrbracket\,,\]
which implies that (14) holds for all \(\alpha,\beta,\gamma\in\mathfrak{g}_{-}\). Similarly, by Eq. (16), we obtain that
\[[J\alpha,J\beta]=J[J\alpha,\beta]+J[\alpha,J\beta]+[\alpha,\beta],\quad\forall \alpha,\beta\in\mathfrak{g}_{-}.\]
Thus we obtain that \(J\) is a complex structure on \(\mathfrak{g}_{-}\). Other cases can be deduced similarly. Thus \(J\) is a complex structure, and hence \((J,E)\) is a perfect complex product structure on \(\mathfrak{g}\). This finishes the proof.
In the sequel, we construct a perfect complex product structure via pre-Lie-Yamaguti algebras. Let \((A,*,\{\cdot,\cdot,\cdot\})\) be a pre-Lie-Yamaguti algebra. A nondegenerate symmetric bilinear form \(\mathfrak{B}\in\otimes^{2}A^{*}\) on \(A\) is called **invariant** if for all \(x,y,z,w\in A\)
\[\mathfrak{B}(x*y,z) = -\mathfrak{B}(y,x*z), \tag{18}\] \[\mathfrak{B}(\{x,y,z\},w) = \mathfrak{B}(x,\{w,z,y\}). \tag{19}\]
Then \(\mathfrak{B}\) induces a linear isomorphism \(\mathfrak{B}^{\sharp}:A\to A^{*}\) by
\[\langle\mathfrak{B}^{\sharp}(x),y\rangle=\mathfrak{B}(x,y),\quad\forall x,y \in A. \tag{20}\]
**Remark 3.6**.: By a direct calculation, we have
\[\mathfrak{B}(\{x,y,z\}_{D},w) = -\mathfrak{B}(z,\{x,y,w\}_{D}). \tag{21}\]
**Theorem 3.7**.: _Let \((A,*,\{\cdot,\cdot,\cdot\})\) be a pre-Lie-Yamaguti algebra with a nondegenerate symmetric invariant bilinear form \(\mathfrak{B}\in\otimes^{2}A^{*}\). Then there is a perfect complex product structure \((J,E)\) on the semidirect product Lie-Yamaguti algebra \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\), where \(E\) is given by (12), and the complex structure \(J\) is given by_
\[J(x+\alpha)=-\mathfrak{B}^{\sharp^{-1}}(\alpha)+\mathfrak{B}^{\sharp}(x), \quad\forall x\in A,\alpha\in A^{*}. \tag{22}\]
Proof.: By Proposition 2.20, \(E\) is a perfect paracomplex structure on \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\). For all \(x,y\in A\), we have
\[[\mathfrak{B}^{\sharp}(x),\mathfrak{B}^{\sharp}(y)]_{L^{*},-\mathcal{R}^{*}\tau}+(\mathfrak{B}^{\sharp})^{-1}[\mathfrak{B}^{\sharp}(x),y]_{L^{*},-\mathcal{R}^{*}\tau}+(\mathfrak{B}^{\sharp})^{-1}[x,\mathfrak{B}^{\sharp}(y)]_{L^{*},-\mathcal{R}^{*}\tau}-[x,y]_{C}\] \[= -(\mathfrak{B}^{\sharp})^{-1}\Big{(}L^{*}(y)\mathfrak{B}^{\sharp}(x)\Big{)}+(\mathfrak{B}^{\sharp})^{-1}\Big{(}L^{*}(x)\mathfrak{B}^{\sharp}(y)\Big{)}-[x,y]_{C}.\]
For any \(z\in A\), by (18) and (20), we have
\[-\langle L^{*}(y)\mathfrak{B}^{\sharp}(x)-L^{*}(x)\mathfrak{B}^{ \sharp}(y)+\mathfrak{B}^{\sharp}([x,y]_{C}),z\rangle\] \[= \langle\mathfrak{B}^{\sharp}(x),y*z\rangle-\langle\mathfrak{B}^{ \sharp}(y),x*z\rangle+\langle\mathfrak{B}^{\sharp}([x,y]_{C}),z\rangle\] \[= \mathfrak{B}(x,y*z)-\mathfrak{B}(y,x*z)-\mathfrak{B}(x*y,z)+ \mathfrak{B}(y*x,z)\] \[= 0,\]
which implies that (16) holds. Similarly, for all \(x,y,z\in A\), by (19), (21) and (20), we have
\[-[\![\mathfrak{B}^{\sharp}(x),\mathfrak{B}^{\sharp}(y),\mathfrak{B}^{\sharp}(z)]\!]_{L^{*},-\mathcal{R}^{*}\tau}+[\![\mathfrak{B}^{\sharp}(x),y,z]\!]_{L^{*},-\mathcal{R}^{*}\tau}+[\![x,\mathfrak{B}^{\sharp}(y),z]\!]_{L^{*},-\mathcal{R}^{*}\tau}+[\![x,y,\mathfrak{B}^{\sharp}(z)]\!]_{L^{*},-\mathcal{R}^{*}\tau}\] \[+\mathfrak{B}^{\sharp}[\![\mathfrak{B}^{\sharp}(x),\mathfrak{B}^{\sharp}(y),z]\!]_{L^{*},-\mathcal{R}^{*}\tau}+\mathfrak{B}^{\sharp}[\![\mathfrak{B}^{\sharp}(x),y,\mathfrak{B}^{\sharp}(z)]\!]_{L^{*},-\mathcal{R}^{*}\tau}+\mathfrak{B}^{\sharp}[\![x,\mathfrak{B}^{\sharp}(y),\mathfrak{B}^{\sharp}(z)]\!]_{L^{*},-\mathcal{R}^{*}\tau}\] \[= \mathfrak{B}^{\sharp}\big{(}[\![x,y,z]\!]_{C}\big{)},\]
which is (17) with \(\phi=\mathfrak{B}^{\sharp}\). Thus \((J,E)\) is a perfect complex product structure on \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\).
## 4. Para-Kahler structures on Lie-Yamaguti algebras
In this section, we add a compatibility condition between a symplectic structure and a paracomplex structure to introduce the notion of a para-Kahler structure on a Lie-Yamaguti algebra. A para-Kahler structure gives rise to a pseudo-Riemannian metric. Moreover, we introduce the notion of a Levi-Civita product associated to a pseudo-Riemannian Lie-Yamaguti algebra and relate it to the associated pre-Lie-Yamaguti algebra structure on a para-Kahler Lie-Yamaguti algebra.
Recall that a **phase space** of a Lie-Yamaguti algebra \((\mathfrak{b},[\cdot,\cdot]_{\mathfrak{b}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{b}})\) is a symplectic Lie-Yamaguti algebra \(\big{(}(T^{*}\mathfrak{b}=\mathfrak{b}\oplus\mathfrak{b}^{*},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!]),\omega_{p}\big{)}\) such that \(\mathfrak{b}\) and \(\mathfrak{b}^{*}\) are subalgebras of \(T^{*}\mathfrak{b}\), where the symplectic structure \(\omega_{p}\) is given by
\[\omega_{p}(x+\alpha,y+\beta)=\langle\alpha,y\rangle-\langle\beta,x\rangle\,, \quad\alpha,\beta\in\mathfrak{b}^{*},\ x,y\in\mathfrak{b}. \tag{23}\]
In the sequel, we give the main definition in this section.
**Definition 4.1**.: Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra, \(\omega\) a symplectic structure, and \(E\) a paracomplex structure on \(\mathfrak{g}\). The pair \((\omega,E)\) is called a **para-Kahler structure** on the Lie-Yamaguti algebra \(\mathfrak{g}\) if the following equality holds:
\[\omega(Ex,Ey)=-\omega(x,y),\quad\forall x,y\in\mathfrak{g}. \tag{24}\]
The triple \((\mathfrak{g},\omega,E)\) is called a **para-Kahler Lie-Yamaguti algebra**.
We give an equivalent description of para-Kahler Lie-Yamaguti algebras.
**Theorem 4.2**.: _Let \((\mathfrak{g},\omega)\) be a symplectic Lie-Yamaguti algebra. Then there is a paracomplex structure \(E\) on \(\mathfrak{g}\) such that \((\mathfrak{g},\omega,E)\) is a para-Kahler Lie-Yamaguti algebra if and only if there exist two isotropic Lie-Yamaguti subalgebras \(\mathfrak{g}_{+}\) and \(\mathfrak{g}_{-}\) such that \(\mathfrak{g}=\mathfrak{g}_{+}\oplus\mathfrak{g}_{-}\) as the direct sum of vector spaces._
Proof.: Let \((\mathfrak{g},\omega,E)\) be a para-Kahler Lie-Yamaguti algebra. Since \(E\) is a paracomplex structure on \(\mathfrak{g}\), we have \(\mathfrak{g}=\mathfrak{g}_{+}\oplus\mathfrak{g}_{-}\), where \(\mathfrak{g}_{+}\) and \(\mathfrak{g}_{-}\) are Lie-Yamaguti subalgebras of \(\mathfrak{g}\). For all \(x,y\in\mathfrak{g}_{+}\), on the one hand, we have
\[\omega(Ex,Ey)=\omega(x,y).\]
On the other hand, since \((\mathfrak{g},\omega,E)\) is a para-Kahler Lie-Yamaguti algebra, by (24), we get that \(\mathfrak{g}_{+}\) is isotropic. Similarly, \(\mathfrak{g}_{-}\) is also isotropic.
Conversely, since \(\mathfrak{g}_{+}\) and \(\mathfrak{g}_{-}\) are Lie-Yamaguti subalgebras and \(\mathfrak{g}=\mathfrak{g}_{+}\oplus\mathfrak{g}_{-}\) as vector spaces, there is a product structure on \(\mathfrak{g}\) defined by
\[E(x+\alpha)=x-\alpha,\quad\forall x\in\mathfrak{g}_{+},\alpha\in\mathfrak{g}_{-}.\]
Since \(\mathfrak{g}_{+}\) and \(\mathfrak{g}_{-}\) are isotropic, we have \(dim(\mathfrak{g}_{+})=dim(\mathfrak{g}_{-})\). Thus \(E\) is a paracomplex structure on \(\mathfrak{g}\). For all \(x,y\in\mathfrak{g}_{+},\alpha,\beta\in\mathfrak{g}_{-}\), since \(\mathfrak{g}_{+}\) and \(\mathfrak{g}_{-}\) are isotropic, we have
\[\omega(E(x+\alpha),E(y+\beta)) = \omega(x-\alpha,y-\beta)=\omega(x,\beta)-\omega(\alpha,y)\] \[= -\omega(x+\alpha,y+\beta).\]
Thus \((\mathfrak{g},\omega,E)\) is a para-Kahler Lie-Yamaguti algebra. This finishes the proof.
**Example 4.3**.: _Let \((\mathfrak{g},[\cdot,\cdot],[\![\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra with a basis \(\{e_{1},e_{2}\}\) whose nonzero brackets are given as follows:_
\[[e_{1},e_{2}]=e_{1},\quad[\![e_{1},e_{2},e_{2}]\!]=e_{1}.\]
_Then \((\omega,E)\) is a para-Kahler structure on \(\mathfrak{g}\), where \(\omega\) and \(E\) are given by_
\[\omega=ke_{1}^{*}\wedge e_{2}^{*}\ (k\neq 0)\quad\text{and}\quad E=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\]
_respectively._
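The brackets in this example are small enough to verify mechanically. The following sketch is our own illustration (not part of the original development); it fixes the assumed choice \(k=1\), encodes the structure constants, and checks the two identities of Definition 2.16 together with the para-Kahler compatibility condition (24):

```python
import itertools
import numpy as np

# Structure constants of Example 4.3 in the basis (e1, e2):
# [e1, e2] = e1 and [[e1, e2, e2]] = e1; all other basis brackets vanish
# (up to the built-in skew-symmetries).
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
basis = [e1, e2]

def bracket(x, y):
    # Bilinear skew-symmetric binary bracket.
    return (x[0] * y[1] - x[1] * y[0]) * e1

def triple(x, y, z):
    # Trilinear bracket, skew-symmetric in the first two arguments.
    return (x[0] * y[1] - x[1] * y[0]) * z[1] * e1

k = 1.0
omega = lambda x, y: k * (x[0] * y[1] - x[1] * y[0])  # omega = k e1^* wedge e2^*
E = np.diag([1.0, -1.0])

# Para-Kahler condition (24): omega(Ex, Ey) = -omega(x, y).
for x, y in itertools.product(basis, repeat=2):
    assert np.isclose(omega(E @ x, E @ y), -omega(x, y))

# First symplectic identity of Definition 2.16.
for x, y, z in itertools.product(basis, repeat=3):
    s = omega(x, bracket(y, z)) + omega(y, bracket(z, x)) + omega(z, bracket(x, y))
    assert np.isclose(s, 0.0)

# Second symplectic identity of Definition 2.16.
for x, y, z, w in itertools.product(basis, repeat=4):
    s = (omega(z, triple(x, y, w)) - omega(x, triple(w, z, y))
         + omega(y, triple(w, z, x)) - omega(w, triple(x, y, z)))
    assert np.isclose(s, 0.0)

print("Example 4.3 passes all para-Kahler checks.")
```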
We construct a para-Kahler Lie-Yamaguti algebra from a pre-Lie-Yamaguti algebra.
**Proposition 4.4**.: _Let \((A,*,\{\cdot,\cdot,\cdot\})\) be a pre-Lie-Yamaguti algebra. Then \((A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*},\omega_{p},E)\) is a perfect para-Kahler Lie-Yamaguti algebra, where \(E\) is given by (12), and \(\omega_{p}\) is given by (23)._
Proof.: By Theorem 4.7 in [24], \((A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*},\omega_{p})\) is a symplectic Lie-Yamaguti algebra. By Proposition 2.20, \(E\) is a perfect paracomplex structure on the phase space \(T^{*}A^{c}\). For all \(x,y\in A,\ \alpha,\beta\in A^{*}\), we have
\[\omega_{p}(E(x+\alpha),E(y+\beta)) = \omega_{p}(x-\alpha,y-\beta)=\langle-\alpha,y\rangle-\langle- \beta,x\rangle\] \[= -\omega_{p}(x+\alpha,y+\beta).\]
Hence, \((T^{*}A^{c}=A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*},\omega_{p},E)\) is a perfect para-Kahler Lie-Yamaguti algebra.
**Proposition 4.5**.: _Let \((\mathfrak{h},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra and \((\mathfrak{h}\oplus\mathfrak{h}^{*},\omega_{p})\) its phase space, where \(\omega_{p}\) is given by (23). Then \(E:\mathfrak{h}\oplus\mathfrak{h}^{*}\to\mathfrak{h}\oplus\mathfrak{h}^{*}\) defined by_
\[E(x+\alpha)=x-\alpha,\quad\forall x\in\mathfrak{h},\alpha\in\mathfrak{h}^{*},\]
_is a paracomplex structure and \((\mathfrak{h}\oplus\mathfrak{h}^{*},\omega_{p},E)\) is a para-Kahler Lie-Yamaguti algebra._
Proof.: It is obvious that \(E^{2}=\mathrm{Id}\) and that \(E\) satisfies the integrability conditions, i.e., \(E\) is a product structure on \(\mathfrak{h}\oplus\mathfrak{h}^{*}\). Moreover, since \(\mathfrak{h}\) and \(\mathfrak{h}^{*}\) have the same dimension, \(E\) is a paracomplex structure on \(\mathfrak{h}\oplus\mathfrak{h}^{*}\). For all \(x,y\in\mathfrak{h},\alpha,\beta\in\mathfrak{h}^{*}\), we have
\[\omega_{p}(E(x+\alpha),E(y+\beta))=\omega_{p}(x-\alpha,y-\beta)=-\langle\alpha,y\rangle+\langle\beta,x\rangle=-\omega_{p}(x+\alpha,y+\beta).\]
Thus \((\mathfrak{h}\oplus\mathfrak{h}^{*},\omega_{p},E)\) is a para-Kahler Lie-Yamaguti algebra.
Let \((\mathfrak{g},\omega,E)\) be a para-Kahler Lie-Yamaguti algebra. Then it is obvious that \(\mathfrak{g}_{-}\cong\mathfrak{g}_{+}^{*}\) via the symplectic structure \(\omega\). Moreover, it is straightforward to deduce the following proposition.
**Proposition 4.6**.: _Any para-Kahler Lie-Yamaguti algebra is isomorphic to a phase space of a Lie-Yamaguti algebra. More precisely, let \((\mathfrak{g},\omega,E)\) be a para-Kahler Lie-Yamaguti algebra such that \(\mathfrak{g}=\mathfrak{g}_{+}\oplus\mathfrak{g}_{-}\), then it is isomorphic to the phase space of \(\mathfrak{g}_{+}\)._
In the sequel, we focus on the para-Kahler Lie-Yamaguti algebras whose paracomplex structures are perfect and abelian at the same time.
**Proposition 4.7**.: _Let \((\mathfrak{g},\omega,E)\) be a para-Kahler Lie-Yamaguti algebra and \(E\) be abelian and perfect. The pre-Lie-Yamaguti algebra structure defined by \(\omega\) verifies_
\[E(x*y) = Ex*y, \tag{25}\] \[E\{x,y,z\} = -\{Ex,y,z\}+\{x,Ey,z\}+\{x,y,Ez\}. \tag{26}\]
Proof.: For all \(x,y,z,w\in\mathfrak{g}\), by Proposition 4.3 in [24], we have
\[\omega\Big{(}E(x*y),z\Big{)}=-\omega\Big{(}x*y,Ez\Big{)}=\omega\Big{(}y,[x,Ez] \Big{)}\stackrel{{ E\text{ is abelian}}}{{=}}-\omega\Big{(}y,[Ex,z] \Big{)}=\omega\Big{(}Ex*y,z\Big{)},\]
which proves (25). And similarly, we have
\[\omega\Big{(}E\{x,y,z\},w\Big{)} = -\omega\Big{(}\{x,y,z\},Ew\Big{)}\] \[= -\omega\Big{(}x,[\![Ew,z,y]\!]\Big{)}\] \[= \omega\Big{(}x,[\![w,Ez,y]\!]\Big{)}+\omega\Big{(}x,[\![w,z,Ey]\!]\Big{)}-\omega\Big{(}Ex,[\![w,z,y]\!]\Big{)}\quad\text{(since $E$ is perfect)}\] \[= \omega\Big{(}\{x,y,Ez\}+\{x,Ey,z\}-\{Ex,y,z\},w\Big{)},\]
which proves (26). This finishes the proof.
**Remark 4.8**.: By the definition of \(\{\cdot,\cdot,\cdot\}_{D}\) and (26), we can get the following directly
\[E\{x,y,z\}_{D}=\{Ex,y,z\}_{D}+\{x,Ey,z\}_{D}-\{x,y,Ez\}_{D}.\]
**Proposition 4.9**.: _Let \((\mathfrak{g},\omega,E)\) be a para-Kahler Lie-Yamaguti algebra and \(E\) be abelian. Then abelian Lie-Yamaguti subalgebras \(\mathfrak{g}_{+}\) and \(\mathfrak{g}_{-}\) are pre-Lie-Yamaguti subalgebras of \(\mathfrak{g}\) endowed with the pre-Lie-Yamaguti algebra structures defined by \(\omega\)._
Proof.: For all \(x,y,z,w\in\mathfrak{g}_{+}\), one has
\[\omega(x*y,z)=-\omega(y,[x,z]_{\mathfrak{g}_{+}})=0,\]
and one has
\[\omega(\{x,y,z\},w)=\omega(x,[[w,z,y]]_{\mathfrak{g}_{+}})=0,\]
hence \(x*y,\ \{x,y,z\}\in\mathfrak{g}_{+}\), which shows that \(\mathfrak{g}_{+}\) is a subalgebra for the pre-Lie-Yamaguti algebra structure. An analogous argument shows the result for \(\mathfrak{g}_{-}\).
At the end of this section, we study the Levi-Civita product associated to a para-Kahler Lie-Yamaguti algebra.
**Definition 4.10**.: A **pseudo-Riemannian Lie-Yamaguti algebra** is a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) equipped with a nondegenerate symmetric bilinear form \(S\in\otimes^{2}\mathfrak{g}^{*}\), called a **pseudo-Riemannian metric**. The associated **Levi-Civita product** is the pair \((\nabla,\Delta)\) of a bilinear operation \(\nabla:\otimes^{2}\mathfrak{g}\to\mathfrak{g}\) and a trilinear operation \(\Delta:\otimes^{3}\mathfrak{g}\to\mathfrak{g}\) determined by
\[2S(\nabla_{x}y,z)=S([x,y],z)+S([z,x],y)+S([z,y],x), \tag{27}\]
\[3S(\Delta_{x,y}z,w)=S([\![x,y,w]\!],z)+S([\![x,y,z]\!],w)+S([\![w,z,x]\!],y)+2S([\![w,z,y]\!],x). \tag{28}\]
**Proposition 4.11**.: _Let \((\mathfrak{g},\omega,E)\) be a para-Kahler Lie-Yamaguti algebra with \(E\) perfect, and define \(S(x,y)\triangleq\omega(x,Ey)\) for all \(x,y\in\mathfrak{g}\). Then \((\mathfrak{g},S)\) is a pseudo-Riemannian Lie-Yamaguti algebra, and the associated Levi-Civita product satisfies \(E\Delta_{x,y}z=\Delta_{Ex,Ey}Ez\)._
Proof.: Since \(\omega\) is skew-symmetric and \(\omega(Ex,Ey)=-\omega(x,y)\), we have
\[S(y,x)=\omega(y,Ex)=-\omega(Ey,E^{2}x)=-\omega(Ey,x)=\omega(x,Ey)=S(x,y),\]
which implies that \(S\) is symmetric. It is obvious that \(S\) is nondegenerate, thus \(S\) is a pseudo-Riemannian metric on \(\mathfrak{g}\). Moreover, by (24), we have
\[S(Ex,y)=\omega(Ex,Ey)=-\omega(x,y)=-S(x,Ey).\]
Thus, since \(E\) is perfect, we have
\[3S(\Delta_{Ex,Ey}Ez,w)\] \[= S([\![Ex,Ey,w]\!],Ez)+S([\![Ex,Ey,Ez]\!],w)+S([\![w,Ez,Ex]\!],Ey)+2S([\![w,Ez,Ey]\!],Ex)\] \[= S(E[\![x,y,Ew]\!],Ez)+S(E[\![x,y,z]\!],w)+S(E[\![Ew,z,x]\!],Ey)+2S(E[\![Ew,z,y]\!],Ex)\] \[= -\Big{(}S([\![x,y,Ew]\!],z)+S([\![x,y,z]\!],Ew)+S([\![Ew,z,x]\!],y)+2S([\![Ew,z,y]\!],x)\Big{)}\] \[= -3S(\Delta_{x,y}z,Ew)\] \[= 3S(E\Delta_{x,y}z,w),\]
which implies that \(E\Delta_{x,y}z=\Delta_{Ex,Ey}Ez\) holds.
There is a close relationship between the Levi-Civita product on a para-Kahler Lie-Yamaguti algebra and its compatible pre-Lie-Yamaguti algebra structure. The following proposition reinforces this fact.
**Proposition 4.12**.: _Let \((\mathfrak{g},\omega,E)\) be a para-Kahler Lie-Yamaguti algebra and \((\nabla,\Delta)\) the associated Levi-Civita product. Then for all \(x,y,z\in\mathfrak{g}_{+}\) and \(\alpha,\beta,\gamma\in\mathfrak{g}_{-}\), we have_
\[\nabla_{x}y=x*y,\ \Delta_{x,y}z=\{x,y,z\},\ \nabla_{\alpha}\beta=\alpha*\beta, \ \Delta_{\alpha,\beta}\gamma=\{\alpha,\beta,\gamma\}.\]
_Moreover, if \(S\) is invariant, we also have_
\[\Delta_{x,y}z=\{x,y,z\}_{D},\ \Delta_{\alpha,\beta}\gamma=\{\alpha,\beta, \gamma\}_{D}.\]
Proof.: Since \((\mathfrak{g},\omega,E)\) is a para-Kahler Lie-Yamaguti algebra and \(\mathfrak{g}=\mathfrak{g}_{+}\oplus\mathfrak{g}_{-}\), where \(\mathfrak{g}_{+}\) and \(\mathfrak{g}_{-}\) are isotropic subalgebras, then for all \(x,y,z,w\in\mathfrak{g}_{+}\), we have that
\[\omega(\nabla_{x}y,z)=0,\ \omega(\Delta_{x,y}z,w)=0.\]
Since \(\mathfrak{g}_{+}\) is isotropic, we obtain \(\nabla_{x}y\), \(\Delta_{x,y}z\in\mathfrak{g}_{+}\). Similarly, for all \(\alpha,\beta,\gamma\in\mathfrak{g}_{-}\), we have \(\nabla_{\alpha}\beta,\Delta_{\alpha,\beta}\gamma\in\mathfrak{g}_{-}\). Furthermore, for all \(x,y\in\mathfrak{g}_{+},\alpha\in\mathfrak{g}_{-}\), we have
\[2\omega(\nabla_{x}y,\alpha)=2S(\nabla_{x}y,E\alpha)=-2S(\nabla_{x}y,\alpha)\] \[= -S([x,y],\alpha)-S([\alpha,x],y)-S([\alpha,y],x)\] \[= -\omega(\alpha,[x,y])+\omega(y,[\alpha,x])+\omega(x,[\alpha,y])\] \[= -2\omega(y,[x,\alpha])=2\omega(x*y,\alpha).\]
For all \(x,y,z\in\mathfrak{g}_{+},\alpha\in\mathfrak{g}_{-}\), we have
\[3\omega(\Delta_{x,y}z,\alpha)=3S(\Delta_{x,y}z,E\alpha)=-3S(\Delta_{x,y}z,\alpha)\] \[= -S([\![x,y,\alpha]\!],z)-S([\![x,y,z]\!],\alpha)-S([\![\alpha,z,x]\!],y)-2S([\![\alpha,z,y]\!],x)\] \[= \omega(z,[\![x,y,\alpha]\!])-\omega(\alpha,[\![x,y,z]\!])+\omega(y,[\![\alpha,z,x]\!])+2\omega(x,[\![\alpha,z,y]\!])\] \[= 3\omega(x,[\![\alpha,z,y]\!])\]
\[= 3\omega(\{x,y,z\},\alpha).\]
Thus we have proved that
\[\nabla_{x}y=x*y,\ \ \Delta_{x,y}z=\{x,y,z\}.\]
Furthermore, if \(S\) is invariant, we have
\[3\omega(\Delta_{x,y}z,\alpha)=3S(\Delta_{x,y}z,E\alpha)=-3S( \Delta_{x,y}z,\alpha)\] \[= S(\llbracket\alpha,z,y\rrbracket,x)-S(\llbracket\alpha,z,x \rrbracket,y)-S(\llbracket x,y,z\rrbracket,\alpha)+2S(\llbracket x,y,\alpha \rrbracket,z)\] \[= -\omega(x,\llbracket\alpha,z,y\rrbracket)+\omega(y,\llbracket \alpha,z,x\rrbracket)-\omega(\alpha,\llbracket x,y,z\rrbracket)-2\omega(z, \llbracket x,y,\alpha\rrbracket)\] \[= -3\omega(z,\llbracket x,y,\alpha\rrbracket)=3\omega(\{x,y,z\}_{D},\alpha),\]
which implies that \(\Delta_{x,y}z=\{x,y,z\}_{D}\). Other equalities can be proved similarly and we omit the details.
## 5. Pseudo-Kahler structures on Lie-Yamaguti algebras
In this section, we add a compatibility condition between a symplectic structure and a complex structure to introduce the notion of a pseudo-Kahler structure on a Lie-Yamaguti algebra. Moreover, the relation between para-Kahler structures and pseudo-Kahler structures on a Lie-Yamaguti algebra is studied, and we construct a Kahler Lie-Yamaguti algebra from a pre-Lie-Yamaguti algebra.
**Definition 5.1**.: Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a real Lie-Yamaguti algebra, \(\omega\) a symplectic structure, and \(J\) a complex structure on \(\mathfrak{g}\). The pair \((\omega,J)\) is called a **pseudo-Kahler structure** on the Lie-Yamaguti algebra \(\mathfrak{g}\) if
\[\omega(Jx,Jy)=\omega(x,y),\quad\forall x,y\in\mathfrak{g}. \tag{29}\]
The triple \((\mathfrak{g},\omega,J)\) is called a real **pseudo-Kahler Lie-Yamaguti algebra**.
**Example 5.2**.: _Let \((\mathfrak{g},\llbracket\cdot,\cdot\rrbracket,\llbracket\cdot,\cdot,\cdot \rrbracket)\) be the Lie-Yamaguti algebra given in Example 4.3, then \((\omega,J)\) is a pseudo-Kahler structure on \(\mathfrak{g}\), where \(\omega\) and \(J\) are given by_
\[\omega=ke_{1}^{*}\wedge e_{2}^{*}\quad\text{and}\quad J=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\]
_respectively._
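As with Example 4.3, these claims can be verified numerically. The sketch below is ours; it reuses the structure constants of Example 4.3 (with \(k=1\)) and checks the compatibility (29) together with the integrability conditions (13) and (14):

```python
import itertools
import numpy as np

# Structure constants of the algebra of Example 4.3 in the basis (e1, e2).
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
basis = [e1, e2]
bracket = lambda x, y: (x[0] * y[1] - x[1] * y[0]) * e1
triple = lambda x, y, z: (x[0] * y[1] - x[1] * y[0]) * z[1] * e1
omega = lambda x, y: x[0] * y[1] - x[1] * y[0]       # k = 1
J = np.array([[0.0, 1.0], [-1.0, 0.0]])              # almost complex structure

# Compatibility (29): omega(Jx, Jy) = omega(x, y).
for x, y in itertools.product(basis, repeat=2):
    assert np.isclose(omega(J @ x, J @ y), omega(x, y))

# Integrability condition (13) for the binary bracket.
for x, y in itertools.product(basis, repeat=2):
    lhs = bracket(J @ x, J @ y)
    rhs = J @ bracket(J @ x, y) + J @ bracket(x, J @ y) + bracket(x, y)
    assert np.allclose(lhs, rhs)

# Integrability condition (14) for the ternary bracket.
for x, y, z in itertools.product(basis, repeat=3):
    lhs = J @ triple(x, y, z)
    rhs = (-triple(J @ x, J @ y, J @ z) + triple(J @ x, y, z)
           + triple(x, J @ y, z) + triple(x, y, J @ z)
           + J @ triple(J @ x, J @ y, z) + J @ triple(J @ x, y, J @ z)
           + J @ triple(x, J @ y, J @ z))
    assert np.allclose(lhs, rhs)

print("Example 5.2 passes all pseudo-Kahler checks.")
```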
The following two theorems illustrate the relation between the para-Kahler structures and pseudo-Kahler structures on Lie-Yamaguti algebras.
**Theorem 5.3**.: _Let \((\mathfrak{g},\omega,E)\) be a complex para-Kahler Lie-Yamaguti algebra. Then \((\mathfrak{g}_{\mathbb{R}},\omega_{\mathbb{R}},J)\) is a real pseudo-Kahler Lie-Yamaguti algebra, where \(\mathfrak{g}_{\mathbb{R}}\) is the underlying real Lie-Yamaguti algebra, \(J=iE\) and \(\omega_{\mathbb{R}}=\operatorname{Re}(\omega)\) is the real part of \(\omega\)._
Proof.: By Proposition 3.2, \(J=iE\) is a complex structure on the complex Lie-Yamaguti algebra \(\mathfrak{g}\). Thus \(J\) is also a complex structure on the real Lie-Yamaguti algebra \(\mathfrak{g}_{\mathbb{R}}\). It is obvious that \(\omega_{\mathbb{R}}\) is skew-symmetric. Suppose that \(\omega_{\mathbb{R}}(x,y)=0\) for all \(x\in\mathfrak{g}\). Then we have
\[\omega(x,y)=\omega_{\mathbb{R}}(x,y)+i\omega_{\mathbb{R}}(-ix,y)=0.\]
By the nondegeneracy of \(\omega\), we obtain that \(y=0\). Thus \(\omega_{\mathbb{R}}\) is nondegenerate. Therefore, \(\omega_{\mathbb{R}}\) is a symplectic structure on the real Lie-Yamaguti algebra \(\mathfrak{g}_{\mathbb{R}}\). By \(\omega(Ex,Ey)=-\omega(x,y)\), we have
\[\omega_{\mathbb{R}}(Jx,Jy)=\operatorname{Re}(\omega(iEx,iEy))=\operatorname{ Re}(-\omega(Ex,Ey))=\operatorname{Re}(\omega(x,y))=\omega_{\mathbb{R}}(x,y).\]
Thus \((\mathfrak{g}_{\mathbb{R}},\omega_{\mathbb{R}},J)\) is a real pseudo-Kahler Lie-Yamaguti algebra.
Conversely, we have the following theorem.
**Theorem 5.4**.: _Let \((\mathfrak{g},\omega,J)\) be a real pseudo-Kahler Lie-Yamaguti algebra. Then \((\mathfrak{g}_{\mathbb{C}},\omega_{\mathbb{C}},E)\) is a complex para-Kahler Lie-Yamaguti algebra, where \(\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}\otimes_{\mathbb{R}}\mathbb{C}\) is the complexification of \(\mathfrak{g}\), \(E=-iJ_{\mathbb{C}}\), and \(\omega_{\mathbb{C}}\) is the complexification of \(\omega\):_
\[\omega_{\mathbb{C}}(x_{1}+iy_{1},x_{2}+iy_{2})=\omega(x_{1},x_{2 })-\omega(y_{1},y_{2})+i\omega(x_{1},y_{2})+i\omega(y_{1},x_{2}),\] \[\forall x_{1},x_{2},y_{1},y_{2}\in\mathfrak{g}.\]
Proof.: By Corollary 3.3, \(E=-iJ_{\mathbb{C}}\) is a paracomplex structure on the complex Lie-Yamaguti algebra \(\mathfrak{g}_{\mathbb{C}}\). It is obvious that \(\omega_{\mathbb{C}}\) is skew-symmetric and nondegenerate. Moreover, since \(\omega\) is a symplectic structure on \(\mathfrak{g}\), we deduce that \(\omega_{\mathbb{C}}\) is a symplectic structure on \(\mathfrak{g}_{\mathbb{C}}\). Finally, by \(\omega(Jx,Jy)=\omega(x,y)\), we have
\[\omega_{\mathbb{C}}(E(x_{1}+iy_{1}),E(x_{2}+iy_{2})) = \omega_{\mathbb{C}}(Jy_{1}-iJx_{1},Jy_{2}-iJx_{2})\] \[= \omega(Jy_{1},Jy_{2})-\omega(Jx_{1},Jx_{2})-i\omega(Jx_{1},Jy_{2} )-i\omega(Jy_{1},Jx_{2})\] \[= \omega(y_{1},y_{2})-\omega(x_{1},x_{2})-i\omega(x_{1},y_{2})-i \omega(y_{1},x_{2})\] \[= -\omega_{\mathbb{C}}(x_{1}+iy_{1},x_{2}+iy_{2}).\]
Thus \((\mathfrak{g}_{\mathbb{C}},\omega_{\mathbb{C}},-iJ_{\mathbb{C}})\) is a complex para-Kahler Lie-Yamaguti algebra.
**Proposition 5.5**.: _Let \((\mathfrak{g},\omega,J)\) be a real pseudo-Kahler Lie-Yamaguti algebra. Define a bilinear form \(S\) on \(\mathfrak{g}\) by_
\[S(x,y)\triangleq\omega(x,Jy),\quad\forall x,y\in\mathfrak{g}. \tag{30}\]
_Then \((\mathfrak{g},S)\) is a pseudo-Riemannian Lie-Yamaguti algebra._
Proof.: By (29), we have
\[S(y,x)=\omega(y,Jx)=\omega(Jy,J^{2}x)=-\omega(Jy,x)=\omega(x,Jy)=S(x,y),\]
which implies that \(S\) is symmetric. Moreover, since \(\omega\) is nondegenerate and \(J^{2}=-\mathrm{Id}\), it is obvious that \(S\) is nondegenerate. Thus \(S\) is a pseudo-Riemannian metric on the Lie-Yamaguti algebra \(\mathfrak{g}\).
**Definition 5.6**.: Let \((\mathfrak{g},\omega,J)\) be a real pseudo-Kahler Lie-Yamaguti algebra. If the associated pseudo-Riemannian metric defined by (30) is positive definite, we call \((\mathfrak{g},\omega,J)\) a real **Kahler Lie-Yamaguti algebra**.
At the end of this section, we construct a Kahler Lie-Yamaguti algebra from a pre-Lie-Yamaguti algebra with a symmetric and invariant bilinear form.
**Proposition 5.7**.: _Let \((A,*,\{\cdot,\cdot,\cdot\})\) be a real pre-Lie-Yamaguti algebra with a positive definite symmetric invariant bilinear form \(\mathfrak{B}\). Then \((A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*},\omega_{p},-J)\) is a real Kahler Lie-Yamaguti algebra, where \(J\) is given by (22) and \(\omega_{p}\) is given by (23)._
Proof.: By Proposition 2.20 and Theorem 3.7, we have that \(\omega_{p}\) is a symplectic structure and \(J\) is a complex structure on the semidirect product Lie-Yamaguti algebra \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\). Obviously, \(-J\) is also a complex structure on \(A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*}\). Let \(\{e_{1},\cdots,e_{n}\}\) be a basis of \(A\) such that \(\mathfrak{B}(e_{i},e_{j})=\delta_{ij}\) and \(\{e_{1}^{*},\cdots,e_{n}^{*}\}\) the dual basis of \(A^{*}\). Then for all \(i,j,k,l\in\{1,\cdots,n\}\), we have
\[\omega_{p}(e_{i}+e_{j}^{*},e_{k}+e_{l}^{*}) = \delta_{jk}-\delta_{il},\]
\[\omega_{p}(-J(e_{i}+e_{j}^{*}),-J(e_{k}+e_{l}^{*})) = \omega_{p}(e_{j}-e_{i}^{*},e_{l}-e_{k}^{*})=-\delta_{il}+\delta_{kj}\]
which implies that
\[\omega_{p}(-J(x+\alpha),-J(y+\beta))=\omega_{p}(x+\alpha,y+\beta),\quad\forall x,y\in A,\alpha,\beta\in A^{*}.\]
Therefore \((A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*},\omega_{p},-J)\) is a pseudo-Kahler Lie-Yamaguti algebra. Finally, let \(x=\sum_{i=1}^{n}\lambda_{i}e_{i}\in A,\ \alpha=\sum_{j=1}^{n}\mu_{j}e_{j}^{*}\in A^{*}\) such that \(x+\alpha\neq 0\). We have
\[S(x+\alpha,x+\alpha) = \omega_{p}(x+\alpha,-J(x+\alpha))\] \[= \omega_{p}\Big{(}\sum_{i=1}^{n}\lambda_{i}e_{i}+\sum_{j=1}^{n}\mu _{j}e_{j}^{*},\sum_{j=1}^{n}\mu_{j}e_{j}-\sum_{i=1}^{n}\lambda_{i}e_{i}^{*} \Big{)}\] \[= \sum_{j=1}^{n}\mu_{j}^{2}+\sum_{i=1}^{n}\lambda_{i}^{2}>0.\]
Therefore, \(S\) is positive definite. Thus \((A^{c}\ltimes_{L^{*},-\mathcal{R}^{*}\tau}A^{*},\omega_{p},-J)\) is a real Kahler Lie-Yamaguti algebra. This finishes the proof.
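The positivity argument amounts to a simple matrix identity. In the sketch below (our illustration, identifying \(A\cong\mathbb{R}^{n}\) with a basis in which \(\mathfrak{B}(e_{i},e_{j})=\delta_{ij}\), so that \(\mathfrak{B}^{\sharp}\) is the identity matrix), the metric \(S(u,v)=\omega_{p}(u,-Jv)\) of (30) turns out to be the identity, hence positive definite:

```python
import numpy as np

n = 3                                   # dim A; any n works here
I, Z = np.eye(n), np.zeros((n, n))

# Identify A + A* with R^n + R^n; B-sharp is then the identity.
Omega = np.block([[Z, -I], [I, Z]])     # omega_p(x+a, y+b) = <a,y> - <b,x>
Jmat = np.block([[Z, -I], [I, Z]])      # J(x+a) = -B^{-1}(a) + B(x), Eq. (22)

# Pseudo-Riemannian metric S(u, v) = omega_p(u, -J v), cf. Eq. (30).
S = Omega @ (-Jmat)
assert np.allclose(S, np.eye(2 * n))    # S is the identity matrix

eigs = np.linalg.eigvalsh((S + S.T) / 2)
print("eigenvalues of S:", eigs)        # all equal to 1 > 0
```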
|
2303.09423 | Closed systems refuting quantum-speed-limit hypotheses | Many quantum speed limits for isolated systems can be generalized to also
apply to closed systems. This is, for example, the case with the well-known
Mandelstam-Tamm quantum speed limit. Margolus and Levitin derived an equally
well-known and ostensibly related quantum speed limit, and it seems to be
widely believed that the Margolus-Levitin quantum speed limit can be similarly
generalized to closed systems. However, a recent geometrical examination of
this limit reveals that it differs significantly from most known quantum speed
limits. In this paper, we show that, contrary to the common belief, the
Margolus-Levitin quantum speed limit does not extend to closed systems in an
obvious way. More precisely, we show that for every hypothetical bound of
Margolus-Levitin type, there are closed systems that evolve with a conserved
normalized expected energy between states with any given fidelity in a time
shorter than the bound. We also show that for isolated systems, the
Mandelstam-Tamm quantum speed limit and a slightly weakened version of this
limit that we call the Bhatia-Davies quantum speed limit always saturate
simultaneously. Both of these evolution time estimates extend straightforwardly
to closed systems. We demonstrate that there are closed systems that saturate
the Mandelstam-Tamm but not the Bhatia-Davies quantum speed limit. | Niklas Hörnedal, Ole Sönnerborn | 2023-03-16T15:55:13Z | http://arxiv.org/abs/2303.09423v2 | # Closed systems refuting quantum speed limit hypotheses
###### Abstract
Quantum speed limits for isolated systems that take the form of a distance divided by a speed extend straightforwardly to closed systems. This is, for example, the case with the well-known Mandelstam-Tamm quantum speed limit. Margolus and Levitin derived an equally well-known and ostensibly related quantum speed limit, and it seems to be widely believed that the Margolus-Levitin quantum speed limit can be similarly extended to closed systems. However, a recent geometrical examination of this limit reveals that it differs significantly from most quantum speed limits. In this paper, we show, contrary to the common belief, that the Margolus-Levitin quantum speed limit does not extend to closed systems in an obvious way. More precisely, we show that there exist closed systems that evolve between states with any given fidelity in an arbitrarily short time while keeping the normalized expected energy fixed at any chosen value. We also show that for isolated systems, the Mandelstam-Tamm quantum speed limit and a slightly weakened version of this limit that we call the Bhatia-Davies quantum speed limit always saturate simultaneously. Both of these evolution time estimates extend straightforwardly to closed systems. We demonstrate that there are closed systems that saturate the Mandelstam-Tamm quantum speed limit but not the Bhatia-Davies quantum speed limit.
## I Introduction
Many quantum speed limits (QSLs) for isolated systems take the form of a distance divided by a speed [1; 2; 3]. Such evolution time estimates can be straightforwardly extended to closed systems.1 The famous Mandelstam-Tamm QSL is an estimate of this kind [4; 5]. The Mandelstam-Tamm QSL states that the time it takes for an isolated system to evolve between two fully distinguishable states is bounded from below by2; 3
Footnote 1: An _isolated system_ is a system that evolves according to the von Neumann equation with a time-independent Hamiltonian, and a _closed system_ is one that evolves according to the von Neumann equation with a time-varying Hamiltonian.
Footnote 2: _State_ will always refer to a pure quantum state, that is, a state that can be represented by a density operator of rank 1.
Footnote 3: All quantities are expressed in units such that \(\hbar=1\).
\[\tau_{\textsc{MT}}=\frac{\pi}{2\Delta H}, \tag{1}\]
where \(\Delta H\) is the energy uncertainty. More generally, the time it takes for an isolated system to evolve between two states with fidelity \(\delta\) is bounded from below by4
Footnote 4: The _fidelity_ or _overlap_ between two states \(\rho_{1}\) and \(\rho_{2}\) is \(\text{tr}(\rho_{1}\rho_{2})\).
\[\tau_{\textsc{MT}}(\delta)=\frac{\arccos\sqrt{\delta}}{\Delta H}. \tag{2}\]
This estimate is also due to Mandelstam and Tamm but was rediscovered and formulated more concisely in [6].
The Mandelstam-Tamm QSL can be extended to closed systems by replacing the denominator in (2) with the corresponding time average. Thus, the evolution time of a closed system evolving between two states with fidelity \(\delta\) is bounded from below by
\[\bar{\tau}_{\textsc{MT}}(\delta)=\frac{\arccos\sqrt{\delta}}{\langle\!\langle\Delta H_{t}\rangle\!\rangle}, \tag{3}\]
with \(\langle\!\langle\Delta H_{t}\rangle\!\rangle\) being the time average of the energy uncertainty. Since the Fubini-Study distance between two states with fidelity \(\delta\) is \(\arccos\sqrt{\delta}\) and the Fubini-Study speed with which a state evolves is \(\Delta H_{t}\)[5; 7], the Mandelstam-Tamm QSL is saturated if and only if the state follows a Fubini-Study geodesic in the projective Hilbert space. Mandelstam and Tamm's QSL has been extended to systems in mixed states [7; 8; 9; 10].
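A minimal numerical illustration of the saturation condition (our own example, not taken from the text): for an isolated qubit with \(H=\sigma_{z}\) and initial state \(|+\rangle\), the state follows a Fubini-Study geodesic and the bound (2) is attained with equality at every time:

```python
import numpy as np
from scipy.linalg import expm

# Isolated qubit: H = sigma_z, initial state |+> = (|0> + |1>)/sqrt(2).
H = np.diag([1.0, -1.0])
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)

dH = np.sqrt(psi0 @ H @ H @ psi0 - (psi0 @ H @ psi0) ** 2)  # energy uncertainty

for t in np.linspace(0.05, np.pi / 2, 8):
    psi_t = expm(-1j * H * t) @ psi0
    delta = abs(np.vdot(psi0, psi_t)) ** 2        # fidelity with initial state
    tau_mt = np.arccos(np.sqrt(delta)) / dH       # Mandelstam-Tamm bound (2)
    assert np.isclose(tau_mt, t)                  # bound saturated: tau_MT = t

print("Mandelstam-Tamm bound saturated along the geodesic.")
```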
Margolus and Levitin [11] derived a seemingly similar evolution time estimate. The Margolus-Levitin QSL states that the time it takes for an isolated system to evolve between two fully distinguishable states is greater than or equal to
\[\tau_{\textsc{ML}}=\frac{\pi}{2\langle H-\epsilon_{\text{min}}\rangle}, \tag{4}\]
where \(\langle H-\epsilon_{\text{min}}\rangle\) is the expected energy \(\langle H\rangle\) shifted by the smallest occupied energy \(\epsilon_{\text{min}}\), hereafter called the normalized expected energy. A more general result states that the time it takes for an isolated system to evolve between two states with fidelity \(\delta\) is lower bounded by
\[\tau_{\textsc{ML}}(\delta)=\frac{\alpha(\delta)}{\langle H-\epsilon_{\text{min}}\rangle}, \tag{5}\]
where
\[\alpha(\delta)=\min_{z^{2}\leq\delta}\frac{1+z}{2}\arccos\Big{(}\frac{2\delta- 1-z^{2}}{1-z^{2}}\Big{)}. \tag{6}\]
Like \(\tau_{\textsc{mt}}(\delta)\), the bound \(\tau_{\textsc{ML}}(\delta)\) is tight, and \(\tau_{\textsc{ML}}(0)=\tau_{\textsc{ML}}\). The bound \(\tau_{\textsc{ML}}(\delta)\) was established numerically in [12] and derived analytically in [13]. Reference [13] also contains a geometric interpretation of \(\tau_{\textsc{ML}}(\delta)\) and a complete description of the systems that reach the bound.
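The function \(\alpha(\delta)\) in (6) has no simple closed form, but, in the spirit of the numerical treatment in [12], it is easy to evaluate. A minimal sketch of ours using SciPy's bounded scalar minimization:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def alpha(delta):
    """Evaluate Eq. (6) numerically: minimize over z with z^2 <= delta."""
    def objective(z):
        arg = (2 * delta - 1 - z**2) / (1 - z**2)
        return (1 + z) / 2 * np.arccos(np.clip(arg, -1.0, 1.0))
    bound = np.sqrt(delta)
    if bound == 0.0:
        return objective(0.0)
    return minimize_scalar(objective, bounds=(-bound, bound), method="bounded").fun

# Sanity check: alpha(0) = pi/2, recovering tau_ML(0) = tau_ML of Eq. (4).
print(alpha(0.0), np.pi / 2)                      # both ~1.5708
print([round(alpha(d), 4) for d in (0.1, 0.5, 0.9)])
```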
A natural guess is that the Margolus-Levitin QSL is also valid for closed systems, provided one puts the time average of the normalized expected energy in the denominator. More generally, one might expect that the evolution time of a closed system is lower bounded by a quantity of the form \(\mathcal{L}(\delta)/\langle\!\langle H_{t}-\epsilon_{\textsc{min};t}\rangle\!\rangle\) where \(\mathcal{L}\) is some positive function that depends only on the fidelity \(\delta\) between the initial and the final state. In the next section, we show that this is not the case:
_We show that for each state \(\rho\) and \(0\leq\delta\leq 1\), there exists a Hamiltonian \(H_{t}\) that evolves \(\rho\) to a state with fidelity \(\delta\) relative to \(\rho\) in an arbitrarily short time while keeping the normalized expected energy fixed at an arbitrary predetermined value._
Lui et al. [14] used the Bhatia-Davies inequality to transform the Mandelstam-Tamm QSL into an upper bound for a proposed operationally defined QSL [15]. This upper bound is a new QSL that we call the Bhatia-Davies QSL, although one should rightly attribute it to the authors of [14]. The Bhatia-Davies QSL states that the time it takes for an isolated system to evolve between two states with fidelity \(\delta\) is bounded from below by
\[\tau_{\textsc{BD}}(\delta)=\frac{\arccos\sqrt{\delta}}{\sqrt{\langle\epsilon_ {\max}-H\rangle\langle H-\epsilon_{\min}\rangle}}, \tag{7}\]
where \(\epsilon_{\max}\) is the largest and \(\epsilon_{\min}\) is the smallest occupied energy. The Bhatia-Davies QSL also extends straightforwardly to closed systems:
\[\bar{\tau}_{\textsc{BD}}(\delta)=\frac{\arccos\sqrt{\delta}}{\langle\!\langle \sqrt{\langle\epsilon_{\max;t}-H_{t}\rangle\langle H_{t}-\epsilon_{\min;t} \rangle}\,\rangle\!\rangle} \tag{8}\]
The Bhatia-Davies QSL is weaker than that of Mandelstam and Tamm in the sense that \(\bar{\tau}_{\textsc{MT}}(\delta)\geq\bar{\tau}_{\textsc{BD}}(\delta)\) with a strict inequality in general for both isolated and closed systems. We show that the Mandelstam-Tamm and the Bhatia-Davies QSLs are always saturated simultaneously for isolated systems but that this need not be the case for closed systems:
_We provide an example of a closed system that saturates the Mandelstam-Tamm but not the Bhatia-Davies QSL._
## II Time-dependent systems that disprove common belief
One obtains a relatively simple type of time-dependent Hamiltonian if one conjugates a time-independent Hamiltonian \(H\) with a one-parameter group of unitaries generated by a Hermitian operator \(A\):
\[H_{t}=e^{-iAt}He^{iAt}. \tag{9}\]
Such a group action will preserve the eigenvalues but rotate the eigenvectors of \(H\). If a state \(\rho\) evolves under the influence of \(H_{t}\),
\[\dot{\rho}_{t}=-i[H_{t},\rho_{t}],\qquad\rho_{0}=\rho, \tag{10}\]
the state in the rotating frame picture,
\[\rho_{t}^{\textsc{W}}=e^{iAt}\rho_{t}e^{-iAt}, \tag{11}\]
evolves as if the time-independent Hamiltonian \(A-H\) governed the dynamics:
\[\dot{\rho}_{t}^{\textsc{W}}=-i[A-H,\rho_{t}^{\textsc{W}}],\qquad\rho_{0}^{ \textsc{W}}=\rho. \tag{12}\]
As a consequence, in the Schrodinger picture,
\[\rho_{t}=e^{-iAt}e^{-i(A-H)t}\rho e^{i(A-H)t}e^{iAt}. \tag{13}\]
In general, the behavior of \(\rho_{t}\) can be quite complex even though \(H_{t}\) has a relatively simple time dependence. However, equation (13) tells us that if \(\rho\) commutes with \(A-H\), the evolving state will behave as if the time-independent 'effective' Hamiltonian \(A\) generates it:
\[\rho_{t}=e^{-iAt}\rho e^{iAt}. \tag{14}\]
This observation will be of central importance below.
The eigenvectors of \(H\) will also evolve with \(A\) as effective Hamiltonian: If \(|j\rangle\) is an eigenvector of \(H\) with eigenvalue \(\epsilon_{j}\), then \(|j;t\rangle=e^{-iAt}|j\rangle\) is an eigenvector of \(H_{t}\) with the eigenvalue \(\epsilon_{j}\). As a result, the occupations of the energy levels are constant over time:
\[\langle j;t|\rho_{t}|j;t\rangle=\langle j|\rho|j\rangle. \tag{15}\]
This means that the expected energy \(\langle H_{t}\rangle\), the energy uncertainty \(\Delta H_{t}\), and the normalized expected energy \(\langle H_{t}-\epsilon_{\textsc{min};t}\rangle\) and its 'dual' \(\langle\epsilon_{\max;t}-H_{t}\rangle\) are conserved quantities; see [13; 16] for a QSL involving the dual of the normalized expected energy.
Another important fact is that \(\rho_{t}\) is a Fubini-Study geodesic if \(A\rho+\rho A=A\); see Appendix A in [7]. If such is the case, the Mandelstam-Tamm QSL is saturated, and the system evolves between two states with fidelity \(\delta\) in time \(\bar{\tau}_{\textsc{MT}}(\delta)\). Interestingly, given an initial state \(\rho\) and a Hamiltonian \(H\), there is an elegant way to construct an \(A\) such that \([A-H,\rho]=0\) and \(A\rho+\rho A=A\): Write \(\rho=|u\rangle\langle u|\), let \(\epsilon=\langle u|H|u\rangle\), and define
\[A=(H-\epsilon)|u\rangle\langle u|+|u\rangle\langle u|(H-\epsilon). \tag{16}\]
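Both claimed properties of this construction, and the resulting simplification of (13) to (14), can be checked numerically. A sketch of ours, with a randomly drawn \(H\) and \(|u\rangle\) on a four-level system:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)

# Random Hamiltonian H and pure state rho = |u><u| on a 4-level system.
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2
u = rng.normal(size=n) + 1j * rng.normal(size=n)
u /= np.linalg.norm(u)
rho = np.outer(u, u.conj())

# The construction (16): A = (H - eps)|u><u| + |u><u|(H - eps).
eps = (u.conj() @ H @ u).real
A = (H - eps * np.eye(n)) @ rho + rho @ (H - eps * np.eye(n))

# Claimed properties: [A - H, rho] = 0 and A rho + rho A = A.
comm = (A - H) @ rho - rho @ (A - H)
assert np.allclose(comm, 0)
assert np.allclose(A @ rho + rho @ A, A)

# Consequently, Eq. (13) reduces to Eq. (14): rho_t = e^{-iAt} rho e^{iAt}.
t = 0.73
U_A, U_eff = expm(-1j * A * t), expm(-1j * (A - H) * t)
rho_t_general = U_A @ U_eff @ rho @ U_eff.conj().T @ U_A.conj().T   # Eq. (13)
rho_t_simple = U_A @ rho @ U_A.conj().T                             # Eq. (14)
assert np.allclose(rho_t_general, rho_t_simple)

print("Construction (16) satisfies both commutation properties.")
```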
Below we show how to disprove two hypotheses about QSLs with appropriate choices of \(\rho\) and \(H\), and \(A\) defined as in (16).
### The non-existence of a time-dependent Margolus-Levitin QSL
The Mandelstam-Tamm and Margolus-Levitin QSLs say that if one requires the state to follow a geodesic, one
cannot modify a time-independent Hamiltonian in such a way that the energy uncertainty takes on an arbitrarily large value without the normalized expected energy also doing so. Interestingly, this does not hold for time-dependent Hamiltonians. Below we give an example of a closed system whose state follows a geodesic and in which the energy uncertainty decouples from the normalized expected energy so that one can let the energy uncertainty assume arbitrarily large values while the normalized expected energy remains at a fixed, predetermined value.
_The consequence is that one can make the system evolve between two states with a given fidelity \(\delta\) in an arbitrarily short time and, at the same time, keep the normalized expected energy fixed at a finite value._
Consider a quantum system in a state \(\rho=|u\rangle\langle u|\). Let \(H\) be a Hamiltonian, to be specified, and define \(A\) as in (16). Further, let \(H_{t}=e^{-iAt}He^{iAt}\) and let \(\rho_{t}\) be the state at time \(t\) generated from \(\rho\) by \(H_{t}\). Then \(\rho_{t}=e^{-iAt}\rho e^{iAt}\), and \(\rho_{t}\) follows a Fubini-Study geodesic.
To specify \(H\) let \(|v\rangle\) be a unit vector perpendicular to \(|u\rangle\) and define the Pauli operators \(X\) and \(Z\) as
\[X = |u\rangle\langle u|-|v\rangle\langle v|, \tag{17}\] \[Z = |u\rangle\langle v|+|v\rangle\langle u|. \tag{18}\]
Fix the value \(E>0\) that the normalized expected energy should have, let \(\mu\) be a positive function on the interval \(0<\theta<\pi\), and define
\[H=\mu(\theta)(\sin\theta Z-\cos\theta X). \tag{19}\]
The largest and the smallest eigenvalues of \(H\), and thus of \(H_{t}\), are \(\mu(\theta)\) and \(-\mu(\theta)\), respectively, both of which are occupied by \(\rho_{t}\). Furthermore, the normalized expected energy and the energy uncertainty are
\[\langle H_{t}-\epsilon_{\text{min};t}\rangle=\mu(\theta)(1-\cos \theta), \tag{20}\] \[\Delta H_{t}=\mu(\theta)\sin\theta. \tag{21}\]
Since we want the normalized expected energy to be \(E\), we must define \(\mu\) as \(\mu(\theta)=E/(1-\cos\theta)\), implying that
\[\Delta H_{t}=E\cot(\theta/2). \tag{22}\]
Figure 1 shows how the normalized expected energy and the energy uncertainty depend on the angle \(\theta\).
Let \(\tau(\delta)\) be the first time the system reaches a state having fidelity \(\delta\) with the initial state \(\rho\). Since the state follows a Fubini-Study geodesic, the Mandelstam-Tamm QSL is saturated:
\[\tau(\delta)=\bar{\tau}_{\textsc{MT}}(\delta)=\frac{\arccos\sqrt{\delta}}{E\cot(\theta/2)}. \tag{23}\]
This evolution time can be made arbitrarily small by choosing \(\theta\) sufficiently close to \(0\). However, regardless of the value of \(\theta\), the normalized average energy is preserved with the prescribed value \(E\). We conclude that irrespective of a required fidelity \(\delta\) between the initial and the final states, a Hamiltonian exists that evolves the system between two states with fidelity \(\delta\) in an arbitrarily short time and along a trajectory such that the normalized expected energy is conserved with a prescribed value.
_Consequently, the Margolus-Levitin QSL does not obviously extend to closed systems._
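The counterexample family is easy to reproduce numerically. The sketch below is ours; it builds \(H\) for a sequence of angles \(\theta\) and confirms that the normalized expected energy stays at the prescribed value \(E\) while the evolution time (23) shrinks to zero. The value \(E=1\), the fidelity \(\delta=0\), and the sampled angles are our own choices:

```python
import numpy as np

# Two-level counterexample: H = mu(theta) (sin(theta) Z - cos(theta) X) in the
# basis (|u>, |v>), with X = |u><u| - |v><v| and Z = |u><v| + |v><u| as in the
# text, rho = |u><u|, and mu(theta) = E / (1 - cos(theta)).
E_target = 1.0
X = np.diag([1.0, -1.0])
Z = np.array([[0.0, 1.0], [1.0, 0.0]])
rho = np.diag([1.0, 0.0])
delta = 0.0                      # fully distinguishable target states

for theta in (1.0, 0.1, 0.01, 0.001):
    mu = E_target / (1 - np.cos(theta))
    H = mu * (np.sin(theta) * Z - np.cos(theta) * X)

    e_min = np.linalg.eigvalsh(H).min()                    # equals -mu
    mean_H = np.trace(rho @ H).real
    var_H = np.trace(rho @ H @ H).real - mean_H**2

    norm_energy = mean_H - e_min                           # Eq. (20)
    dH = np.sqrt(var_H)                                    # Eqs. (21)-(22)
    tau = np.arccos(np.sqrt(delta)) / dH                   # Eq. (23)

    assert np.isclose(norm_energy, E_target)               # fixed at E
    assert np.isclose(dH, E_target / np.tan(theta / 2))    # E cot(theta/2)
    print(f"theta={theta:7.3f}  <H - e_min>={norm_energy:.3f}  tau(0)={tau:.3e}")
```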
In Figure 2, we have represented \(H\), \(A\), and \(\rho\) as Bloch vectors relative to \(X\), \(Y\), and \(Z\), with \(Y=i(|u\rangle\langle v|-|v\rangle\langle u|)\). The angle between \(H\) and the negative \(x\)-axis is \(\theta\). As time passes, the state and the Hamiltonian rotate around the \(z\)-axis with the same angular velocity. Note that \(\rho_{t}\) moves along the equator in the Bloch sphere and thus is a Fubini-Study geodesic. The dotted vectors represent the state and the Hamiltonian at a time \(t>0\).
Figure 1: Graphs illustrating the dependence of the normalized expected energy (red) and energy uncertainty (blue) on the angle \(\theta\). The requirement that the normalized expected energy be constant forces the energy uncertainty to grow toward infinity with decreasing angle.
Figure 2: Bloch vector representations of \(H\), \(A\), and \(\rho\). The vector representing \(\rho\) points along the positive \(x\)-axis, and the vector representing \(H\) makes the angle \(\theta\) with the negative \(x\)-axis. The purple circle represents the expected energy level to which \(\rho\) belongs. As time passes, the state and the Hamiltonian rotate around the \(z\)-axis with the same angular velocity. The dashed vectors represent \(H_{t}\) and \(\rho_{t}\) at a time \(t>0\). The expected energy level rotates with the state.
The purple circle, formed by intersecting the Bloch sphere with a plane perpendicular to the extension of the vector representing \(H\), represents the expected energy level to which \(\rho\) belongs. This circle rotates together with \(H_{t}\) and always lies in a plane perpendicular to the vector representing \(H_{t}\). The key observation is that this circle corresponds to the normalized expected energy \(E\) irrespective of the value of angle \(\theta\), and \(\rho_{t}\) will evolve together with that circle.
Most initial states will not evolve in such a well-behaved manner as those located on the equator of the Bloch sphere. In Figure 3, we have drawn the evolution curve of a state not on the equator. In the rotating frame picture (12), the evolution curve forms a circle around the \(x\)-axis. This is because \(A-H\propto X\).
### The Bhatia-Davies QSL
The example in the previous section shows that the normalized expected energy alone does not necessarily limit the evolution time from below for closed systems. In the example, however, an arbitrary width of the energy spectrum was permitted. If we require that the spectral width does not exceed a given value, the evolution time cannot be made arbitrarily small. This is because the energy uncertainty cannot exceed the spectral width.
The Bhatia-Davies inequality [17] provides a tighter bound on the energy uncertainty than the spectral width. The Bhatia-Davies inequality states that the variance of any observable \(B\) is bounded from above according to
\[\Delta^{2}B\leq\langle b_{\text{max}}-B\rangle\langle B-b_{\text{min}}\rangle, \tag{24}\]
with \(b_{\text{max}}\) and \(b_{\text{min}}\) being the largest and the smallest occupied eigenvalues of \(B\). Consequently, the evolution time of an isolated system is bounded by \(\tau_{\text{BD}}(\delta)\) defined in (7), and the evolution time of a closed system is bounded by \(\bar{\tau}_{\text{BD}}(\delta)\) defined in (8).
Equality holds in the Bhatia-Davies inequality if and only if the state occupies at most two eigenvalues of \(B\). Since the state of an isolated system saturating the Mandelstam-Tamm QSL occupies only two energy levels [7; 18], the Mandelstam-Tamm and Bhatia-Davies QSLs are always saturated simultaneously for isolated systems.
The Mandelstam-Tamm and Bhatia-Davies QSLs generalize to closed systems as in (3) and (8), respectively, and a natural guess would be that these QSLs are also always saturated simultaneously. However, as we will see, a time-dependent Hamiltonian can evolve a state at a constant speed along a Fubini-Study geodesic in such a way that the state during the entire evolution occupies more than two energy levels. Such an evolution will saturate the Mandelstam-Tamm QSL but not the Bhatia-Davies QSL. This is because the Bhatia-Davies inequality will be strict over the entire evolution time interval, which means that the denominator in (8) is strictly greater than the denominator in (3).
### A non-saturation of the Bhatia-Davies QSLs
Let \(H\) be a Hamiltonian for a system with at least three distinct eigenvalues, and let \(\rho=|u\rangle\langle u|\) be any state occupying at least three of those. Define \(A\) as in (16), let \(H_{t}=e^{-iAt}He^{iAt}\), and let \(\rho_{t}\) be the state at time \(t\) generated from \(\rho\) by \(H_{t}\). Since \([A-H,\rho]=0\) and \(A\rho+\rho A=A\), the Mandelstam-Tamm QSL is saturated, and the system will evolve between two states with fidelity \(\delta\) in time \(\bar{\tau}_{\text{MT}}(\delta)\). Furthermore, since \(\rho_{t}\) always occupies at least three different energy levels,
\[\Delta^{2}H_{t}<\langle\epsilon_{\text{max;}t}-H_{t}\rangle\langle H_{t}- \epsilon_{\text{min;}t}\rangle. \tag{25}\]
Therefore, \(\bar{\tau}_{\text{MT}}(\delta)>\bar{\tau}_{\text{BD}}(\delta)\), and the Mandelstam-Tamm QSL is saturated but not the Bhatia-Davies QSL.
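A quick numerical illustration of the strict inequality (25), using an assumed three-level Hamiltonian and a state with equal support on all three energy eigenvectors:

```python
import numpy as np

H = np.diag([0.0, 1.0, 2.0])             # three distinct energy levels (assumed)
u = np.ones(3) / np.sqrt(3.0)            # |u> occupies all three levels
rho = np.outer(u, u)

mean_H = np.trace(rho @ H)
var_H = np.trace(rho @ H @ H) - mean_H**2
e_max, e_min = 2.0, 0.0                  # largest/smallest occupied eigenvalues
bd_bound = (e_max - mean_H) * (mean_H - e_min)

print(var_H, bd_bound)                   # 2/3 < 1: Bhatia-Davies is strict, Eq. (25)
assert var_H < bd_bound
```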
## III Summary
A common view is that the Margolus-Levitin quantum speed limit extends to an evolution time estimate for closed systems of the form \(\mathcal{L}(\delta)/\langle\!\langle H_{t}-\epsilon_{\text{min;}t}\rangle\!\rangle\), where \(\mathcal{L}\) is a positive function that only depends on the fidelity \(\delta\) between the initial and final states and \(\langle\!\langle H_{t}-\epsilon_{\text{min;}t}\rangle\!\rangle\) is the time average of the normalized expected energy. We have shown with a counterexample that this is not the case. More precisely, we have constructed a closed system that evolves between two states with fidelity \(\delta\) in an arbitrarily short time while keeping the normalized expected energy fixed at an arbitrary predetermined value.
We have also considered a QSL for isolated systems called the Bhatia-Davies QSL. This QSL extends straightforwardly to closed systems. We have shown that the Bhatia-Davies and Mandelstam-Tamm QSLs are always simultaneously saturated for isolated systems but that this need not be the case for closed systems.
Figure 3: An evolution curve starting from a state not on the equator of the Bloch sphere. In this case, \(\theta=30^{\circ}\) and \(E=1\). The left figure shows the evolution curve in the Schrödinger picture, and the right figure shows the same curve in the rotating frame picture. The warmer colors indicate more recent times, and the blue arrow represents the state at the final time. |
2308.14066 | Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential
Generative Adversarial Networks | In this paper, we propose a bi-modality medical image synthesis approach
based on sequential generative adversarial network (GAN) and semi-supervised
learning. Our approach consists of two generative modules that synthesize
images of the two modalities in a sequential order. A method for measuring the
synthesis complexity is proposed to automatically determine the synthesis order
in our sequential GAN. Images of the modality with a lower complexity are
synthesized first, and the counterparts with a higher complexity are generated
later. Our sequential GAN is trained end-to-end in a semi-supervised manner. In
supervised training, the joint distribution of bi-modality images are learned
from real paired images of the two modalities by explicitly minimizing the
reconstruction losses between the real and synthetic images. To avoid
overfitting limited training images, in unsupervised training, the marginal
distribution of each modality is learned based on unpaired images by minimizing
the Wasserstein distance between the distributions of real and fake images. We
comprehensively evaluate the proposed model using two synthesis tasks based on
three types of evaluation metrics and user studies. Visual and quantitative
results demonstrate the superiority of our method to the state-of-the-art
methods, and reasonable visual quality and clinical significance. Code is made
publicly available at
https://github.com/hustlinyi/Multimodal-Medical-Image-Synthesis. | Xin Yang, Yi Lin, Zhiwei Wang, Xin Li, Kwang-Ting Cheng | 2023-08-27T10:39:33Z | http://arxiv.org/abs/2308.14066v2 | Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential Generative Adversarial Networks
###### Abstract
In this paper, we propose a bi-modality medical image synthesis approach based on sequential generative adversarial networks (GANs) and semi-supervised learning. Our approach consists of two generative modules that synthesize images of the two modalities in a sequential order. A method for measuring the synthesis complexity is proposed to automatically determine the synthesis order in our sequential GAN. Images of the modality with a lower complexity are synthesized first, and the counterparts with a higher complexity are generated later. Our sequential GAN is trained end-to-end in a semi-supervised manner. In supervised training, the joint distribution of bi-modality images is learned from real paired images of the two modalities by explicitly minimizing the reconstruction losses between the real and synthetic images. To avoid overfitting the limited training images, in unsupervised training, the marginal distribution of each modality is learned from unpaired images by minimizing the Wasserstein distance between the distributions of real and fake images. We comprehensively evaluate the proposed model using two synthesis tasks based on three types of evaluation metrics and user studies. Visual and quantitative results demonstrate the superiority of our method to the state-of-the-art methods, and reasonable visual quality and clinical significance. Code is made publicly available at [https://github.com/hustlinyi/Multimodal-Medical-Image-Synthesis](https://github.com/hustlinyi/Multimodal-Medical-Image-Synthesis).
## I Introduction
Multimodal medical imaging generally refers to the incorporation of two or more imaging modalities within a single examination, for example using both functional and anatomical forms of magnetic resonance imaging (i.e., multiparametric MRI), or by performing ultrasound within an MR or x-ray computed tomography (CT) environment. Multimodal imaging could ideally provide a comprehensive examination of the targeted tissue, including the exact localization and metabolic activity of the target tissue, the tissue flow and functional changes within the surrounding tissues, and the pathognomonic changes leading to eventual disease. Due to its high effectiveness and wide application, multimodal imaging has evolved rapidly and is becoming standard practice in the clinic for diagnosing many diseases [1].
The wide availability of multimodal medical images could greatly facilitate researchers in developing and validating advanced computer-aided medical diagnosis techniques. However, in practice, tuples of corresponding images in different modalities are very scarce and expensive to acquire due to the high cost and complex acquisition procedure. The pressing need for data has increased further with the advent of deep neural networks, which are typically data-hungry and have become the standard approach in most machine learning tasks [2]. The limited amount and diversity of paired multimodal medical images could greatly restrict the capability of existing deep learning methods. Synthesizing multimodal images could be an attractive solution. Extensive studies have recently demonstrated that synthesizing medical images could successfully improve classification, registration and/or segmentation performance in medical imaging tasks [3-7]. These studies proved that although synthetic data is derived from data that is already available, it can still improve the performance of medical tasks, as the synthesizer could learn a complete and continuous image distribution. Therefore, the ability to synthesize high-quality and clinically meaningful multimodal medical images is highly desirable, while such a problem remains a widely unsolved challenge. Bi-modality image synthesis is a fundamental task towards the ultimate goal of synthesizing co-registered multimodal images. We focus our study on bi-modality image synthesis in this paper.
There exists a large number of studies on medical image synthesis in the literature, ranging from conventional physical model-based methods [8, 9] to learning-based data-driven approaches [10-13]. Recent studies mainly employ generative adversarial networks (GANs) for their fast advances and impressive results. However, most existing works focus on either single-modal image synthesis [14] or cross-modal image-to-image translation [15]. Approaches for generating tuples of corresponding images in two modalities, from a common low-dimensional vector, are largely under-studied. In the literature on natural image synthesis, there exist several methods [16-18] for directly synthesizing multi-domain data from random noise vectors. Despite the success of these methods in handling natural image tasks, they can hardly achieve satisfactory results in the task of synthesizing bi-modality medical images due to the limited amount of bi-modality training data and the difficulty of capturing clinically meaningful characteristics of medical images.
In this work, we aim at a novel framework for generating high-quality, clinically meaningful bi-modality medical images using GANs. Specifically, the proposed framework consists of two generative modules that synthesize corresponding images of the two modalities in a sequential order. A complexity measurer is introduced to automatically determine the synthesis order. The modality that has a lower complexity is synthesized first via a decoder that maps a low-dimensional latent vector to an image. Then the other modality, with a higher complexity, is synthesized via an image-to-image translator, which generates images with the guidance of the previously generated counterparts in the other modality. The proposed synthesizer (i.e., the decoder and the image-to-image translator) is trained via semi-supervised learning. In
supervised training, the synthesizer learns the joint distribution of bi-modality images explicitly by directly minimizing the reconstruction loss between fake and corresponding real images. To avoid overfitting to the limited paired bi-modality images, in unsupervised training the decoder and the image-to-image translator learn the marginal distributions of the two modalities via adversarial learning. That is, for each modality, we minimize the Wasserstein distance between the distributions of the fake and real images of this modality. Despite learning marginal distributions in the unsupervised process, the joint distribution of bi-modality images is implicitly approximated by enforcing a weight-sharing constraint on the first few deconvolutional layers of the decoder and the translator. As a result, the coarse spatial layouts of the corresponding images of the two modalities are decoded in a similar way based on the first few deconvolutional layers, while detailed and unique information in each modality is decoded differently based on the respective deconvolutional layers at the back. The resulting synthesizer can generate as many tuples of corresponding medical images of the two modalities as the user requires, by sampling from a probability distribution that we impose on the associated latent space.
We take two task drivers, i.e., synthesizing apparent diffusion coefficient (ADC) maps and T2-weighted (T2w) images for prostate cancer classification, and T1w-T2w brain image synthesis, to comprehensively evaluate the proposed model and compare it with the state-of-the-art methods [11, 17, 19, 20] based on four types of metrics: 1) Inception Score (IS) [21] and Frechet Inception Distance (FID) [22], which are widely-used metrics in the vision domain for evaluating the quality of synthetic images [23]; 2) the task-specific metric, i.e., classification accuracy, for assessing the effectiveness of synthetic images for clinically significant (CS) cancer vs. nonCS image classification; 3) Mutual Information Distance (MID), which measures the quality of multimodal synthetic images; and 4) a user study involving 3 radiologists with 1-year, 8-year and 23-year reading experience respectively, for assessing the clinical significance of synthetic multimodal images. Extensive experiments consistently demonstrate the superiority of our model to the state-of-the-art methods.
To summarize, the main contributions of this work include:
* We present a novel bi-modality image synthesis network based on semi-supervised learning. Our network utilizes supervised training to ensure correct spatial correlation between synthetic images of the two modalities and meanwhile provides high visual realism and diversity via unsupervised training. Extensive experimental results demonstrate the superiority of our method to the state-of-the-art methods [11, 17, 19, 20].
* We introduce a novel method that can effectively assess the complexity of synthesizing images of a modality. We integrate the complexity measure into our image synthesis framework to automatically determine the synthesis order. The estimated complexity is consistent with both experience and the final synthesis results.
* We conduct a very comprehensive evaluation based on two synthesis tasks, three types of evaluation metrics and a user study to understand the performance of our method from various aspects, i.e., visual realism, diversity, correlations between corresponding bi-modality images, and clinical significance. In addition, we carefully analyze the impact of each proposed module, i.e., the sequential architecture, semi-supervised learning and the synthesis complexity measurer, and compare with the state-of-the-art methods via extensive experiments.
## II Related Work
Conventional image synthesis methods mainly formulate a physical model of the observed data. These models can range from simple digital phantoms [8] to more complex methodologies attempting to mimic anatomical and physiological medical knowledge [9]. By modeling characteristics of the different acquisition devices, these methods can generate new high-quality images by sampling in an appropriate parameter space.
In recent years, we have witnessed an increasing popularity of data-driven approaches for medical image synthesis. In this context, the underlying probability distribution that defines the manifold of real images is learned by machine learning techniques from a large pool of training images. Once trained, new images that lie on the manifold, i.e., realistic synthetic images, can be generated by sampling from the learned distribution. Various data-driven image synthesis approaches have been recently developed and successfully applied to medical applications. For instance, in [3, 6, 24, 25, 26] the authors demonstrated that synthesized data could improve the performance of classification and/or segmentation and/or registration of multimodal MRI images. In [15, 27], Nie et al. proposed a fully convolutional network with an adversarial loss and an image gradient difference loss to generate CT from an MRI image. However, these works mainly focus on single-modal image synthesis or cross-modal image-to-image translation. In this work, we aim at concurrently generating medical images of two modalities from common low-dimensional vectors.
In terms of multimodal image synthesis, existing methods can be categorized into two classes: 1) parallel multi-modal image synthesis which generates images of two modalities using two GANs from common low dimensional vectors, as shown in Fig. 2(a), 2) sequential multi-modal image synthesis, which first generates images of one modality from low-dimensional vectors, followed by image-to-image translation that maps them to their counterparts in the other modality, as shown in Fig. 2(b), (c).
Examples of the first class include [12, 17, 28]. For instance, in [17] Liu and Tuzel proposed coupled GAN (CoGAN), in which two GANs were utilized to learn the marginal image distributions in different domains (e.g., color and depth images, face images with different attributes, etc.) respectively. Images of each domain are then synthesized by sampling from the corresponding marginal distribution. To guarantee the correct relationships between corresponding images in different domains, a weight-sharing constraint is enforced so that the several GANs decode high-level semantics in the same way. In [12] Chartsias et al. proposed a multimodal MR synthesis method by learning a shared modality-invariant
latent space in which all input modalities are embedded. Their method was evaluated on two public datasets, i.e., ISLES [29] and BRATS [30], to demonstrate its superiority to the state-of-the-art methods. However, these methods completely ignore the inherent differences in task complexity arising from the acquisition characteristics of different modalities. For instance, as shown in Fig. 1(a), the ADC map, which captures the functional and coarse anatomical information of a prostate, has a low spatial resolution. In comparison, the T2w image, which captures the detailed anatomical structure of a prostate, contains much high-frequency texture information. As a result, synthesizing T2w images of a prostate using parallel GANs, e.g., CoGAN, could be more challenging than synthesizing the corresponding ADC images due to the much higher complexity of synthesizing the fine texture of T2w images. Consequently, the quality of synthetic T2w images is much worse than that of ADC maps (as shown in Fig. 1(b)).
To address the above issue, the second class of solutions, i.e., sequential bi-modality image synthesis, facilitates the consideration of different task complexities, providing a more flexible solution for multimodal medical image synthesis. In the sequential multimodal image synthesis framework, the modality with a lower synthesis complexity is synthesized first. Images of the other modality are then generated via image-to-image translation. For instance, in [11] Costa et al. proposed to first generate a retinal vessel map via a GAN, followed by another GAN which synthesizes the corresponding retinal fundus image based on the synthetic vessel map. In [11], the sequential GANs were trained in a supervised manner (as shown in Fig. 2(b)), where the encodings (i.e., latent codes) of real retinal images along with paired vessel maps are used. The constraint of the paired relationship between a vessel map and its corresponding retinal image is accomplished by minimizing both the reconstruction losses of vessel-fundus pairs and an adversarial loss. However, [11] requires tuples of corresponding images in different modalities/domains for training, which are challenging to obtain in practice. Limited training data makes the GANs only "see" very sparsely distributed samples, making it difficult to model a complete and well-approximated joint distribution of true multimodal images and in turn leading to poor performance during the testing phase (as shown in Fig. 1(c)). One potential solution to address this issue is to remove the correspondence dependency via unsupervised training, as shown in Fig. 2(c). That is, train the sequential GANs using unpaired multimodal images so that each GAN can learn the marginal distribution of a particular modality and in turn generate visually realistic images of this modality. However, unsupervised training cannot ensure correct relationships among different modalities (as shown in Fig. 1(d)).
## III Methodology
In this study, our goal is to employ the generative adversarial learning technique to model the joint distribution of bi-modality medical images \((I_{syn}^{1},I_{syn}^{2})\sim\mathbb{P}_{syn}(I_{syn}^{1},I_{syn}^{2})\) based on a given set of training images \(\{(I_{real,m}^{1},I_{real,m}^{2})\,|\,m=1,2,\ldots,M\}\), where \(I^{k}\) denotes the \(k\)-th modality, \((I_{m}^{1},I_{m}^{2})\) indicates a tuple of spatially aligned bi-modality images, and \(M\) denotes the total number of training pairs.
Fig. 1: Examples of six (a) real pairs of ADC-T2w images, and six synthetic ADC-T2w images based on (b) CoGAN [17], (c) our sequential supervised GANs [11], (d) our sequential unsupervised GANs, (e) our sequential semi-supervised GANs. The top row of (a)–(e) shows the ADC maps and the bottom shows T2w.
Fig. 2: Framework of (a) parallel GANs, (b) supervised sequential GANs, (c) unsupervised sequential GANs. In supervised training, paired bi-modality real images \(A_{i}\) and \(B_{j}\) are provided for training, where \(A_{i}\) and \(B_{j}\) denote images in two modalities and \(i=j\). In unsupervised training, unpaired bi-modality real images \(A_{i}\) and \(B_{j}\) are provided for training and \(i\neq j\).
We desire that the manifold defined by the learned distribution \(\mathbb{P}_{syn}(I^{1}_{syn},I^{2}_{syn})\) can well approximate the true manifold defined by \(\mathbb{P}_{real}(I^{1}_{real},I^{2}_{real})\) so that synthetic images sampled from \(\mathbb{P}_{syn}(I^{1}_{syn},I^{2}_{syn})\) are visually realistic and have a large diversity.
To achieve our goal, we propose a general bi-modality image synthesis framework based on sequential generative adversarial learning and semi-supervised learning, as shown in Fig. 3. It mainly consists of three modules: 1) a complexity measurer which ranks the modalities in increasing order of synthesis difficulty \((i,j)\); 2) an encoder which maps a real image of modality \(i\) to a low-dimensional latent vector \(v_{i}\); and 3) a synthesizer which first decodes the latent vector \(v_{i}\) to a fake image of modality \(i\) and then translates the image \(I^{i}_{syn}\) to an image of modality \(j\). The generator is trained in a semi-supervised manner. Our framework is designed for bi-modality medical image synthesis, but it can be easily generalized to synthesize images of more than two modalities.
In the following, we first describe our bi-modality image synthesis model architecture, followed by details of semi-supervised training.
### _Complexity Measurement for Sequential Synthesis_
In sequential synthesis, the modality that is easier to generate is synthesized first so that the difficulty of generating the other, more challenging modality can be greatly alleviated by conditioning on the prior information. How to determine the generation order so as to optimize the effectiveness of sequential synthesis is thus highly important; however, to the best of our knowledge, a method for quantitatively measuring the synthesis complexity has not been studied.
In this paper, we propose to measure the complexity using a parallel GAN based on the quadratic Wasserstein-2 distance (W2-distance) in a hierarchical feature space. That is, we first synthesize \(N\) images in each modality using a well-trained parallel GAN model; the architecture of each GAN is shown in Fig. 4. The two GANs for the two modalities share an identical network design (without weight sharing) and take a common 128-\(d\) vector drawn from a predefined Gaussian distribution as input.
To measure the complexity of synthesizing two modalities, instead of directly calculating the W2-distance in the raw data space, we follow the method in [22] and do the calculation in a hierarchical feature space based on the Inception-v3 network [31] pre-trained on the ImageNet [32] dataset, to comprehensively measure the complexity in both shallow and in-depth visual features. Specifically, we model the feature distribution using a multivariate Gaussian distribution with mean and covariance. Accordingly, the distance between the distributions of the generated images' features and real images' features is calculated as in Eq. (1). This distance reflects the generation complexity \(C\): for a modality that is easy to synthesize, the model is more likely to correctly capture the complete data distribution and yield a lower \(C\) value in Eq. (1), and vice versa.
Fig. 4: Network architecture of the GAN-based complexity measurer.
Fig. 3: Framework of our sequential semi-supervised GANs for bi-modality medical image synthesis.
\[C=\sum_{i=0}^{L}\left\{\|\mu_{g}^{i}-\mu_{r}^{i}\|_{2}^{2}+\text{Tr}\left(\Sigma_{g}^{i}+\Sigma_{r}^{i}-2\left(\Sigma_{g}^{i}\Sigma_{r}^{i}\right)^{\frac{1}{2}}\right)\right\} \tag{1}\]
where \(\mu^{i}\) and \(\Sigma^{i}\) denote the mean and covariance of a multivariate Gaussian distribution for features extracted using the \(i\)-th layer of the Inception-v3 model, \(L\) denotes the total number of layers, the subscripts \(g\) and \(r\) indicate the generated and real data distributions respectively, and Tr sums up all the diagonal elements.
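As a concrete illustration, the following Python sketch computes Eq. (1), assuming the per-layer Inception-v3 activations of the generated and real images have already been extracted into arrays of shape \(N\times D_{i}\) (the feature-extraction step itself is omitted):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_g, feat_r):
    """Frechet distance between Gaussians fitted to two (N x D) feature arrays."""
    mu_g, mu_r = feat_g.mean(axis=0), feat_r.mean(axis=0)
    cov_g = np.cov(feat_g, rowvar=False)
    cov_r = np.cov(feat_r, rowvar=False)
    covmean = sqrtm(cov_g @ cov_r)       # (Sigma_g Sigma_r)^(1/2)
    if np.iscomplexobj(covmean):         # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(np.sum((mu_g - mu_r) ** 2)
                 + np.trace(cov_g + cov_r - 2.0 * covmean))

def synthesis_complexity(layers_g, layers_r):
    """Eq. (1): sum of per-layer Frechet distances over Inception layers i = 0..L."""
    return sum(frechet_distance(fg, fr) for fg, fr in zip(layers_g, layers_r))
```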
Note that ideally the Inception-v3 model can be replaced by any CNN pretrained on a large dataset. Here we choose the Inception-v3 model because it has been widely demonstrated to be superior to the state-of-the-art methods [2, 33] in terms of accuracy, parameter count and speed, and its model pretrained on ImageNet is publicly available.
We calculate the generation complexity for ADC (\(C_{\text{ADC}}\)) and T2w (\(C_{\text{T2w}}\)) respectively based on 500 synthetic ADC images and 500 synthetic T2w images according to (1). \(C_{\text{ADC}}\) and \(C_{\text{T2w}}\) are 182.9 and 230.4 respectively, indicating a lower complexity of generating ADC images than T2w images. The result is consistent with experience. Intuitively, compared with ADC, T2w provides more detailed texture information about the anatomical structure of a prostate; thus it is more difficult for a synthesizer to correctly model the distribution of real T2w images than that of ADC images. Accordingly, the distance between the distributions of generated T2w images and real T2w images is usually greater than that between the distributions of generated ADC images and real ADC images.
### _Semi-Supervised Training of the Sequential Synthesizer_
As shown in Fig. 3, we train the sequential GAN in a semi-supervised manner, with the goal that supervised learning can explicitly encode the relationships among different modalities in the image-to-image translator and unsupervised learning can correctly model the marginal distributions of the two modalities, so that the synthetic multimodal images are visually realistic and have a large diversity. Fig. 5 shows the architecture of the encoder (\(F_{\text{enc}}\)) and the sequential synthesizer (\(F_{\text{dec}}\) and \(T\)). \(F_{\text{enc}}\) first encodes a real image of modality 1 into a 128-\(d\) latent encoding via two convolutional layers, followed by a reshape layer and two fully connected layers. Then the 128-\(d\) latent encoding is decoded into a fake image of modality 1 of size 64 \(\times\) 64 via \(F_{\text{dec}}\). After that, an autoencoder is applied as a translator \(T\) to map the fake image of modality 1 into its counterpart image of modality 2. The first two deconvolutional layers of \(F_{\text{dec}}\) and \(T\) share weights to ensure that the spatial layouts of the corresponding images of the two modalities are consistent.
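To make the weight-sharing constraint concrete, the sketch below instantiates the first two deconvolutional layers once and reuses the same module inside both \(F_{\text{dec}}\) and \(T\). It is written in PyTorch for brevity (our implementation is in TensorFlow), and all channel sizes are assumptions rather than the exact architecture of Fig. 5:

```python
import torch.nn as nn

# First two deconvolutional layers, instantiated once and reused, so that
# F_dec and the decoder half of T literally share parameters.
shared_deconv = nn.Sequential(
    nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
)

# Modality-specific tails decode the fine, modality-unique details separately.
dec_tail = nn.Sequential(nn.ConvTranspose2d(128, 1, 4, 2, 1), nn.Tanh())
trans_tail = nn.Sequential(nn.ConvTranspose2d(128, 1, 4, 2, 1), nn.Tanh())

F_dec_deconv = nn.Sequential(shared_deconv, dec_tail)   # deconv stack inside F_dec
T_deconv = nn.Sequential(shared_deconv, trans_tail)     # deconv stack inside T
```

Because `shared_deconv` is the same module object in both stacks, a gradient step through either path updates the shared parameters, which is what forces the coarse spatial layouts of the two modalities to be decoded in the same way.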
In the following, we detail the training strategies used in our study.
#### III-B1 Supervised Training:
The top part of Fig. 3 shows details of the supervised training process. We first utilize the encoder \(F_{\text{enc}}\) to obtain encodings of real images of modality 1, i.e., \(z=F_{\text{enc}}(I_{a}^{1})\), where \(I_{a}^{1}\) denotes the \(a\)-th real image of modality 1. The decoder then reconstructs a fake image \(\hat{I}_{a}^{1}=F_{\text{dec}}(z)\) based on \(z\). After that, an image-to-image translator based on an autoencoder converts \(\hat{I}_{a}^{1}\) to a fake image of modality 2 as \(\hat{I}_{a}^{2}=T(\hat{I}_{a}^{1})\). In supervised training, paired multimodal images are provided, and thus for each pair of fake images \((\hat{I}_{a}^{1},\hat{I}_{a}^{2})\) we can find the corresponding pair of real images \((I_{a}^{1},I_{a}^{2})\). Therefore, we can train the entire network, including both \(F_{\text{enc}}\) and the generator (i.e., \(F_{\text{dec}}\) and \(T\)), by minimizing the pixel-wise reconstruction loss \(L_{1}\) as:
\[L_{1}=\mathbb{E}_{I^{1},I^{2}\sim p(I^{1},I^{2})}|||I^{1}-\hat{I}||+||I^{2}- \hat{I}|| \tag{2}\]
where \(\mathbb{E}_{I^{1},I^{2}\sim p(I^{1},I^{2})}\) is the expectation over pairs (\(I^{1},I^{2}\)), sampled from the joint data distribution of real training pairs \(p(I^{1},I^{2})\), \(||x-\hat{x}||\) calculates the average of pixel-wise Manhattan distances between intensities of images \(x\) and \(\hat{x}\).
Ideally, the supervised training approach can make the generator (i.e., \(F_{\text{dec}}\) and \(T\)) efficiently capture the correct paired relationships via explicitly minimizing the \(L_{1}\) loss between synthetic and corresponding real data. However, only using supervised training approach would encounter a severe overfitting problem when the number of multimodal medical data is small. This is because the generator only'sees' a very sparse and small portion of the latent space which contains encodings of real data in the training process. Consequently, during the testing process when synthesizing multimodal data from noise vectors \(z\sim p(z)\) (where \(p(z)\) conforms a Gaussian distribution), rather than encodings, the quality of the synthesized images could be extremely poor, even though we constrain the distribution of encodings to conform the same distribution. To address this problem, we
Fig. 5: Network architecture of the encoder \(F_{\text{enc}}\) and the sequential generator (i.e., \(F_{\text{dec}}\) and \(T\)).
also guide the generator to learn the marginal distribution of each image modality based on unpaired multimodal images via unsupervised learning.
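A minimal sketch of the supervised objective in Eq. (2); the tensor arguments are assumed to be batches of real and reconstructed images of the two modalities:

```python
import torch

def l1_pair_loss(i1: torch.Tensor, i2: torch.Tensor,
                 i1_hat: torch.Tensor, i2_hat: torch.Tensor) -> torch.Tensor:
    """Eq. (2): mean pixel-wise Manhattan distance, summed over both modalities."""
    return (i1 - i1_hat).abs().mean() + (i2 - i2_hat).abs().mean()
```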
#### III-B2 Unsupervised Training:
The bottom part of Fig. 3 shows details of the unsupervised approach. Rather than training the generator using limited encodings from true images, the generator is trained using unlimited latent vectors drawn from \(z\sim p(z)\) (where \(p(z)\) conforms to a Gaussian distribution) and unpaired multimodal images from \(P(I^{1})\) and \(P(I^{2})\). We utilize two discriminators \(D^{1}\) and \(D^{2}\) to approximate the W-distances \(W^{1}\) and \(W^{2}\) between the fake and real images of the two modalities respectively, as in (3).
\[W^{x}=\max_{D^{x}}\big\{\mathbb{E}_{I^{x}\sim p_{\text{real}}(I^{x})}[D^{x}(I^{x})]-\mathbb{E}_{\hat{I}^{x}\sim p_{\text{syn}}(\hat{I}^{x})}[D^{x}(\hat{I}^{x})]-\lambda_{I^{x}}R_{I^{x}}\big\},\quad x=1,2 \tag{3}\]
where \(I^{x}\) and \(\hat{I}^{x}=F_{\text{dec}}(z)\) are real and synthetic images respectively, \(R_{I^{x}}\) is used for enforcing the 1-Lipschitz constraint of \(D^{x}\), and \(\lambda_{I^{x}}\) is the parameter for adjusting the impact of \(R_{I^{x}}\) [34]. Accordingly, the loss function for training the generator (i.e., \(F_{\text{dec}}\) and \(T\)) is:
\[L_{\text{unsup}}=W^{1}+W^{2} \tag{4}\]
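The sketch below shows how Eqs. (3)-(4) translate into concrete training losses, assuming the gradient-penalty form of the 1-Lipschitz regularizer \(R_{I^{x}}\) from [34]; the penalty weight of 10 is a common default, not a value reported here:

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    """R_{I^x} of Eq. (3): gradient penalty enforcing the 1-Lipschitz constraint."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(x).sum(), x, create_graph=True)[0]
    return lam * ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(D, real, fake):
    """Minimizing this trains D^x to realize the max in Eq. (3)."""
    return -(D(real).mean() - D(fake).mean()) + gradient_penalty(D, real, fake)

def generator_unsup_loss(D1, D2, fake1, fake2):
    """Eq. (4): the generator descends L_unsup = W^1 + W^2."""
    return -(D1(fake1).mean() + D2(fake2).mean())
```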
Unsupervised training enables the generator to focus on learning the marginal distribution of images of a single modality from unpaired multimodal images, leading to the generation of more visually realistic images of each modality. In addition, the weights of the first few layers of the decoder and the image-to-image translator are shared, enabling the high-level semantic features of the corresponding images of the two modalities to be decoded in the same way and in turn preserving the inherent, coarse spatial correlations between images of different modalities. On the other hand, the weights of the last few layers are completely independent, capturing the unique low-level detailed features of each modality.
#### III-B3 Semi-Supervised Training:
To train the synthesizer in a semi-supervised manner, we first train it in a supervised manner for one iteration by inputting a batch of paired training images (batch size 32) and minimizing the \(L_{1}\) loss defined in Eq. (2). Then we continue to train the decoder and image translator of the synthesizer in an unsupervised manner for another iteration by inputting a batch of unpaired images (batch size 32) and a 128-\(d\) random vector drawn from a Gaussian distribution and minimizing the W-distance defined in Eq. (3). We alternate the supervised and unsupervised training processes for 40K iterations. Semi-supervised training enables both the correct paired relationships via supervised training and high visual realism and diversity via unsupervised training.
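One alternation step of this semi-supervised schedule might look as follows. This is a sketch with hypothetical loader, model, and optimizer names (\(F_{\text{enc}}\), \(F_{\text{dec}}\), \(T\), \(D^{1}\), \(D^{2}\)), reusing the loss helpers sketched above:

```python
import torch

for (p1, p2), (u1, u2) in zip(paired_loader, unpaired_loader):
    # Supervised iteration on a paired batch: minimize Eq. (2).
    z = F_enc(p1)
    p1_hat = F_dec(z); p2_hat = T(p1_hat)
    opt_gen.zero_grad()
    l1_pair_loss(p1, p2, p1_hat, p2_hat).backward()
    opt_gen.step()

    # Unsupervised iteration on an unpaired batch: Eqs. (3)-(4).
    z = torch.randn(u1.size(0), 128, device=u1.device)   # 128-d Gaussian input
    f1 = F_dec(z); f2 = T(f1)
    opt_disc.zero_grad()
    (critic_loss(D1, u1, f1.detach()) + critic_loss(D2, u2, f2.detach())).backward()
    opt_disc.step()
    opt_gen.zero_grad()
    generator_unsup_loss(D1, D2, f1, f2).backward()
    opt_gen.step()
```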
## IV Experiments
### _Datasets and Applications_
In this study, we evaluate the proposed method using two medical tasks: ADC-T2w prostate image synthesis and T1w-T2w brain image synthesis.
#### IV-A1 Prostate Multi-Parametric Magnetic Resonance Imaging Dataset
The study was approved by our local institutional review board. In this study, we utilized the T2w and ADC mp-MRI sequences for the task of image synthesis. An ADC map is derived from a transverse diffusion-weighted image (DWI), which provides pathological information about tissues. The derivation of ADC from DWI can be found in [35]. The mp-MRI images are collected from two datasets: 1) a locally collected dataset including 156 patients' data pathologically validated by a 12-core systematic TRUS-guided plus targeted prostate biopsy; dataset details are listed in [36, 37]; 2) a public dataset, PROSTATEx (training) [38, 39, 40], including data of 204 MRI-targeted biopsy-proven patients. Among the 360 patients' data, 226 patients have benign prostatic hyperplasia (BPH) or indolent lesions, which are collectively referred to as non-Clinically Significant (CS) prostate cancer (PCa), and 134 patients have CS PCa. From the two datasets, a radiologist manually selected 533 original ADC-T2w images containing CS PCa and 1992 ADC-T2w images containing nonCS PCa. The dataset was further divided into the training set (483 CS from 116 distinct cancerous patients) and the test set (50 CS from 10 distinct patients that are different from those in the training set), as in [41]. We applied non-rigid registration [36] to every pair of ADC and T2w images to minimize motion-induced misalignment errors between the two modalities. Then, for each image pair, we manually cropped a rectangular region of interest of size 64 \(\times\) 64 which encompasses the prostate area.
#### IV-A2 IXI T1w-T2w Brain Dataset:
The IXI dataset [42] consists of a variety of MR images from nearly 600 normal subjects collected at several hospitals in London. To make the numbers of training and test images similar to those of the prostate dataset, we used 181 patients' data from the Hammersmith Hospital, acquired using a Philips 3T system, and extracted two to three axial-plane T1w-T2w slices from each patient, yielding 533 co-registered T1w-T2w image pairs. The dataset was further divided into a training set (161 patients with 483 T1w-T2w image pairs) and a test set (20 patients with 50 T1w-T2w image pairs). We resized each of the 533 co-registered T1w-T2w image pairs from 256 \(\times\) 256 to 128 \(\times\) 128.
For both the prostate and IXI datasets, we normalized the intensity of each image to [-1, 1].
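For reference, the per-image normalization to \([-1,1]\) can be implemented as a simple min-max rescaling; the paper does not specify the exact scaling, so this is one plausible choice:

```python
import numpy as np

def normalize_to_unit_range(img):
    """Linearly map an image's intensities to [-1, 1]."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    return 2.0 * (img - lo) / (hi - lo + 1e-8) - 1.0
```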
### _Evaluation Metrics and Implementation Details_
#### IV-B1 Evaluation Metrics
We evaluated the synthesis methods using three types of metrics: 1) Inception Score (IS) [21] and Frechet Inception Distance (FID) [22], which are two widely used metrics in the computer vision domain to assess the quality of single-modal synthetic images; 2) the classification accuracy, to assess the effectiveness of the synthetic images for training a classification model; 3) Mutual Information Distance (MID), which measures whether the correlations between corresponding synthetic images of different modalities are correct or not. In addition, we also conducted a user study to examine the quality and clinical significance of our synthetic images. In the following, we first explain the first three types of metrics. Details of the user study will be presented in Sec. IV-D2.
**Inception Score (IS):** It was proposed in [21] to simulate human observations for determining the visual quality and diversity of synthetic images. Given a set of synthetic images \(\{I_{\text{syn},1},I_{\text{syn},2},...,I_{\text{syn},N}\}\), we calculate the conditional label distribution \(p(y|I_{\text{syn},i})\) for each synthetic image \(i\) based on the Inception model [31], where \(y\) denotes the classification label of the Inception model. For each \(p(y|I_{\text{syn},i})\), we calculate its entropy to represent the image \(I_{\text{syn},i}\)'s visual quality. We also calculate the marginal distribution \(p(y)\) by averaging the conditional distributions of all synthetic samples and then compute the entropy of \(p(y)\). A high entropy of \(p(y)\) denotes a diverse distribution of \(p(y|I_{\text{syn},i})\) among all images, reflecting a large diversity of the synthetic image set. By considering both the visual quality of each individual synthetic image and the diversity of the entire set, the IS score is defined as:
\[\begin{split}\text{IS}&=\exp\left(\frac{1}{N}\sum_{i=1}^{N}\text{KL}\left(p(y|I_{\text{syn},i})\,\|\,p(y)\right)\right)\\ &=\exp\left(H(y)-\frac{1}{N}\sum_{i=1}^{N}H(y|I_{\text{syn},i})\right)\end{split} \tag{5}\]
where \(H(x)\) represents the entropy of variable \(x\). A large IS denotes a high quality of the generated images.
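In code, once the \(N\times K\) matrix of Inception posteriors \(p(y|I_{\text{syn},i})\) has been computed, Eq. (5) reduces to a few lines (averaging over the \(N\) samples follows the standard definition in [21]):

```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """Eq. (5): exponentiated mean KL between p(y|I_syn,i) and the marginal p(y)."""
    p_y = p_yx.mean(axis=0, keepdims=True)   # marginal label distribution
    kl = (p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```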
**Frechet Inception Distance (FID):** It was proposed in [22] to measure the degree of realism of synthetic images. To calculate FID, we map images into a common feature space using the Inception model and model the distributions of the synthetic and real features using two continuous multivariate Gaussian models respectively. Then the FID is computed by calculating the Frechet distance between the two Gaussian models. A small FID denotes a high degree of realism of the generated images.
**Classification Accuracy:** We desire that despite being generated from real data, synthetic images could effectively augment the size and diversity of real data. In this study, we evaluate the performance gain in the task of prostate cancer classification achieved by using synthetic images to train a clinically significant (CS) vs. non-clinically significant (nonCS) classifier. Specifically, we follow the network design of [41] for constructing our prostate CS vs. nonCS classifier. For multimodal data, we directly concatenate images of different modalities as input and apply the same network for classification. We train the classifier using 483 synthetic multimodal images and evaluate the classifier using 50 real CS images and 50 real nonCS images. All the testing data are neither used for training the classifier nor used for training the synthesizer.
**Mutual Information Distance (MID):** In this study we define the metric Mutual Information Distance (MID),
\[\text{MID}=\|\mathbb{E}_{(I_{\text{syn}}^{1},I_{\text{syn}}^{2})\sim P_{\text{syn}}}[\text{MI}_{\text{syn}}]-\mathbb{E}_{(I_{\text{real}}^{1},I_{\text{real}}^{2})\sim P_{\text{real}}}[\text{MI}_{\text{real}}]\| \tag{6}\]
which first computes the mutual information [43] \(\text{MI}_{\text{syn}}\) of synthetic image pairs \(I_{\text{syn}}^{1}\) and \(I_{\text{syn}}^{2}\) and the mutual information \(\text{MI}_{\text{real}}\) of real image pairs \(I_{\text{real}}^{1}\) and \(I_{\text{real}}^{2}\), then calculates the absolute difference between \(\text{MI}_{\text{syn}}\) and \(\text{MI}_{\text{real}}\). Even though the absolute values of \(\text{MI}_{\text{syn}}\) and \(\text{MI}_{\text{real}}\) could differ for different image modalities, \(\text{MI}_{\text{syn}}\) should be close to \(\text{MI}_{\text{real}}\) if synthetic image pairs are well aligned with each other, and vice versa. Thus, we believe MID can well reflect the correctness of the spatial relationship between synthetic images of different modalities. A small MID denotes a high quality of the synthetic multimodal images in terms of correctly encoding the true correlations between images of different modalities.
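A sketch of Eq. (6) using a histogram-based mutual-information estimator; the bin count is an assumption, and any consistent MI estimator [43] could be substituted:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of MI between two spatially aligned images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float((p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])).sum())

def mid(syn_pairs, real_pairs):
    """Eq. (6): |E[MI_syn] - E[MI_real]| over synthetic and real image pairs."""
    mi_syn = np.mean([mutual_information(a, b) for a, b in syn_pairs])
    mi_real = np.mean([mutual_information(a, b) for a, b in real_pairs])
    return abs(mi_syn - mi_real)
```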
#### IV-B2 Implementation Details:
All the models were trained using the Adam optimizer with an initial learning rate of 1e-4 and a batch size of 32. We trained each model for 40K iterations. For training the classification network, the initial learning rate is 0.01, the learning rate decays by a factor of 0.99 every 30 steps, the batch size is 64, and the optimizer is SGD with a weight decay of 1e-4. We trained the classifier for 10K iterations. Our method was implemented in Python with TensorFlow, and all the experiments were conducted on a single NVidia Titan X GPU.
To obtain a statistical result for the IS, FID and MID scores, we equally divided the 500 synthetic samples into 10 groups. We calculated the IS, FID and MID scores for each synthetic group, and then computed the mean and standard deviation of the scores. To measure the classification accuracy, we trained five classifiers using 483 synthetic positive samples and 483 real negative samples, and tested the classifiers on 50 real positive samples and 50 real negative samples.
### _Ablation Study_
In this subsection, we examine three factors which affect the performance of a GAN-based bi-modality image synthesis method: 1) parallel GANs vs. sequential GANs, 2) the order of the synthesis tasks, and 3) training strategies, i.e., supervised, unsupervised or semi-supervised. For all the following experiments, we synthesize 500 pairs of bi-modality images.
#### IV-C1 Parallel GANs vs. Sequential GANs:
In this experiment we compare the performance of two structures, i.e., parallel and sequential, for synthesizing bi-modality images
in two medical tasks, i.e., prostate ADC-T2w image pairs and T1w-T2w brain images. To eliminate the effects arising from the training strategies, for both structures we adopt the unsupervised training strategy; the two network structures are illustrated in Fig. 2(a) and (c). For the sequential GANs, we experiment with two different orders for both tasks, i.e., z-ADC-T2w and z-T2w-ADC for prostate data and z-T1w-T2w and z-T2w-T1w for brain data, where z denotes a 128-\(d\) noise vector randomly drawn from a fixed Gaussian distribution. Table I reports the results of synthetic images based on different network structures. The first four rows compare the quality of each single modality using the metrics of IS and FID. Clearly, for all metrics and modalities, sequential GANs achieve superior performance to parallel GANs. The last two rows compare the accuracy of the classifiers which are trained using synthetic images generated by different GANs and tested using real images. For all cases, the classifiers trained using synthetic images from sequential GANs outperform those based on parallel GANs.
Table II compares the performance of the two structures for synthesizing T1w-T2w brain images. Similarly, the performance of sequential GANs is consistently better for most metrics and modalities, except for T1w under the IS metric. As the IXI dataset does not provide a specific classification task (i.e., all data is from normal subjects), we omit the classification accuracy in Table II. Fig. 6 shows a set of real T1w-T2w brain images and synthetic image pairs based on a parallel GAN and sequential GANs of two different orders. In general, the sequential GAN outperforms the parallel GAN, generating images of better quality for one modality and similar quality for the other.
#### IV-C2 Complexity Measurement for Sequential Synthesis:
We further validate the effectiveness of the complexity measurement for sequential synthesis. According to the method described in Sec. III-A, the complexities of synthesizing ADC images and T2w images are 182.9 and 230.4 respectively, indicating a greater difficulty in generating T2w images than ADC images. The last two columns of Table I compare the quality of synthetic bi-modality images using different orders. Results show that, generally speaking, for most cases the generation order z-ADC-T2w achieves superior performance to the order z-T2w-ADC. These results validate two points: 1) the proposed method is effective in reflecting the complexity of a synthesis task, and 2) performing the easier synthesis task first could achieve better performance for the bi-modality image synthesis task.
Similarly, we also examine the complexities of synthesizing T1w and T2w brain images, which turn out to be very close to each other, indicating a similar difficulty in generating T1w and T2w images. By comparing the last two columns of Table II we also observe that the IS and FID of the images generated based on the two different orders are similar, which is consistent with our previous conclusions.
#### IV-C3 Comparison of Different Training Strategies:
Table III compares different training strategies on the task of synthesizing ADC-T2w prostate images. Our training set contains 483 paired ADC-T2w images with biopsy-proven prostate cancers; the size of the images is 64 \(\times\) 64. For the supervised approach, we used all the 483 paired ADC-T2w images for training. For the unsupervised method, we randomly shuffled the pairing relation of ADC and T2w images to form 483 unpaired ADC-T2w pairs for training. As for the semi-supervised method, we alternately trained the network in a supervised manner using the 483 paired ADC-T2w images and in an unsupervised manner using the 483 unpaired images. We believe the comparison is fair as the identical 483 ADC and T2w images are utilized for training in all three approaches. For each training strategy, we synthesize 500 ADC-T2w image pairs from 128-\(d\) noise vectors for the evaluation.
Fig. 6: Examples of (a) a real pair of T1w-T2w images, synthetic T1w-T2w images based on (b) parallel unsupervised GANs, (c) sequential unsupervised GANs (z-T1w-T2w), (d) sequential unsupervised GANs (z-T2w-T1w). The top row of (a)-(d) shows the T1w images and the bottom row shows the T2w images. The visualization results show that: 1) the sequential GAN outperforms the parallel GAN, generating images of better quality for one modality and similar quality for the other; 2) images of the modality synthesized later in the sequential GAN have a better visual quality than the other, i.e., T2w in (c) and T1w in (d) are visually closer to real images than their counterparts.
The first four rows report the quality of the synthetic images of each modality. The results in the fifth row are obtained by concatenating each pair of synthetic ADC-T2w images and inputting it into the network for calculating the FID score. This score reflects the distance between the joint distributions of features extracted from synthetic and real ADC-T2w images. The sixth row denotes the accuracy of the classifier trained based on the synthetic images, and the last row shows the MID score, which reflects the mutual information inconsistency between synthetic and real multimodal images. The results show that the unsupervised approach outperforms the supervised approach and that semi-supervised training achieves the best performance on all metrics among the three strategies. This is understandable, as the supervised model is prone to overfitting to the limited encodings from the real training images and in turn performs poorly when synthesizing images from unseen 128-\(d\) noise vectors. In comparison, the unsupervised model takes 128-\(d\) noise vectors as input for training, and thus it sees a much larger variety of input latent codes and in turn well learns the marginal distribution of each single modality. Semi-supervised training can capture correct spatial correlation via supervised training and learn the marginal distribution of each single modality via unsupervised training, and thus can provide better visual quality, greater diversity and more accurate spatial consistency than the other two methods. In addition, we also consider the classifier trained based on 483 real positive ADC-T2w data and 483 real negative ADC-T2w data as a baseline. The baseline classifier achieves an accuracy of 93.40 \(\pm\) 0.40%, which is very close to that achieved by our synthetic images, demonstrating a similar effectiveness of our synthetic images as real images for training a classifier.
### _Comparison With the State-of-the-Art_
#### IV-D1 Quantitative Evaluation
We compare our method with four state-of-the-art GAN-based image synthesis methods on the prostate image synthesis task, including Costa et al.'s method [11], CoGAN [17], CycleGAN [20] and pix2pix [19]. As described in Sec. II, Costa et al.'s method also employs a sequential architecture, which first utilizes a GAN to synthesize a retinal vessel tree and then converts the generated tree into a retinal image via the U-Net [44]. The synthesizer of [11] is trained in a supervised manner. In this experiment, we adopted the same order z-ADC-T2w as our method for [11], first synthesizing ADC images and then mapping them to the corresponding T2w images. CoGAN [17] is the state-of-the-art model trained in an unsupervised manner for synthesizing multi-domain images. CoGAN adopts a parallel structure that consists of two parallel GANs to concurrently synthesize images of two modalities. In CoGAN, spatial correlation between the two modalities is enforced by weight sharing between the decoders of the two GANs. CycleGAN and pix2pix are the state-of-the-art models for translating images of one modality to another modality. Both CycleGAN and pix2pix only learn to map encodings from real images of one modality to images of the other. As a result, given limited training images, which is a common scenario in the medical domain, the two methods cannot learn a dense mapping from one modality to the other and in turn usually lead to blurry synthesis results in the test phase. Compared with those four methods, our method employs semi-supervised training, which enables it to well approximate the joint distribution of the two modalities. In addition, we propose a task complexity measurement approach which can automatically determine the synthesis order to ensure optimized performance.
Results in Table IV show that our method outperforms all the comparison methods in all cases. As CycleGAN and pix2pix can only synthesize one modality based on real input images of the other modality, we take real ADC images as input and generate fake T2w images as the corresponding counterparts; Table IV therefore reports only the FID and IS results for synthetic T2w images for these two methods. Fig. 1 displays some exemplar synthetic multimodal images produced by our method and by CoGAN (which achieves the second-best performance among the comparison methods), as well as the real data. As shown in Fig. 1, the visual quality of the synthetic T2w images by CoGAN is obviously worse than ours because CoGAN ignores the differences in task complexity.
means the worst quality and 3 means the best quality).
Compared with Test I, Test II imposes higher challenges on synthetic images as it could be easier for a radiologist to distinguish real from fake by exhibiting both real and fake data in a common window for comparison. Each test (i.e., Test I and Test II) was performed by all the three radiologists. Table V shows the average results of the two tests. For Test I, the FPR results achieved by CoGAN are consistently lower than those achieved by Ours for all radiologists, denoting that the radiologists are more likely to consider our results as real images. Therefore, the synthetic images from our method are more visually realistic and of greater clinical significance than those from CoGANs.
In Test II, we average the scores for synthetic images of each category (i.e., real, Ours, CoGAN). The results show that the scores achieved by our method are 5.6% \(\sim\) 50.2% lower than those of real data, indicating that there remain differences between synthetic images and real images to radiologists. These differences are mainly due to the lower clarity of synthetic T2w images compared with real data and the ambiguity of the shape of prostate glands, peripheral zones, central zones and bladder in synthetic images. When comparing the results of Ours and CoGAN, the average score of Ours is 5.0% \(\sim\) 22.3% higher than that of CoGAN, demonstrating the superiority of our method to CoGAN for bi-modality medical image synthesis.
## V Conclusion
In this work, a sequential generative model is presented for synthesizing spatially-aligned pairs of 2D images in two modalities. The complexity of synthesizing images in each modality is assessed, and the modality with a lower complexity is synthesized first, followed by the one with a higher complexity, generated via an image-to-image translator. The sequential generative model learns the joint distribution of multimodal images via semi-supervised learning. Extensive experimental results demonstrate that our model can generate visually realistic and clinically meaningful bi-modality medical images. The synthetic images can effectively augment training data and in turn greatly improve the accuracy of the CNN-based classifier. In addition to classification, our synthetic images, which contain simulated lesions, should also be beneficial to lesion detection and segmentation tasks. There are several studies in the literature exploring weakly-supervised CNNs for prostate cancer detection and segmentation from mp-MRI images [37, 47, 48], i.e., training a CNN-based detector or segmenter using images containing image-level labels (cancerous vs. noncancerous). By augmenting positive training images using our synthetic prostate ADC-T2w images, we could make the detector or segmenter 'see' more possible cancer-related visual features and in turn improve the performance of the weakly-supervised CNN. We will investigate this in our future work. Our future work also includes extending the 2D multimodal image synthesizer to 3D, and synthesizing images of more than two modalities.
|
2310.19708 | Combining Language Models For Specialized Domains: A Colorful Approach | General purpose language models (LMs) encounter difficulties when processing
domain-specific jargon and terminology, which are frequently utilized in
specialized fields such as medicine or industrial settings. Moreover, they
often find it challenging to interpret mixed speech that blends general
language with specialized jargon. This poses a challenge for automatic speech
recognition systems operating within these specific domains. In this work, we
introduce a novel approach that integrates domain-specific or secondary LM into
general-purpose LM. This strategy involves labeling, or "coloring", each word
to indicate its association with either the general or the domain-specific LM.
We develop an optimized algorithm that enhances the beam search algorithm to
effectively handle inferences involving colored words. Our evaluations indicate
that this approach is highly effective in integrating jargon into language
tasks. Notably, our method substantially lowers the error rate for
domain-specific words without compromising performance in the general domain. | Daniel Eitan, Menachem Pirchi, Neta Glazer, Shai Meital, Gil Ayach, Gidon Krendel, Aviv Shamsian, Aviv Navon, Gil Hetz, Joseph Keshet | 2023-10-30T16:35:55Z | http://arxiv.org/abs/2310.19708v3 | # Combining Language Models for Specialized Domains: A Colorful Approach
###### Abstract
General purpose language models (LMs) encounter difficulties when processing domain-specific jargon and terminology, which are frequently utilized in specialized fields such as medicine or industrial settings. Moreover, they often find it challenging to interpret mixed speech that blends general language with specialized jargon. This poses a challenge for automatic speech recognition systems operating within these specific domains. In this work, we introduce a novel approach that integrates domain-specific or secondary LM into general-purpose LM. This strategy involves labeling, or "coloring", each word to indicate its association with either the general or the domain-specific LM. We develop an optimized algorithm that enhances the beam search algorithm to effectively handle inferences involving colored words. Our evaluations indicate that this approach is highly effective in integrating jargon into language tasks. Notably, our method substantially lowers the error rate for domain-specific words without compromising performance in the general domain.
D. Eitan, M. Pirchi, N. Glazer, S. Meital, G. Ayach, G. Krendel, A. Shamsian, A. Navon, G. Hetz, J. Keshet (aiOla Research)
Keywords: language modeling, decoding, speech recognition, jargon language model
## 1 Introduction
Specialized jargon is crucial in numerous tasks and real-world scenarios, facilitating precise and efficient communication among experts and professionals. Jargon is prevalent in sectors such as law, industry, business, and healthcare. The sentence "The patient suffered from _formication_ due to _neuropathy_ caused by _diabetes mellitus_" is an example of healthcare jargon, where the jargon terms are highlighted in italics. In the subsequent sections, we will use the term _mixed speech_ to refer to spoken expressions that combine general language with specialized jargon.
Jargon language poses significant challenges for automatic speech recognition (ASR) and natural language processing (NLP) systems, which are predominantly trained on more general language datasets. While some jargon words and terms are publicly accessible (e.g., flight control or some medical terms) and hence can be used in training or fine-tuning LMs, many jargon words are used exclusively within industrial companies. As a result, they are not available for training in the widely used general-purpose language models.
One common approach to address this challenge involves interpolating between multiple LMs, each specializing in a different domain and estimated on a distinct corpus [1, 2, 3, 4]. However, this approach suffers from three limitations. First, it often relies on fixed interpolation coefficients, which may be sub-optimal [5, 6, 7]. Second, it does not address mixed speech within a sentence, resulting in low probability scores for such sentences from all LMs. Third, utilizing the average outputs of several LMs can lead to an over-smoothing effect [2].
The problem of incorporating several LMs also relates to topic-model LMs [8, 9, 10]. A topic-model LM is a mixture model where each mixture component is represented by a topic-dependent word probability, and each mixture weight corresponds to a topic proportion probability. These methods assume an underlying parametric distribution and are often implemented using unigram word probabilities [10].
In this study, we introduce a novel method for integrating multiple LMs by dynamically switching between general and jargon-specific models, mitigating the limitations of previous interpolation methods. We demonstrate the effectiveness of our approach in the realm of ASR, which has seen remarkable advancements in recent years. This development can be attributed to the emergence of self-supervised representation techniques [11, 12]. However, precise language modeling is crucial for achieving high accuracy rates in ASR systems.
We describe an efficient procedure for implementing our method, based on the beam search algorithm. As our approach requires only minor changes to the original algorithm, it is easy to integrate into existing systems. Our approach provides a practical solution for handling domain-specific jargon in speech recognition, with important implications for a wide range of ASR applications that rely on the accurate transcription of specialized terminology. Importantly, our approach can easily extend to other NLP tasks, such as machine translation (MT) [13, 14] and text generation [15].
This paper makes the following contributions: (i) it introduces a novel approach for incorporating general and domain-specific LMs; (ii) it develops an efficient and scalable algorithm, grounded in beam search techniques, for approximating the proposed method; and (iii) it undertakes a set of experiments to empirically validate the efficacy of our approach.
## 2 Method
A language model is defined as the probability of a sequence of words. Denote the sequence of words by \(\mathbf{y}=(y_{1},\ldots,y_{T})\). Each word \(y_{t}\) is associated with one or more of the \(C\) vocabularies \(\mathcal{V}^{1},\ldots,\mathcal{V}^{C}\). For simplicity of presentation, we will discuss our solution for combining two LMs, namely a general and a jargon model. We denote the input sequence by \(\mathbf{x}=(x_{1},\ldots,x_{L})\). This sequence can be a speech utterance for ASR [11, 12], a sequence of word tokens for (conditional) language generation [15], or a sequence of words from a different language in the case of MT [13]. Let \(\mathcal{Y}(\mathbf{x})\) denote the set of all valid output sequences for input \(\mathbf{x}\). Given an input \(\mathbf{x}\), our goal is to obtain the maximum a posteriori (MAP) estimate,
\[\mathbf{\hat{y}}=\arg\max_{\mathbf{y}\in\mathcal{Y}(\mathbf{x})}\mathrm{P}( \mathbf{y}\mid\mathbf{x}) \tag{1}\]
In this work, we focus on additive scoring rules as commonly used in MAP decoding of neural sequence-to-sequence models. Concretely, at each position \(t\) we wish to obtain,
\[\hat{y}_{t}=\arg\max_{y_{t}}\mathrm{P}(y_{t}\mid\mathbf{y}_{<t},\mathbf{x}) \tag{2}\]
where \(\mathbf{y}_{<t}=(y_{1},\ldots,y_{t-1})\).
We introduce an additional sequence \(\mathbf{c}=(c_{1},...,c_{T})\) where \(c_{t}\in[C]\), that controls the source from which \(y_{t}\) is derived, i.e., \(c_{t}=j\) if \(y_{t}\in\mathcal{V}^{j}\). We refer to \(\mathbf{c}\) as the coloring variable. Our approach modifies Eq. (1) to account for the additional coloring variable and jointly optimize \(\mathbf{y},\mathbf{c}\). Assuming \(c_{t}\) is independent of the colored history \((\mathbf{y},\mathbf{c})_{<t}=((y_{1},c_{1}),\ldots,(y_{t-1},c_{t-1}))\) we write,
\[\begin{split}(\hat{\mathbf{y}},\hat{\mathbf{c}})& =\arg\max_{\mathbf{y},\mathbf{c}}\prod_{t}\mathrm{P}(y_{t},c_{t} \mid(\mathbf{y},\mathbf{c})_{<t},\mathbf{x})\\ &=\arg\max_{\mathbf{y},\mathbf{c}}\prod_{t}\mathrm{P}(c_{t}\mid( \mathbf{y},\mathbf{c})_{<t})\mathrm{P}(y_{t}\mid\mathbf{c}_{\leq t},\mathbf{y }_{<t},\mathbf{x})\\ &=\arg\max_{\mathbf{y},\mathbf{c}}\prod_{t}\mathrm{P}(c_{t}) \mathrm{P}(y_{t}\mid\mathbf{c}_{\leq t},\mathbf{y}_{<t},\mathbf{x})\;,\end{split} \tag{3}\]
where the last equality follows from our independence assumption. For each time \(t\), our method optimizes for the upcoming word and its source lexicon, conditioned on the "colored" history. In the experiments, we fixed coloring probabilities \(\mathrm{P}(c_{t}=j)=\mathrm{P}(c=j)=1/C\) for each \(j\in[C]\). We choose to use a uniform distribution for the colors as a reasonable assumption, given the scarcity of mixed speech texts.
To understand the advantages of our approach, consider the challenge of integrating a jargon LM into a more general LM. Since the general LM is optimized for standard language and the jargon model is optimized for domain-specific language, any interpolation method will fall short on mixed speech. Our approach, in contrast, dynamically switches between the two LMs, allowing for an accurate estimation of the likelihood of mixed sentences. Additionally, our method assigns the source lexicon for each word in the decoded text; this supplementary information can be advantageous for downstream NLP tasks.
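To make the scoring rule concrete, the following is a minimal sketch of evaluating one colored hypothesis under Eq. (3), assuming two word-level LMs that expose a `log_prob(word, history)` interface. The interface and all names are illustrative assumptions rather than the paper's implementation, and the colored history is simplified to the plain word history.

```python
import math

def colored_log_score(words, colors, lms, p_color=None):
    """Score a candidate (y, c) pair: sum_t [log P(c_t) + log P_{c_t}(y_t | y_<t)]."""
    C = len(lms)
    p_color = p_color or [1.0 / C] * C  # uniform prior P(c), as assumed in the paper
    score = 0.0
    for t, (word, color) in enumerate(zip(words, colors)):
        score += math.log(p_color[color])              # log P(c_t)
        score += lms[color].log_prob(word, words[:t])  # LM selected by the color
    return score
```

At decoding time, the arg max over all \((\mathbf{y},\mathbf{c})\) pairs is then approximated with the beam search procedure of Section 3.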
## 3 Implementation Details
In this section, we describe an approach that utilizes the beam search algorithm and scales efficiently with the number of lexicons, denoted by \(C\).
Solving the decoding problem in Eq. (3) by exact exhaustive search is generally intractable. Therefore, a beam-search decoding approximation is often utilized [16, 17].
In our setting, the input for the beam search process is the acoustic model's output, i.e., the probability scores (logits) for each token (characters in our case) at each time step. To accommodate the different lexicon sources, we identify each token score with the corresponding tokens in all lexicons. During the beam search, each word's first letter can come from either lexicon; once the first letter has been set, only letters from the same lexicon can be appended, thus alleviating the computational cost.
We now provide a detailed description of the implementation details and modifications to the beam search algorithm.
Figure 1: Illustration of the probability calculation in language models for mixed speech using different approaches: (A) static interpolation, and (B) dynamic interpolation — both methods facilitate interpolation between LMs. Conversely, in approach (C), which is our proposed method, each LM is responsible for handling either the general or the jargon segment exclusively.
For simplicity and ease of exposition, we consider having two lexicons, a general and a jargon-specific one, denoted \(\mathcal{V}^{G},\mathcal{V}^{J}\), respectively. Our method, however, trivially extends to multiple lexicons. We denote the character set by \(\mathcal{A}\), of size \(|\mathcal{A}|=K\). To achieve word coloring and identify jargon terms, we modify the character set of the jargon lexicon. We denote the jargon character set \(\mathcal{A}^{\prime}\), and identify each \(\ell^{\prime}\in\mathcal{A}^{\prime}\) with \(\ell\in\mathcal{A}\) in a one-to-one manner by coloring each character in \(\mathcal{A}\). We further identify each character \(\ell\in\mathcal{A}\) with an integer \(i_{\ell}\in\{1,2,\dots,K\}\) such that \(i_{\ell}=i_{\ell^{\prime}}\). The importance of having \(\mathcal{A}\) and \(\mathcal{A}^{\prime}\) as disjoint sets lies in the fact that the n-grams of the models are then also necessarily disjoint. This makes it straightforward to merge the original n-gram models simply by appending the n-grams. We let \(\varnothing\) denote the empty sequence, and \(S=S(\mathbf{x})\in\mathbb{R}^{L\times(K+1)}\) the output from the acoustic model (\(K\) characters and one blank symbol, denoted \(-\)). \(S\) is arranged such that the \(i_{\ell}\)th column corresponds to \(\ell\in\mathcal{A}\). Furthermore, let \(\mathrm{P}^{-}\), \(\mathrm{P}^{+}\) and \(\mathrm{P}^{\text{tot}}\) denote the _blank_, _non-blank_ and _total_ probabilities, with \(\mathrm{P}^{\text{tot}}=\mathrm{P}^{-}+\mathrm{P}^{+}\). We denote by \(\tilde{\mathbf{y}}\) the joint sequence \((\mathbf{y},\mathbf{c})\), and let \(\circ\) denote the concatenation operation. Last, we denote by \(y_{-1}\) the last character of the sequence \(\mathbf{y}\).
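As an illustration of the coloring step, one simple way to realize the one-to-one map from \(\mathcal{A}\) to \(\mathcal{A}^{\prime}\) is to shift each character into an unused Unicode range; the alphabet and offset below are our assumptions for the sketch, not the paper's choices.

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz' "  # example character set A of size K
PUA = 0xE000  # Unicode Private Use Area; an arbitrary injective choice

def color_char(ch: str) -> str:
    """Map a character in A to its disjoint twin in A' (one-to-one, i_l == i_l')."""
    return chr(PUA + ALPHABET.index(ch))

def color_word(word: str) -> str:
    return "".join(color_char(ch) for ch in word)

# Because A and A' are disjoint, the general and jargon n-gram models share no
# n-grams, so merging them reduces to appending the recolored jargon n-grams
# to the general model's n-grams.
```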
We define three functions: \(GetNextChars\), \(ScoreBeam\) and \(GetBestBeams\). The function \(GetNextChars(\tilde{\mathbf{y}})\) returns a set of possible characters given \(\tilde{\mathbf{y}}=(\mathbf{y},\mathbf{c})\). At each time \(t\), it returns the characters corresponding to lexicon \(c_{t-1}\) if the current beam ends with a sub-word; otherwise, it returns all possible characters. The function \(ScoreBeam(\tilde{\mathbf{y}},\ell,c)\) returns the probability of seeing \((\ell,c)\) as an extension of the beam \(\tilde{\mathbf{y}}\). Using \(ScoreBeam\), we define \(\mathrm{P}^{\text{text}}(\tilde{\mathbf{y}}^{\prime})=ScoreBeam(\tilde{\mathbf{y}},\ell,c)\) for \(\tilde{\mathbf{y}}^{\prime}=(\mathbf{y}\circ\ell,\mathbf{c}\circ c)\). Finally, the function \(GetBestBeams(B,M)\) returns the top \(M\) beams from \(B\) according to \(\mathrm{P}^{\text{tot}}\cdot\mathrm{P}^{\text{text}}\).
Alg. 1 describes our modification to the standard CTC beam search algorithm, with the modifications marked in _green_. Our method requires only slight adjustments to the original algorithm. Furthermore, the number of beams in our approach remains the same as in the original CTC beam search, since \(|GetNextChars(\tilde{y})|=K\) for each sub-word. As a result, our approach scales easily with the number of lexicons \(C\).
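A sketch of the \(GetNextChars\) logic, as we reconstruct it from the description above (the word-boundary test and all names are our simplifications, not the paper's code):

```python
GENERAL, JARGON = 0, 1

def get_next_chars(prefix: str, last_color: int,
                   chars_general: set, chars_jargon: set) -> set:
    """Mid-word, a beam may only be extended with characters from the lexicon
    that started the current word; at a word boundary, a new word may start in
    either lexicon, so both character sets are offered (the word's color is
    fixed by its first character)."""
    at_boundary = prefix == "" or prefix.endswith(" ")
    if at_boundary:
        return chars_general | chars_jargon
    return chars_general if last_color == GENERAL else chars_jargon
```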
## 4 Experiments
In this section, we compare our method with natural baselines for combining language models. In all experiments, we used two lexicons: a general LM and a domain-specific (secondary) LM. We denote the probability density functions under the two LMs as \(\mathrm{P}_{G}\) and \(\mathrm{P}_{J}\).
**Datasets.** We assess our approach using four datasets: (i) _Industrial English_: \(\sim\!2.5\) hours, \(267\) audio files, featuring Australian English utterances on mechanical equipment conditions (jargon) and inspector discussions (general) with machinery background noise. (ii) _Industrial Thai_: \(\sim\!\!1\) hour, \(90\) Thai audio files, similar to the Industrial English dataset. (iii) _Medical_[18]: \(\sim\!\!55\) hours of simulated patient-physician interviews. We segmented recordings into 30-second audio files, using \(\sim\!\!1.5\) hours for our experiment (0.5 hours for validation, 1 hour for testing). (iv) _Medical_[19]: \(1,700\) brief doctor-patient conversations. We extracted 400 sentences for medical terms, created a 3-gram LM, and augmented the dataset with 401 audio files (totaling \(80\) minutes). For evaluation, we split it into a validation set (149 files, \(\sim\!\!25\) minutes) and a test set (252 files, \(\sim\!\!55\) minutes).
**Comparison Methods.** We evaluate several baseline approaches for combining language models: (i) _Linear Interpolation_: Combines language models using \(\lambda\mathrm{P}_{J}+(1-\lambda)\mathrm{P}_{G}\) for \(\lambda\in[0,1]\). (ii) _Log-Linear Interpolation_[1]: Interpolates log probabilities: \(\mathrm{P}_{J}^{\lambda}\mathrm{P}_{G}^{(1-\lambda)}\). (iii) _Bin Estimation Method_[2]: Maps word probabilities from different LMs into a single probability by binning the space and calibrating the output. (iv) _Bayes Approach_[20]: A dynamic interpolation method based on Bayes' theorem, assigning interpolation weights for each word. (v) _General LM_: Uses only the general LM. (vi) _Jargon LM_: Uses only the jargon LM. (vii) _Ours_: Our proposed approach as described in Section 2.
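For reference, the two static interpolation baselines reduce to a few lines over next-word distributions \(\mathrm{P}_{G}\) and \(\mathrm{P}_{J}\) on a shared vocabulary; this sketch assumes both are given as NumPy arrays.

```python
import numpy as np

def linear_interp(p_g: np.ndarray, p_j: np.ndarray, lam: float) -> np.ndarray:
    """Linear interpolation: lam * P_J + (1 - lam) * P_G (already normalized)."""
    return lam * p_j + (1.0 - lam) * p_g

def log_linear_interp(p_g: np.ndarray, p_j: np.ndarray, lam: float) -> np.ndarray:
    """Log-linear interpolation: P_J^lam * P_G^(1 - lam), renormalized."""
    p = (p_j ** lam) * (p_g ** (1.0 - lam))
    return p / p.sum()
```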
**Hyperparameter Search (HP).** We performed grid searches on the validation splits to optimize hyperparameters. For all methods, we explored \(\alpha\) and \(\beta\) in \(\{0.5,0.75,1.0,1.25,1.5\}\), representing the language model contribution and text length scaling, as well as unknown-word penalties in \(\{-10,-50\}\) for each LM. For linear and log-linear interpolation, we tuned \(\lambda\in\{0.25,0.5,0.75\}\). For the bin estimation method [2], we considered the number of bins in \(\{53,100\}\) (where \(53\) was optimal [2]). In our method, we explored an unknown sub-word penalty in \(\{-7,-5,-3,-1,0\}\).
### Incorporating Domain-Specific Jargon
Here we evaluate our method on the task of incorporating domain-specific jargon into general LMs. For (i) the Industrial EN dataset, (ii) the medical dataset [18], and (iii) the medical dataset [19], we employed a pre-trained XLSR-Wav2Vec 2.0 [21] based model fine-tuned using the Common Voice dataset [22]1. We use a \(3\)-gram English LM2 as the general LM, and constructed domain-specific \(3\)-gram LMs using held-out data. For (iv) the Industrial TH dataset, containing Thai sentences with specialized terms, we used a pre-trained Thai XLSR-Wav2Vec 2.0 based model [23] fine-tuned using the Thai Common Voice corpus V8 [22]. We trained two \(3\)-gram LMs: the first on standard Thai, and the second containing factory terms.
Footnote 1: [https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english).
Footnote 2: [https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tlt-jarvis/models/speechtotext_english_lm](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tlt-jarvis/models/speechtotext_english_lm).
The results are presented in Table 1. Our method achieves a significant reduction in WER and CER compared with the natural baselines for combining language models. The error reduction is especially notable on the Industrial dataset, which was recorded in a challenging acoustic environment.
### Qualitative Examples
To gain insight into the successes and failures of our method, this section presents a collection of qualitative examples that showcase the efficacy of our proposed approach. The examples, presented in Table 2, are selected from the medical dataset proposed in [19]. The first two examples demonstrate how our method accurately produces the ground truth (GT) transcription, while the linear interpolation model fails to produce some of the domain-specific terms; we highlight these terms in the GT column for clarity. The last example presents a failure case: both our approach and the linear interpolation method fail to produce the domain-specific word _zyprexa_.
## 5 Conclusions
In this work, we introduce a novel method for combining language models, demonstrating its effectiveness in ASR with domain-specific jargon. We present an efficient algorithm based on the common beam search, requiring only minor modifications to integrate into ASR systems. Our approach offers a practical solution for accurate transcription of specialized terminology in diverse ASR applications.
|
2305.02820 | Semantic Space Grounded Weighted Decoding for Multi-Attribute
Controllable Dialogue Generation | Controlling chatbot utterance generation with multiple attributes such as
personalities, emotions and dialogue acts is a practically useful but
under-studied problem. We propose a novel framework called DASC that possesses
strong controllability with a weighted decoding paradigm, while improving
generation quality with the grounding in an attribute semantics space.
Generation with multiple attributes is then intuitively implemented with an
interpolation of multiple attribute embeddings, which results in substantial
reduction in the model sizes. Experiments show that DASC can achieve high
control accuracy in generation task with the simultaneous control of 3 aspects
while also producing interesting and reasonably sensible responses, even in an
out-of-distribution robustness test. | Zhiling Zhang, Mengyue Wu, Kenny Q. Zhu | 2023-05-04T13:35:27Z | http://arxiv.org/abs/2305.02820v2 | # Semantic Space Grounded Weighted Decoding for Multi-Attribute Controllable Dialogue Generation
###### Abstract
Controlling chatbot utterance generation with multiple attributes such as personalities, emotions and dialogue acts is a practically useful but under-studied problem. We propose a novel controllable generation framework called DASC that achieves strong controllability with the weighted decoding paradigm, while improving generation quality through grounding in an attribute semantics space. Generation with multiple attributes is then intuitively implemented with an interpolation of multiple attribute embeddings. Experiments show that DASC can achieve state-of-the-art control accuracy in a 3-aspect controllable generation task while also producing interesting and reasonably sensible responses, even in an out-of-distribution robustness test. Visualization of the meaningful representations learned in the attribute semantic space also supports its effectiveness.1
Footnote 1: Code and data at [https://github.com/blmoistawinde/DASC](https://github.com/blmoistawinde/DASC).
## 1 Introduction
Personalized dialogue systems are promising NLP applications for human-computer interaction and emotional companionship. We would expect such systems to have personalities (gender, age, hobbies, etc.), exhibit emotions, take dialogue acts, and even adopt sophisticated strategies Liu et al. (2021), which necessitates research efforts on _Controllable Text Generation_. Specifically, the simultaneous control of _multiple attributes_ as discussed above is vital to handle the complexity of these chatbots, and can significantly improve their expressiveness, human-likeness, and explainability. However, despite great recent progress in controllable text generation Dathathri et al. (2020); Keskar et al. (2019); Krause et al. (2021), multi-attribute control is still under-explored.
In this paper, we explore the novel task of _Multi-Attribute Controllable Dialogue Generation_. The numerous combinations of attributes make the available data for each setting scarce, which poses a great challenge for this task. We thus adopt the paradigm of _Weighted Decoding_ as our starting point, which has achieved great success in previous single-attribute control tasks Arora et al. (2022). Weighted decoding methods learn a token-level attribute classifier, which predicts the probability that the text conveys the desired attribute given the generation of each token in the vocabulary. The predicted probabilities are then used to re-weight the token generation during decoding to induce the attribute. Although these methods have shown strong controllability in single-attribute control, they run into issues when extended to the multi-attribute case by multiplying several attribute predictions from multiple classifiers. Extra parameters proportional to the large vocabulary size \(|V|\) are introduced, and they grow severalfold with the number of attributes. The resulting large number of parameters not only makes the model inefficient, but also harms generation quality: the model is prone to overfitting, since the data for each attribute combination are usually small, which increases the risk of degeneration Holtzman et al. (2020).
To overcome these limitations, we propose the **D**ialogue **A**ttribute **S**pace **C**ontroller (**DASC**). We establish an attribute semantic space where each token in the vocabulary is projected into the space through an _Attribute Token Embedding_ shared across attributes. The language model's hidden states are also converted to _Attribute Context Embeddings_ in the space through attribute-specific layers. We then assign higher weights to the neighboring tokens of the current context embedding in the attribute space during decoding, implemented with a dot-product-based attribute classifier. Consequently, DASC inherits the strong controllability
of weighted decoding, while also achieving a natural solution to multi-attribute control via the interpolation of multiple attribute embeddings in the space. Moreover, the shared attribute token embedding alleviates over-parameterization and improves the robustness of the model.
We experiment on an attribute-rich open-domain dialogue dataset Xu et al. (2022) for the simultaneous control of 3 attribute aspects: Gender Style (male, female, neutral), Emotion (8 classes), and a simple division of Dialogue Act (question vs. non-question). As exemplified in Figure 1, compared to previous methods, DASC achieves strong controllability while avoiding low-quality generations in the compositional controlling task. Visualization of the attribute token embeddings exhibits specific patterns that benefit control, compared to the general LM token embeddings. A robustness test that requires the models to generate responses to the same context with each of the 8 emotions further shows that the controllability of DASC generalizes to this out-of-distribution setting.
Our contributions are as follows:
1. We propose semantic space grounded weighted decoding for controllable dialogue generation, which can intuitively solve the multi-attribute control task with the interpolation of embeddings in the space.
2. Experiments show that the proposed method can achieve state-of-the-art accuracy on the simultaneous control of 3 aspects while also preserving competitive generation quality in both conventional test settings and out-of-distribution robustness tests.
3. We also demonstrate more potential applications like blending two emotions or adopting emotional support strategies in the generated response.
## 2 Related Work
Controllable generation has gained wide research interest recently. PPLM Dathathri et al. (2020) proposed a plug-and-play framework to control the generation with an extra attribute classifier, but it requires costly gradient updates. Later research progress can be roughly divided into 3 categories. _Reranking_ methods leverage attribute classifiers either to simply rank the full generation candidates (adopted in many mature systems like LaMDA Thoppilan et al. (2022)) or to rank partial generations to guide future outputs Yang and Klein (2021). _Integrated_ methods integrate attribute-related trainable parameters into the generation model for fine-tuning, such as discrete control codes Keskar et al. (2019) or continuous prompt prefixes Qian et al. (2022). _Weighted Decoding_ methods leverage token-level attribute classifiers to guide each decoding step. For example, Krause et al. (2021) and Liu et al. (2021) utilized one or two additional class-conditional language models to provide the attribute discrimination. Director Arora et al. (2022) integrates the attribute classifier as simple linear layers on top of LM hidden states, exhibiting efficient and effective performance in single-attribute control.
Multi-attribute controllable generation is relatively under-explored. Lin and Riedl (2021) proposed to extend weighted decoding to the multi-attribute case with the simple product of multiple attribute-conditional language models. Gu et al. (2022) proposed a VAE-based method combined with an intersection-searching algorithm for multi-aspect controllable generation, but their method cannot be directly applied to conditional generation tasks like dialogue generation.
Controllable generation techniques are especially important in dialogue systems, and the
Figure 1: An example of multi-attribute controllable dialogue generation. The baseline system doesn’t leverage any control signal and produced a dull response, while the previous method for controlling generated repetitive and illogical text. DASC successfully gives a response both fluent and attributed.
applications of several controlling aspects have been studied. For example, we may condition the generation on dialogue acts for a genuine reflection of the desired behavior (Wen et al., 2015), add emotions to the response to enhance the expressiveness of the bot (Zhou et al., 2018), and also impose personal profiles like gender (Su et al., 2020) and persona (Zhang et al., 2018) to establish a human-like companion. Most works add control on a single aspect separately, and some pioneering works have started to explore the composition of these attribute controls (Chen et al., 2022).
## 3 Method
In this section, we first define our task and the weighted decoding paradigm for controllable generation as background, and then introduce the proposed **DASC** framework.
### Task Definition
Given a dialogue **context** \(C\) and **attributes** \(A=(a_{1},a_{2},...,a_{K})\), _controllable dialogue generation_ aims to generate a **response** \(R=(r_{1},r_{2},...,r_{N})\) that both coheres with the context and conveys the attributes. 2 The attributes can be grouped into multiple **aspects**, where in this work we focus on _Gender Style, Emotion, and Dialogue Act_. An aspect covers multiple related attributes, such as _happiness_ and _sadness_ for the Emotion aspect. Each attribute can take three values: 1 means to use the attribute, 0 means not to use it, and \(\phi\) means the attribute has no effect on the response.
Footnote 2: In our work, we make a pre-assumption that attributes are provided by a dialogue policy, and do not include end-to-end scenarios.
### Weighted Decoding for Controllable Generation
Generally, dialogue generation can be formulated with the standard conditional language modeling objective:
\[L_{CLM}=-\sum_{n=1}^{N}logP(r_{n}|r_{1:n-1},C) \tag{1}\]
We can use a transformer-based encoder-decoder architecture like BART (Lewis et al., 2020) to model this, where the encoder encodes the context into hidden states as condition for the decoder to generate the response. We will omit \(C\) below for brevity.
In controllable dialogue generation, we additionally introduce attributes in the generation condition. Suppose we are generating with a single attribute \(a\), then the objective is to model \(P(r_{n}|r_{1:n-1},a)\). Using Bayes' rule, this can be converted to:
\[P(r_{n}|r_{1:n-1},a)\propto P(r_{n}|r_{1:n-1})P(a|r_{1:n-1},r_{n})^{\alpha} \tag{2}\]
where \(\alpha\) is a hyperparameter that can adjust the _control strength_. This means that we can decompose the generation probability into the standard CLM probability weighted by the prediction of another token-wise attribute classifier during decoding. Methods established on such decomposition are thus called **Weighted Decoding** models.
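In practice, a single weighted-decoding step can be sketched in log space as follows; the tensor shapes and the greedy choice are illustrative assumptions, not a specific system's code.

```python
import torch

def weighted_decoding_step(lm_logits: torch.Tensor,
                           attr_logits: torch.Tensor,
                           alpha: float = 1.0) -> torch.Tensor:
    """One step of Eq. (2) in log space. lm_logits and attr_logits are
    [batch, |V|] scores for the next token; the classifier's logits are
    scaled by the control strength alpha and added to the LM scores."""
    scores = torch.log_softmax(lm_logits, dim=-1) + alpha * attr_logits
    return scores.argmax(dim=-1)  # greedy choice; top-p sampling applies equally
```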
Director (Arora et al., 2022), a representative weighted decoding method, implements the attribute classifier as a linear layer on top of the decoder hidden states. At each step, it performs a binary classification of whether the full generation will reflect the desired attribute (e.g., happy or not). Tokens in a training sentence are supervised with the attribute label of the whole sentence using the Binary Cross Entropy (BCE) loss. We denote this token-level loss as \(L_{clf-t}\).
\[\begin{split} L_{clf-t}&=BCE(P(a|r_{1:n-1},r_{n})) \\ &=BCE(\sigma([W_{a}h_{n}]_{r_{n}}))\end{split} \tag{3}\]
where \(h_{n}\in\mathbb{R}^{d}\) is the hidden state for the \(n\)-th token, \(W_{a}\in\mathbb{R}^{|V|\times d}\) is the learnable weight matrix for attribute prediction given the generation of each token in the vocabulary, and \([*]_{r_{n}}\) denotes index selection with the next token \(r_{n}\). Note that this only gathers the attribute logits for \(r_{n}\), according to the ground-truth response; the other \(|V|-1\) tokens in the vocabulary \(V\) have no label and receive no training signal. Therefore, an extra regularizer is used to push the predictions on these tokens towards 0.5 with an MSE loss.
When dealing with multi-attribute control, we can extend Eq. (2) by introducing the product of multiple attribute classifiers, assuming the conditional independence of attributes:
\[P(r_{n}|r_{1:n-1},a)\propto P(r_{n}|r_{1:n-1})\prod_{\begin{subarray}{c}k=1 \\ a_{k}\neq\phi\end{subarray}}^{K}P(a_{k}|r_{1:n})^{\alpha} \tag{4}\]
The product of probabilities is usually implemented with the summation of logits:
\[\delta(r_{n}|r_{1:n-1},a)=\delta(r_{n}|r_{1:n-1})+\alpha\sum_{ \begin{subarray}{c}k=1\\ a_{k}\neq\phi\end{subarray}}^{K}\delta(a_{k}|r_{1:n}) \tag{5}\]
We may implement such an extension with multiple forward passes through an attribute-conditioned language model Lin and Riedl (2021) or one pass through multiple models Liu et al. (2021). Here we introduce a relatively simple and efficient implementation in a similar form to Director, where we add \(K\) linear classifier heads to predict the multiple attributes. We refer to this simple extension as M-Director, or just Director for simplicity. Note that such a model still introduces \([d,|V|,K]\) extra parameters. Given that \(|V|\) is usually as large as tens of thousands, this model has numerous parameters, which makes it inefficient to train and infer with, and also prone to overfitting.
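A minimal sketch of these \(K\) classifier heads (the class and all names are ours, for illustration) makes the parameter cost explicit:

```python
import torch
import torch.nn as nn

class MDirectorHeads(nn.Module):
    """K linear heads over the decoder hidden state, each of shape [|V|, d];
    their logits for the active (non-phi) attributes are summed into the LM
    scores as in Eq. (5). This is the source of the [d, |V|, K] extra
    parameters discussed in the text."""
    def __init__(self, d: int, vocab_size: int, num_attrs: int):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(d, vocab_size) for _ in range(num_attrs)])

    def forward(self, h: torch.Tensor, active: list) -> torch.Tensor:
        # h: [batch, d]; active: indices of attributes set to 1
        return sum(self.heads[k](h) for k in active)  # [batch, |V|]
```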
### Dialogue Attribute Space Controller
We hypothesize that such typical weighted decoding methods may not be the most effective way to learn token-level attribute semantics, especially in multi-attribute cases. The learning objective is imposed at the discrete token level: only the single token in the target sentence receives a distinctive training signal, while all other tokens are regularized equally. This is often unreasonable, as some tokens similar to the target token should also have high probabilities given the attribute, while tokens different from it should be less likely to be generated. For example, in the _happy_ response "Nice to meet you", "glad" would be a reasonable alternative for the first word with the same emotion, while "sad" would not, yet their attribute labels in training are both 0.5.
We can implement this intuition in a high-dimensional space. On one hand, each token has an embedding that encodes its attribute semantics (_Attribute Token Embedding_, \(ATEMB\)). On the other hand, the hidden states from the LM (\(h_{n}\)) are projected into the same space with attribute-specific linear layers (\(W^{k}\in\mathbb{R}^{p\times d}\)) to get the _Attribute Context Embedding_. Formally, \(\hat{h}_{n}^{k}=W^{k}h_{n}\). The representations in this space then convey the attribute semantics, and we call it the _Attribute Semantic Space_.
To leverage this latent space for weighted decoding, for each \(\hat{h}_{n}^{k}\), we find its attribute-related tokens according to embedding similarity in the space, and assign them higher weights during decoding. Specifically, it is accomplished with a dot-product based token-level attribute classifier.
\[\delta(a_{k}|r_{1:n})=\hat{h}_{n}^{k}\cdot ATEMB(r_{n}) \tag{6}\]
In this case, when a token is trained with high probability for a certain attribute, its neighbors in the attribute space will also have higher probabilities. This alleviates the limitation of previous weighted decoding methods and eliminates the need for regularizers on the other tokens. Further, applying this to multi-attribute weighted decoding, we get:
\[\delta(r_{n}|r_{1:n-1},a) =\delta(r_{n}|r_{1:n-1})\] \[+\alpha K(\frac{1}{K}\sum_{\begin{subarray}{c}k=1\\ a_{k}\neq\phi\end{subarray}}^{K}\hat{h}_{n}^{k})\,\cdot\,ATEMB(r_{n}) \tag{7}\]
where the parenthesized part of the second term can be interpreted as the average (equal-weight) interpolation of multiple attribute context embeddings. 3 This formulation suggests that if the attribute space is properly learned, the embedding interpolation will precisely reflect the semantics of the desired attributes, and DASC can thus realize reasonable attribute combinations.
Footnote 3: It is possible to assign different weights for each embedding in interpolation, and we leave it for future works.
To assist the learning of attribute embeddings, we introduce another linear layer on top of the attribute context embedding at each step to directly
Figure 2: Framework comparison between M-Director and DASC. M-Director uses a classifier head to conduct binary attribute hit classification for each token in the target sentence, and impose regularization for other tokens. DASC projects both LM hidden state and the target token to the attribute space, and uses their dot product for the classification of attribute hit. For each parameterized model component, we show its shape in square brackets.
predict the attributes of the complete response. This can help better align the attribute context embeddings to the corresponding region for their attributes. We denote the new sentence-level classification loss as \(L_{clf-s}\). For clarity, we give its formulation in the single-attribute case, which can be simply extended to multi-attribute scenarios by summation over all non-empty attributes.
\[\begin{split} L_{clf-s}&=BCE(P(a|r_{1:n-1}))\\ &=BCE(\sigma(v_{a}\cdot\hat{h}_{n}))\end{split} \tag{8}\]
where \(v_{a}\in\mathbb{R}^{p}\) is the learnable weight for attribute prediction. Compared with \(L_{clf-t}\) (Eq. (3)), it is a sentence-level classification task independent of \(r_{n}\), which can also be interpreted as predicting the prior probability of the attribute before deciding the next token to generate, and thus the parameters do not scale with \(|V|\). Then the final loss is:
\[\begin{split} L_{train}=L_{CLM}+\beta(L_{clf-s}+L_{clf-t})\end{split} \tag{9}\]
where \(\beta\) is a hyperparameter that controls the weight of attribute-related losses.
We name the proposed framework the Dialogue Attribute Space Controller (**DASC**). An illustration of DASC and its comparison with M-Director is shown in Figure 2. DASC introduces fewer parameters than M-Director (\([d,|V|,K]\)): since we usually set \(p\) close to \(d\) (with \(d<<|V|\)), the parameters of the attribute projections are much smaller, and when \(K>1\), the token embeddings shared across attributes save further parameters, while the parameters of the attribute predictor are almost negligible. As we will validate in Sec. 4.5, the suitable amount of parameters also helps DASC achieve a better balance between controllability and generation quality.
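To summarize the architecture, the following is a sketch of DASC's added components under assumed shapes; the module and parameter names are ours for illustration, not from the released code.

```python
import torch
import torch.nn as nn

class DASCHead(nn.Module):
    """Shared attribute token embedding ATEMB in R^{|V| x p}, one projection
    W^k in R^{p x d} per attribute, and per-attribute vectors v_a in R^p for
    the sentence-level predictor of Eq. (8)."""
    def __init__(self, d: int, p: int, vocab_size: int, num_attrs: int):
        super().__init__()
        self.atemb = nn.Embedding(vocab_size, p)  # shared across attributes
        self.proj = nn.ModuleList(
            [nn.Linear(d, p, bias=False) for _ in range(num_attrs)])  # W^k
        self.v = nn.Parameter(torch.randn(num_attrs, p))              # v_a

    def attr_logits(self, h: torch.Tensor, active: list) -> torch.Tensor:
        """Token-level attribute logits of Eqs. (6)-(7): interpolate the
        active attribute context embeddings, then dot with every token's
        attribute embedding."""
        ctx = torch.stack([self.proj[k](h) for k in active]).mean(dim=0)
        return ctx @ self.atemb.weight.T  # [batch, |V|]

    def sentence_logits(self, h: torch.Tensor, active: list) -> torch.Tensor:
        """Sentence-level attribute logits used by L_clf-s in Eq. (8)."""
        return torch.stack(
            [self.proj[k](h) @ self.v[k] for k in active], dim=-1)
```

At decoding time, `attr_logits` would be scaled by \(\alpha K\) as in Eq. (7) and added to the LM logits, and \(L_{train}\) of Eq. (9) combines the LM loss with the two classification losses.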
## 4 Experiments
In this section, we will conduct experiments to examine the following hypotheses: (1) DASC can achieve strong controllability while also preserving good generation quality in multi-attribute controllable generation. (2) Both the learned meaningful representations in the attribute semantic space, and the reasonable parameter amount benefit DASC's performance. (3) DASC can also be flexibly extended for other control tasks like the composition of multiple emotions or adopting certain strategies for emotional support.
### Experiment Settings
We conduct experiments on the _self_ split of the DuLemon dataset (Xu et al., 2022), a Chinese open-domain dialogue dataset rich in personalized content, in which we can find the various attributes we would like to control. We split the data into train/dev/test sets with 352999, 2439, and 2412 utterances, respectively. Since the original dataset does not contain annotations of control attributes, we leverage automatic methods to label it. For gender style (male, female, neutral), we use the dataset released by Su et al. (2020) to train a MacBERT classifier, which achieved accuracy=94.98%. For emotion, we follow Zhou et al. (2018) and use the NLPCC2013 and NLPCC2014 datasets (8 emotion classes) to train another MacBERT classifier, which has an accuracy of 93.96%. For the question dialogue act (question vs. non-question), we simply use a heuristic: if the sentence contains a question mark (?) we consider it a question, and otherwise a non-question. We then use these 3 methods to assign each response in the dataset the 3 aspects of attributes (13 labels in total).
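The labeling pipeline can be sketched as below; the two MacBERT classifiers are assumed to be available as callables, and we guess that both the full-width and ASCII question marks count, since the corpus is Chinese.

```python
def label_response(text, gender_clf, emotion_clf):
    """Assign the 3 aspects of control attributes to one response."""
    return {
        "gender": gender_clf(text),    # male / female / neutral
        "emotion": emotion_clf(text),  # one of the 8 emotion classes
        "question": "question" if ("?" in text or "?" in text)
                    else "non-question",
    }
```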
#### 4.1.1 Compared Methods
We compare the proposed DASC framework with representative methods from different types of controllable generation methods. We use the fnlp/bart-base-chinese Shao et al. (2021) model as the backbone for all compared methods.
**Baseline.** Simply fine-tuning the backbone on the dataset without utilizing the control attributes.
**Rerank.** Using top-\(p\) sampling (Holtzman et al., 2020) on the baseline model to produce 5 response candidates for each context, and attribute classifiers (here, the same separate models we used for auto-annotation) to rerank the candidates. Following Thoppilan et al. (2022), we use the sum of predicted probabilities in each aspect for ranking.
**CTRL.** We re-implemented Keskar et al. (2019)'s method for dialogue generation by defining 3 groups of special control codes, one per aspect, and appending the corresponding 3 attribute tokens to each dialogue context during fine-tuning.
**Director.** The multi-attribute extension of Director (Arora et al., 2022) discussed in Sec. 3.2.
All models are fine-tuned on the dataset for 6 epochs, and the decoding method is top-\(p\) sampling with \(p=0.5\). Director and DASC use control
weight \(\alpha=1\) and classifier loss weight \(\beta=0.1\). When conducting multi-aspect control for Director and DASC under the weighted-decoding paradigm (Eq. (5)), we set the variables for the desired attributes to 1 and the other variables to \(\phi\).
#### 4.1.2 Evaluation
**Automatic Evaluation.** To evaluate controllability, we use the same attribute classifiers as those used for labeling the dataset to calculate the accuracy of attributes in the generation (Acc\({}_{G}\), Acc\({}_{E}\), Acc\({}_{Q}\) for gender, emotion and question, respectively). For generation quality, we use BertScore (BScore) (Zhang et al., 2020) to evaluate the generation's similarity to the reference response, and Distinct-2 (Li et al., 2016) for diversity.
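For reference, Distinct-2 is the ratio of unique to total bigrams over the generated responses; a standard implementation (assuming the responses are pre-tokenized) is:

```python
def distinct_n(responses, n=2):
    """Distinct-n (Li et al., 2016): unique n-grams / total n-grams over a
    set of generated responses, each given as a list of tokens."""
    ngrams, total = set(), 0
    for tokens in responses:
        for i in range(len(tokens) - n + 1):
            ngrams.add(tuple(tokens[i:i + n]))
            total += 1
    return len(ngrams) / max(total, 1)
```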
**Human Judgement.** We sampled 100 contexts from the test set for human evaluation. Since the distribution of the original test set is extremely skewed, we specified a constraint for a more balanced distribution over all emotions during sampling, so as to ensure the representativeness of the evaluation (21 none, 16 sadness, 16 disgust, 16 happiness, 16 like, 5 anger, 5 surprise, 5 fear). We invited 2 volunteers who are proficient in Chinese to evaluate each generation from 3 perspectives. **Attribute Accuracy**: whether the response conveys the given attribute. **Sensibleness\({}_{(1-4)}\)**: whether the response is fluent, coherent with the context, and accords with commonsense. **Interestingness\({}_{(1-4)}\)**: whether the response is specific, novel, and can encourage a more interesting conversation continuation. The annotators went through extensive training to understand the requirements of the evaluation, and we provided guidelines in our annotation UI so that they could follow them throughout.
### Results
Automatic evaluation results are shown in Table 1. The Rerank method fails to show strong controllability, because the base model struggles to produce attributed ranking candidates without fine-tuning with the control attributes. The CTRL model leverages the attributes in fine-tuning and achieves better control accuracy and BertScore, but it does not produce more diverse responses overall. Both Director and DASC exhibit the best controllability, and DASC produces more diverse and reasonable responses according to Distinct-2 and BertScore.
We then show human judgement results in Table 2. The inter-annotator agreement for Acc\({}_{G}\), Acc\({}_{E}\) and Acc\({}_{Q}\) is 0.65, 0.55 and 0.64 in Cohen's \(\kappa\), which indicates moderate to substantial agreement. The agreement for _Interestingness_ and _Sensibleness_ is 0.48 and 0.44 in Pearson's \(r\). The evaluation of attribute accuracies is similar to the automatic results, except that the accuracy of gender decreases slightly, as human evaluators do not count neutral content that fits common gender prototypes as correctly reflecting the gender style, like soldier for male and baby-carer for female (Bolukbasi et al., 2016). The annotators also catch questions without a question mark, which explains the slight difference in Acc\({}_{Q}\).
Overall, the rankings of controllability still hold under human evaluation, with DASC performing the best. Baseline, Rerank and CTRL have slightly better _Sensibleness_ than the weighted decoding methods, which agrees with the commonly observed controllability-quality trade-off in previous literature (Dathathri et al., 2020; Yang and Klein, 2021; Qian et al., 2022). All controllable generation methods achieve a higher _Interestingness_ score than the baseline, which supports the benefits of controllable generation. DASC achieves the best _Interestingness_ given similar attribute accuracy to Director, indicating the effectiveness of the attribute semantic space, which establishes better representations of attribute semantics and a more reasonable way to compose the control attributes in weighted decoding.
| | BScore | Dist-2 | Acc\({}_{G}\) | Acc\({}_{E}\) | Acc\({}_{Q}\) |
| --- | --- | --- | --- | --- | --- |
| Baseline | 68.18 | 19.25 | 68.49 | 46.31 | 69.61 |
| Rerank | 69.23 | 19.28 | 75.46 | 54.93 | 82.42 |
| CTRL | **71.09** | 18.91 | 85.32 | 77.49 | **100.00** |
| Director | 69.54 | 21.40 | 95.81 | **86.73** | **100.00** |
| DASC | 70.42 | **21.94** | **95.85** | 86.07 | **100.00** |

Table 1: Automatic evaluation results on the DuLemon test set. The best results are in bold.
| | Acc\({}_{G}\) | Acc\({}_{E}\) | Acc\({}_{Q}\) | Interest | Sensible |
| --- | --- | --- | --- | --- | --- |
| Baseline | 0.80 | 0.55 | 0.64 | 2.04 | **3.46** |
| Rerank | 0.81 | 0.62 | 0.82 | 2.13 | 3.44 |
| CTRL | 0.85 | 0.82 | **0.97** | 2.24 | **3.46** |
| Director | 0.87 | 0.87 | 0.96 | 2.25 | 3.26 |
| DASC | **0.88** | **0.88** | **0.97** | **2.37** | 3.28 |

Table 2: Human judgement on the DuLemon test set. The best results are in bold.
### Robustness Test
In previous experiments, the control attributes provided to the model come from the reference response. Therefore, models may coincidentally hit the desired attributes when generating the most likely response to the context, without truly reliable controllability for arbitrary given attributes. Hence, we further conduct experiments to test the robustness of the controllable generation methods in out-of-distribution scenarios.
Specifically, we sampled 100 contexts from the test set and gave the models each of the 8 emotions as the generation condition, paired with the original gender and question act4. We then used greedy decoding to generate a response for each (context, attributes) pair and conducted similar automatic and human evaluations on the 800 generations.
Footnote 4: We do not change these 2 attributes as they are sometimes determined given the context.
Table 3 shows the robustness test results.5 Compared with Table 1, the emotion accuracy of Rerank and CTRL drops significantly, which shows that their controllability does not generalize. Another notable phenomenon is the abnormal _Distinct-2_ achieved by Director. We further analyze the methods' performance with human evaluation (excluding Rerank, as it fails to control the attributes). We find that Director frequently generates ungrammatical, illogical and repetitive long responses (like the second response in Figure 1). Director's loss in emotion accuracy is also larger than DASC's, indicating that it may overfit the training distribution given its large number of parameters, and thus performs worse in this out-of-distribution setting. Compared to CTRL, DASC has lower _Sensibleness_ but higher _Interestingness_, while also holding a significant advantage in diversity and controllability.
Footnote 5: BertScore is not reported here, as the model can be directed towards attributes different from the ground truth, invalidating the similarity-based metric as a proxy for generation quality.
### Space Visualization
For a clear understanding of how the proposed attribute semantic space helps controllable generation, we visualize it in 2D with t-SNE [20]. First, we visualize the attribute token embeddings of some representative attribute-related tokens and compare them with the corresponding embeddings in the original LM (Figure 3). Comparing the two figures, we can see that (1) the token embeddings from different aspects are more separable in the attribute space (see points with different colors), while tokens in the same aspect lie close together despite differences in other linguistic features like part-of-speech (e.g., 'handsome' and 'male'); (2) the token embeddings of different attributes within the same aspect are also well distinguished in the attribute space (e.g., 'male'-'female', 'love'-'miserable'). These characteristics of the learned attribute space enable DASC to control the generation of distinctive attributes and to compose attributes from different aspects.
Next, we also visualize the attribute context embeddings. Specifically, we take the responses with a certain attribute in the dev set, feed them into the model, average the attribute context embeddings over each decoder token as sentence-level representations, and pair them with the sentence-level attribute annotations for analysis. For brevity, we only show the visualization with emotion labels in Figure 4, and provide those with gender and question act labels in the Appendix. We can see that the context embeddings of sentences with different emotions are clearly separated in the space, which supports the strong controllability of DASC with multiple attributes.
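A sketch of this visualization protocol, assuming the per-token attribute context embeddings have already been extracted as NumPy arrays (one array per response):

```python
import numpy as np
from sklearn.manifold import TSNE

def project_context_embeddings(ctx_embs, perplexity=30, seed=0):
    """ctx_embs: list of [T_i, p] arrays, one per dev-set response. Each
    response's per-token embeddings are averaged into one sentence vector,
    and all sentence vectors are projected to 2D with t-SNE; the 2D points
    are then scatter-plotted with colors given by the attribute labels."""
    sent_vecs = np.stack([e.mean(axis=0) for e in ctx_embs])  # [N, p]
    tsne = TSNE(n_components=2, perplexity=perplexity, random_state=seed)
    return tsne.fit_transform(sent_vecs)                      # [N, 2]
```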
### Parameter Analysis
As analyzed in Sec. 3.3, DASC uses a relatively small number of parameters to implement weighted decoding for multi-attribute controllable generation. Here we study the effect of the number of parameters by adjusting the dimension of the attribute space \(p\), and comparing with the baseline (no extra parameters) and M-Director (a large number of extra parameters). We use BertScore to evaluate generation quality and the average control accuracy over the 3 aspects to reflect controllability.
Results are shown in Table 4. Comparing DASC with different \(p\), we can see that a larger number of
| | Dist-2 | Acc\({}_{E}\) | Interest | Sensible |
| --- | --- | --- | --- | --- |
| Rerank | 17.55 | 17.00 | - | - |
| CTRL | 21.07 | 43.38 | 1.91 | **3.00** |
| Director | _34.73_ | 61.88 | 1.62 | 2.27 |
| DASC | **26.71** | **65.38** | **2.08** | 2.82 |

Table 3: Robustness test results.
parameters generally improves the model's controllability, but even a relatively small \(p\) (\(p\)=512) already achieves high control accuracy. For generation quality, a moderate number of parameters achieves the best BertScore, and smaller ones do not significantly degrade performance. Director, which additionally uses nearly twice the parameters of the base model (210.98M vs. 116.26M), may be over-parameterized, which harms its generation quality.
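A back-of-the-envelope check of these counts is possible under the assumptions that the backbone's vocabulary size is \(|V|=21128\), the hidden size is \(d=768\), and \(K=13\) (the 13 labels of Sec. 4.1); these constants are our assumptions, not stated in this section.

```python
# Approximate extra-parameter counts of Table 4 under the assumed constants.
V, d, K = 21128, 768, 13  # assumed vocab size, hidden size, attribute count

def dasc_extra_params(p):
    # shared ATEMB (|V| x p) + K projections W^k (d x p each)
    return V * p + K * d * p

def director_extra_params():
    # K classifier heads of shape [|V|, d]
    return K * d * V

print(dasc_extra_params(512) / 1e6)   # ~15.93M, close to the reported 15.94M
print(director_extra_params() / 1e6)  # ~210.94M, close to Director's 210.98M
```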
### Case Study
Besides multi-aspect control as shown in Figure 1, we also present a proof-of-concept application in which DASC naturally blends two emotions in one generated response. We achieve this simply by setting both attributes' values to 1 instead of \(\phi\). The results are shown in Figure 5 and Figure 8. DASC successfully generates responses with either emotion alone or with the combination of both, where the latter can produce a potentially more vivid response.
| Method | #params | BScore | Avg Acc |
| --- | --- | --- | --- |
| baseline | - | 68.18 | 61.47 |
| DASC (\(p\)=512) | 15.94M | 70.18 | 92.56 |
| DASC (\(p\)=1024) | 31.88M | 70.12 | 92.72 |
| DASC (\(p\)=2048) | 63.75M | **70.42** | 93.97 |
| DASC (\(p\)=4096) | 127.50M | 70.26 | **94.42** |
| Director | 210.98M | 69.54 | 94.14 |

Table 4: Effect of the number of extra parameters on controllability and generation quality.
Figure 4: The t-SNE visualization of attribute context embedding of responses with different emotions.
Figure 5: DASC generates different responses to the same context given different emotions and their composition as control attributes. (Translated from Chinese, original text in Figure 7)
Figure 3: Comparison of two sets of token embeddings with t-SNE visualization: those from the language model (left) and from the attribute semantic space (right).
### ESConv Experiment
To further explore the potential of DASC, we also experimented on another dataset, ESConv Liu et al. (2021). It is an English dataset that aims to provide emotional support to help-seekers with 8 defined strategies. Here we use the human-annotated strategy labels as the control attributes and experiment with 3 methods: **Baseline**, **CTRL** and **DASC**. We report the automatic metric **Distinct-2** and the human-evaluated **Strategy Accuracy**, **Usefulness\({}_{(1-4)}\)** and **Sensibleness\({}_{(1-4)}\)**. As Table 5 shows, the control of relatively complex strategies is harder, and the accuracy is thus lower than in the previous experiment (Table 2). Nevertheless, DASC still achieves reasonable control accuracy and outperforms the other methods on all metrics. These results suggest that DASC is language-agnostic and can be effectively applied to many kinds of attribute control. We provide more details and generation examples in the Appendix.
## 5 Conclusion
In this paper, we propose DASC, a novel framework for multi-attribute controllable dialogue generation. It is established on the weighted decoding paradigm for strong controllability and further grounds it in an attribute semantic space, which enables the simultaneous control of multiple attributes with the interpolation of multiple attribute embeddings. Experiments on the control of gender style, emotion, dialogue act and emotional support strategies show that DASC can achieve strong controllability while also preserving high quality in its generation, even in out-of-distribution scenarios. DASC's capability of flexibly imposing control on different aspects and composing multiple attributes may open the possibility of various dialogue applications.
## Limitations
Some limitations of the proposed methods remain to be addressed in future research.
First, our experiment settings assume that the desired attributes are available for generation, which would require a separate dialogue policy to decide the attribute label provided to the model. Therefore, our model cannot be directly applied to end-to-end dialogue models, and may also be affected by the potential error propagation from the dialogue policy model. Since the intended use of DASC is to serve as a component of pipeline-style dialogue systems, these common issues in such systems are out of the scope of this work.
Moreover, we require attribute-annotated dataset to train the model. Therefore, we may not be able to train an effective model in scenarios where attribute labels are scarce or hard to solicit.
Last but not least, DASC is not directly applicable to controllable generation with free text as the control signal, such as persona descriptions Zhang et al. (2018), which might limit its application range. We may combine DASC with other techniques, like concatenating the descriptions, to achieve this goal, which will require further exploration.
## Ethics Statement
The proposed method is used for the control of gender style. As we noticed and discussed in Sec. 4.2, the model may resort to gender stereotypes when generating responses in a given gender style. The likely reason is that the dataset used to train the classifier already contains gender-biased labels; such biases are exploited by the classifier and passed on to the generation model through the automatically annotated labels. To avoid such effects, we may carefully clean the dataset of such biased labels Gehman et al. (2020), or mine such biased tokens and penalize them during weighted decoding.
Though the proposed method is mainly intended to improve the interestingness of the chatbot and endow the model with abilities like emotional support, such methods may also be applied maliciously. For example, one may introduce toxicity as an attribute and encourage the model to generate more toxic responses. Therefore, the application range of such techniques should be carefully restricted.
We adhere to the license of the used datasets.
|
2301.01606 | Predicting Learning Interactions in Social Learning Networks: A Deep
Learning Enabled Approach | We consider the problem of predicting link formation in Social Learning
Networks (SLN), a type of social network that forms when people learn from one
another through structured interactions. While link prediction has been studied
for general types of social networks, the evolution of SLNs over their
lifetimes coupled with their dependence on which topics are being discussed
presents new challenges for this type of network. To address these challenges,
we develop a series of autonomous link prediction methodologies that utilize
spatial and time-evolving network architectures to pass network state between
space and time periods, and that models over three types of SLN features
updated in each period: neighborhood-based (e.g., resource allocation),
path-based (e.g., shortest path), and post-based (e.g., topic similarity).
Through evaluation on six real-world datasets from Massive Open Online Course
(MOOC) discussion forums and from Purdue University, we find that our method
obtains substantial improvements over Bayesian models, linear classifiers, and
graph neural networks, with AUCs typically above 0.91 and reaching 0.99
depending on the dataset. Our feature importance analysis shows that while
neighborhood and path-based features contribute the most to the results,
post-based features add additional information that may not always be relevant
for link prediction. | Rajeev Sahay, Serena Nicoll, Minjun Zhang, Tsung-Yen Yang, Carlee Joe-Wong, Kerrie A. Douglas, Christopher G Brinton | 2023-01-03T17:53:04Z | http://arxiv.org/abs/2301.01606v1 | # Predicting Learning Interactions in Social Learning Networks: A Deep Learning
###### Abstract
We consider the problem of predicting link formation in Social Learning Networks (SLN), a type of social network that forms when people learn from one another through structured interactions. While link prediction has been studied for general types of social networks, the evolution of SLNs over their lifetimes coupled with their dependence on which topics are being discussed presents new challenges for this type of network. To address these challenges, we develop a series of autonomous link prediction methodologies that utilize spatial and time-evolving network architectures to pass network state between space and time periods, and that models over three types of SLN features updated in each period: neighborhood-based (e.g., resource allocation), path-based (e.g., shortest path), and post-based (e.g., topic similarity). Through evaluation on six real-world datasets from Massive Open Online Course (MOOC) discussion forums and from Purdue University, we find that our method obtains substantial improvements over Bayesian models, linear classifiers, and graph neural networks, with AUCs typically above 0.91 and reaching 0.99 depending on the dataset. Our feature importance analysis shows that while neighborhood and path-based features contribute the most to the results, post-based features add additional information that may not always be relevant for link prediction.
Deep learning, graph neural networks, link prediction, online social networks, social learning networks.
## I Introduction
Online education has exploded in popularity over the past few years, with estimates of up to 80% of students having taken an online course [2]. The advent of the COVID-19 outbreak has significantly increased the number of online learners since 2020, which in turn has demonstrated online platforms' viability as an additional tool in physical classrooms. This growth has not been without challenges, however; online learning has raised concerns about its apparent lack of quality control, extraordinarily low teacher-to-student ratios, and scarcity of high-quality teachers [2]. The COVID-19 pandemic has highlighted the lack of quality tools for both students and teachers across online learning providers, making navigation of these massive communities a daunting or impossible task.
One way course providers have attempted to mitigate these problems is by establishing online forums where students can learn from each other, thus compensating for a lack of personalized instruction by posting questions, replying with answers, and otherwise exchanging ideas. Massive Open Online Courses (MOOCs), as well as Q&A sites like Piazza, Quora, and StackOverflow, rely on forums extensively, generating a plethora of data about how users interact with one another online for learning purposes. These forums generate Social Learning Networks (SLNs) within communities of student users that evolve over time, facilitating peer-to-peer knowledge transfer in the absence of instructor intervention. Data-driven studies on the SLNs emerging from online learning forums have analyzed the benefits of social learning [3, 4] geared towards the ultimate goal of improving learning outcomes by, for example, proposing methods for instructor analytics [5] and news feed personalization [6].
In this work, we are motivated by the following research question: _Can link formation between learners in an SLN be predicted in advance?_ Such predictions would enable several new ways of improving online learning and forum experiences (e.g., encouraging early formation of learner groups or recommending that learners respond to newly-posted questions that they are expected to answer/contribute to later), thus helping to reduce the gap between in-person and online instruction.
SLNs, however, pose two key challenges that differentiate them from standard time-evolving social networks [44]. First, the SLN for an online course forms around the specific educational processes of that course [48, 8]. With an SLN, users connect as a result of specific learning needs, and in response to events that are exogenous to the discussion forum, e.g., the instructor releasing new content/assessments. On the other hand, homophily and pre-existing relationships are known to play a strong role in the evolution of standard social networks over time, which can provide initialization information for predicting learner interactions. An online SLN
tied to a specific course, on the other hand, exhibits a "cold start" from a state of little-to-no observable network. Second, links in SLNs are defined much more arbitrarily compared to other graphs [6]. On social media sites, links between users are typically quantified with concrete metrics such as 'friendships' or 'follows,' where the connection between two users is explicit and typically optional. In an SLN, by contrast, a link between two users should indicate a transfer/sharing of knowledge. Explicit connection metrics do not typically exist, and even if they did, they do not imply the users have shared information. As a result of these challenges, the prediction of link formation in SLNs cannot be easily solved using previous methods designed for general time-evolving graphs [43].
In this work, we develop a link prediction methodology, specifically tailored for addressing the challenges associated with SLNs, which analyzes a set of features describing (i) learner pairs in an SLN and (ii) the evolution of learner interactions over time. Our methodology is deep learning-based, allowing consideration for both time-variable features and latent learner characteristics. We evaluate our methodology on data collected from four MOOC discussion forums from Coursera and two courses at Purdue University. We then investigate how our methodology can be used to make recommendations that may enhance the timing and quality of replies to discussion posts, thus encouraging interactions and improving learner experience in discussion-based forums.
### _Related Work_
The link prediction problem has been studied extensively in the context of online and digitally-enabled social networks, due to its usefulness in generating recommendations such as friendships, follows, or other forms of interactions [8, 9, 10, 11]. Several methods have been proposed for this problem, beginning with unsupervised approaches and eventually transitioning to supervised methods in the past few years. In terms of unsupervised methods, [13] proposed using features based on node proximity and properties, while [14] and [15] applied a model to incorporate additional contextual and temporal features. On the other hand, supervised approaches have proposed random walk algorithms using labels to increase the likelihood of traversing formed links [16], while [17] and [18] proposed deriving features from exogenous sources and training models on them to predict future link formation. Previous work has additionally considered using supervised and unsupervised methods simultaneously for exploratory learning environments [19]. However, these works do not consider characteristics unique to social _learning_ networks; specifically, the potential dependence on discussion topics and the need for time-series modeling are not explicitly addressed. Research into SLNs has until this point been largely theoretical, although [20] provides a first look at the application of deep learning-based link prediction algorithms in a classroom setting. Additionally, unsupervised approaches have recently gained popularity for problems related to the classification of student behavior [12]. Although the central focus of our research is SLNs, unlike these works, our strictly supervised models specifically consider student _social_ characteristics for large classrooms.
Other works on online social networks have considered problems related to link formation, e.g., predicting the strength/repetition (rather than existence) of future links [21, 22, 23], predicting link types [12], or examining the effects of student confusion on SLNs [24]. The methods used and developed include linear regression/classification on network features and user demographics [21, 25], latent variable modeling of learner interaction frequencies [12], and dynamic models to account for the disappearance and strengthening of links over time [18]. Our models utilize some similar network features, but we consider the different prediction objective of pinpointing when links will form. In fact, given its high observed quality, we consider a time-series version of [26] as a potential model.
An SLN is fully described by several datasets that each capture a subset of student behavior inside the associated course. Recent papers choose to focus on one or a few of these datasets: e.g., student video-watching behavior [5], student performance [27, 28], student physical behavior [29], or discussion forum data [30, 31, 32, 33]. Our work is evaluated on a dataset similar to [32] in that it provides information gathered on student message-passing behavior in a discussion forum. The models created in these other works fundamentally differ from our focus on individual student relationships: [30] focuses on making group predictions from clusters of similar students, while [33] models changes in student behavior at critical points (e.g., exams and holidays).
Some recent works have focused on other aspects of different types of SLNs, e.g., MOOCs [12, 21, 35], Q&A sites [22, 36], and enterprise social networks [37, 38]. Our work is perhaps most similar to [2, 21] in that we study prediction for SLNs using topological features. The prediction objectives in these other works, however, are fundamentally different from our focus on predicting interactions between learners, in that they seek to predict course grades via video-watching behaviors [35] and student knowledge-state via learner post and reply frequencies [36].
### _Our Methodology and Contributions_
In this work, we propose a novel framework specifically tailored to perform link prediction in SLNs. Fig. 1 summarizes the main components of our methodology, which are further outlined in the following discussion.
#### I-B1 Input Feature Computation
We begin by extracting the discussion data from the considered forum to construct the SLN (Sec. II-A). Next, we engineer a set of features for each learner pair (Sec. II-B). Here, we define three groups of features that we consider: (i) neighborhood-based features that are determined from common neighborhoods, (ii) path-based features based on paths between learners, and (iii) post-based features that are determined from latent topic analysis of learner posts. Because a specific definition of what constitutes link formation between two users in an SLN does not exist, a key question when quantifying an SLN is how best to model learner interactions without loss of accuracy [6]. We address this through inference from forum data, with consideration for both quality of interaction [26] and timing.
#### I-B2 Prediction Model
The second component of our framework shown in Fig. 1 is the prediction model (Sec. II-C). We consider three different classes of predictors: (i) linear classifiers, (ii) graph neural networks (GNN), and (iii) gradient-based deep neural network classifiers (specifically, Bayesian neural networks, fully connected neural networks, convolutional neural networks, recurrent neural networks, and convolutional recurrent neural networks). The success of Bayesian models in static link prediction problems [40] motivates us to consider their performance in the time-evolving SLN setting, while GNNs offer efficient learning over graphs without explicit feature engineering [46]. However, we develop our core methodology around deep learning-based classifiers, because, as we will show, explicit feature modeling paired with various layer types, which can extract spatial or temporal patterns from the SLN features, result in more robust and accurate SLN link prediction.
#### I-B3 Evaluation and Analytics
To assess the quality of our models, we train and evaluate our considered prediction models on four MOOC discussion forums and two Piazza discussion forums, using an unsupervised method as a baseline (Sec. II-C1). Through our evaluation, we also generate four types of analytics. The first analytic is feature importance, which quantifies the importance of each considered feature group. The second and third analytics quantify time-dependent model parameters, including the closeness between the time of link prediction and actual link formation, as well as the relationship between features and the timing and quality of formed links. The fourth analytic explores the effects of varying classification architectures, where we analyze the importance of different architectures in different course types (e.g., quantitative vs. humanities). In addition to these analytics, we provide visualizations for instructors to interact with the results of our proposed framework and respond to changes in the course SLN. These visualizations encapsulate our analytics, allowing for interpretation by those not familiar with our model.
**Summary of Contributions:** In summary, our contributions are (i) developing a link prediction framework for SLNs, which learns based on topological and post-based features of user discussions (Sec. II), (ii) demonstrating that the combination of our features with spatial pattern-capturing neural networks obtains the most robust SLN link prediction quality over six datasets, with AUCs above \(0.90\) in each case (Sec. III), and (iii) developing a set of analytics for SLN link formation based on our link prediction framework (Sec. IV).
## II Social Learning Network Methodology
In this section, we formalize our SLN link prediction methodology. We first quantify an SLN from forum data (Sec. II-A) and define the particular features that are used as model inputs (Sec. II-B). We then develop an unsupervised predictor, linear classifiers, GNNs, and deep learning classifiers (Sec. II-C) for link prediction.
### _SLN Graph Model_
In order to define our features, we must first describe how link creation in an SLN model is inferred and quantified from online forum data.
#### II-A1 Online forums
The format of online forums differs by host site and by classroom needs. We identify two main types of forum structures to account for in our methodology.
**MOOC forum structure:** A large online forum such as those hosted on Coursera is typically comprised of a series of threads, with each thread in turn being comprised of one or more posts. Each post is written by a single user. A post, in turn, can have one or more comments attached to it. Given the observation that SLN forum users do not abide by the designation of post vs. comment consistently [6], we will not distinguish between them, instead referring to them both as posts. This structure of thread posts is depicted in Fig. 2(a).
**Q&A forum structure:** Another format, implemented by Piazza, forces a "Question/Answer" thread structure. The forum is constructed from a series of questions and their responses, with allowance for follow-up questions and responses. In contrast to traditional forums, a response on Piazza may have contributions from multiple users in the same block, rather than requiring a new comment from each user. Any question may have comments attached to it in the form of "follow-ups", which can in turn generate new responses. Using the observation listed above from [6] again, we do not distinguish between types of follow-up responses and label all responses after the initial question as posts. This alternate structure of thread posts is depicted in Fig. 2(b).
#### II-A2 Quantifying SLN link creation
A link \((u,v)\) is observed between learner \(u\) and another learner \(v\) if, in a specific time interval, both \(u\) and \(v\) contribute to a post in the same thread (e.g., by either creating the initial post or contributing via a follow-up post). We use this as the criterion for establishing the link \((u,v)\) in the SLN because it signifies that learner \(u\) and learner \(v\) have exchanged ideas and interacted in the same thread within a specific time interval.

Fig. 1: Summary of the application of our SLN link prediction framework in post-based courses.
To model the evolution of an SLN, we group its posts into different time intervals. Specifically, we divide all posts in a given thread into \(L\) equally spaced intervals. Fig. 2 illustrates this procedure for two example threads. We use \(y_{uv}(i)\) as an indicator variable for the formation of link \((u,v)\): \(y_{uv}(i)=1\) if a link between \(u\) and \(v\) has been created in any interval up to and including \(i\), and \(y_{uv}(i)=0\) otherwise. Thus, as in most social networks [38, 16], links persist over time in our SLN model. The SLN graph structure in any given interval \(i\) is then comprised of nodes corresponding to the learners \(u\) and edges \((u,v)\) corresponding to links between them. For the purpose of predicting future responses, we consider this interaction to be bidirectional, i.e., the resulting SLN is an undirected graph. Formally, we define \(\mathcal{G}(i)=[y_{uv}(i)]\) as the binary adjacency matrix of the SLN during interval \(i\); since links are bidirectional, \(\mathcal{G}(i)\) is symmetric.
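For concreteness, the interval-indexed adjacency matrices \(\mathcal{G}(i)\) can be assembled directly from post records. The sketch below assumes a simple (user, thread, interval) tuple representation of the forum data; this format and the function name are illustrative rather than taken from our actual pipeline.

```python
import numpy as np
from collections import defaultdict

def build_sln_adjacency(posts, num_users, L=20):
    """Build binary adjacency matrices G(1..L) from post records.

    posts: iterable of (user, thread, interval) tuples, with user in
    [0, num_users) and interval in [1, L]. Returns a list where G[i-1]
    is the symmetric adjacency matrix after interval i.
    """
    G = [np.zeros((num_users, num_users), dtype=int) for _ in range(L)]
    # Group posters by (thread, interval): co-posting creates a link.
    posters = defaultdict(set)
    for user, thread, interval in posts:
        posters[(thread, interval)].add(user)
    for (_, interval), users in posters.items():
        users = sorted(users)
        for a in range(len(users)):
            for b in range(a + 1, len(users)):
                u, v = users[a], users[b]
                G[interval - 1][u, v] = G[interval - 1][v, u] = 1
    # Links persist: y_uv(i) = 1 if the link formed in any interval <= i.
    for i in range(1, L):
        G[i] = np.maximum(G[i], G[i - 1])
    return G

# Toy example: users 0 and 1 co-post in thread 7 during interval 3.
G = build_sln_adjacency([(0, 7, 3), (1, 7, 3)], num_users=3)
assert G[0][0, 1] == 0 and G[2][0, 1] == 1 and G[19][0, 1] == 1
```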
We can also define subgraphs of \(\mathcal{G}(i)\) focusing on particular students. Fig. 3 visualizes the neighborhood for an individual, randomly selected student at a particular time instance, where first- and second-degree connections are considered. In addition to capturing detailed link-formation behavior evaluated later in this study, evaluating a visual representation from the perspective of a single student provides an intuition for individual student contributions and demonstrates the presence of "hub" students. The lack of multiple paths between students highlights the underlying sparse nature of \(\mathcal{G}(i)\), requiring users to traverse one long path rather than choose from several short connections. Additionally, the relatively small false positive rate (denoted by blue links in Fig. 3) demonstrates our framework's efficacy for link prediction, as we will describe further in Sec. III-C.
Two particular sets of learner pairs are of interest in the link prediction problem. We define
\[\Omega=\{(u,v):u,v\in N(\mathcal{G}),u\neq v\}, \tag{1}\]
i.e., all possible learner pairs in the SLN. We then define two subsets of \(\Omega\): \(\mathcal{G}(L)\), the set of formed links at the final time \(i=L\) (i.e., with \(y_{uv}(L)=1\)), and \(\mathcal{G}^{c}(L)=\Omega\setminus\mathcal{G}(L)\), the complement set of un-formed links (i.e., \(y_{uv}(L)=0\)). Note that \(|\mathcal{G}^{c}(L)|\gg|\mathcal{G}(L)|\) for each dataset (i.e., most learners are never linked). This large class imbalance between formed and unformed links informs our link prediction framework in Sec. II-C.
### _SLN Feature Engineering_
We now define our features, computed for each learner pair \((u,v),u\neq v\). These quantities serve as the inputs to our prediction algorithms in Sec. II-C.
**Neighborhood-based Features**: These features, as well as path-based features discussed next, are extracted from the topology of the graph. Letting \(N(\mathcal{G})\) be the set of nodes in the SLN \(\mathcal{G}\) and \(\Gamma_{u}(i)\subseteq N(\mathcal{G})\) denote the set of neighbors of \(u\) at time \(i\), the neighborhood-based features qualitatively measure the "similarity" of \(u\) and \(v\)'s neighborhoods [7]. They are quantified as follows:
1. _Jaccard coefficient_: \[\text{Ja}_{uv}=|\Gamma_{u}(i)\cap\Gamma_{v}(i)|/|\Gamma_{u}(i)\cup\Gamma_{v}(i)|\]
2. _Adamic-Adar index_: \[\text{Ad}_{uv}=\sum_{n\in\Gamma_{u}(i)\cap\Gamma_{v}(i)}1/\log|\Gamma_{n}(i)|\]
3. _Resource allocation index_: \[\text{Re}_{uv}=\sum_{n\in\Gamma_{u}(i)\cap\Gamma_{v}(i)}1/|\Gamma_{n}(i)|\]
4. _Preferential attachment score_: \[\text{Pr}_{uv}=|\Gamma_{u}(i)|\cdot|\Gamma_{v}(i)|\]
We let \(\mathbf{b}_{uv}\) denote the vector of these features for pair \((u,v)\). Note that a larger value of each of these features, roughly speaking, indicates that \(u\) and \(v\) share more common, low degree neighbors than they do with others.
**Path-based Features**: These features measure the proximity of \(u\) and \(v\) in the SLN. They are as follows:
5. _Shortest path length_ (\(\text{Lp}_{uv}\)): The length of the shortest path between \(u\) and \(v\).
6. _Number of paths_ (\(\text{Np}_{uv}\)): The number of shortest paths (i.e., of length \(\text{Lp}_{uv}\)) between \(u\) and \(v\).

We let \(\mathbf{a}_{uv}\) denote the vector of these features. Note that as Lp decreases, \(u\) and \(v\) become more closely connected, while a larger Np indicates more redundancy in these paths. A sketch computing both topological feature groups is given below.
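Both topological feature groups can be computed off-the-shelf: networkx ships implementations of the four neighborhood indices, and the path-based pair follows from its shortest-path routines. A minimal sketch (treating a disconnected pair as having infinite path length is our own convention):

```python
import networkx as nx

def topological_features(G, u, v):
    """Compute b_uv (neighborhood-based) and a_uv (path-based) for a pair."""
    pair = [(u, v)]
    ja = next(nx.jaccard_coefficient(G, pair))[2]        # feature 1: Ja
    ad = next(nx.adamic_adar_index(G, pair))[2]          # feature 2: Ad
    re = next(nx.resource_allocation_index(G, pair))[2]  # feature 3: Re
    pr = next(nx.preferential_attachment(G, pair))[2]    # feature 4: Pr
    try:
        lp = nx.shortest_path_length(G, u, v)                     # feature 5: Lp
        n_paths = sum(1 for _ in nx.all_shortest_paths(G, u, v))  # feature 6: Np
    except nx.NetworkXNoPath:
        lp, n_paths = float("inf"), 0  # disconnected pair
    return [ja, ad, re, pr], [lp, n_paths]

G = nx.Graph([(0, 1), (1, 2), (2, 3), (0, 2)])
b_uv, a_uv = topological_features(G, 0, 3)
print(b_uv, a_uv)  # e.g., Ja = 1/3, Lp = 2, Np = 1
```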
**Post-based Features**: Besides topology-based attributes, learners' interests in different course topics will also influence their probability of forming links in an SLN. In particular, we would expect those with similar topic interests to be more likely to post in the same thread, i.e., form links. We thus compare the topics of different learners' posts to compute another feature that shows the learners' similarity in interests.
Fig. 2: Example of how posts in two different forum structures are divided into time periods and how SLN link creation between the learners authoring these posts is modeled. Fig. 2(a) (left): model for a Coursera forum. Fig. 2(b) (right): model for a Piazza forum.

To do this, we apply the Latent Dirichlet Allocation (LDA) algorithm [39] on the dictionary of all course words (i.e., all unique words used in all the considered posts of a course) to extract a set, \(\mathcal{K}\), of latent topics across posts, and a model of posts as probability vectors over these topics. In our application, we view each post as a separate "document," since learners are likely to discuss many distinct topics over time. For each learner, \(u\), we obtain the latent topic vector of their posts through time \(i\) as the average of their post vectors through \(i\). We denote the set of topics for learner \(u\) that exceed a minimum threshold of coverage across their posts through time \(i\) as \(K_{u}(i)\). With this, we define the last feature, which captures the number of common topics between learners \(u\) and \(v\):
7. _Number of common topics_ (To): \(|K_{u}(i)\cap K_{v}(i)|\)
We use \(c_{uv}\) as the time-series version of To, i.e., the number of common topics discussed by \(u\) and \(v\).
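As an illustration of the topic-based feature, the sketch below derives \(K_{u}\) and To with scikit-learn's LDA implementation; the coverage threshold of 0.1 is an assumed placeholder rather than the value used in our experiments.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_sets(posts_by_user, n_topics=20, threshold=0.1):
    """posts_by_user: dict mapping learner -> list of post strings.
    Returns K[u], the set of topics covering at least `threshold`
    of learner u's average post-topic distribution."""
    users, docs, owners = list(posts_by_user), [], []
    for u in users:
        for post in posts_by_user[u]:
            docs.append(post)
            owners.append(u)
    X = CountVectorizer(stop_words="english").fit_transform(docs)
    theta = LatentDirichletAllocation(n_components=n_topics,
                                      random_state=0).fit_transform(X)
    K = {}
    for u in users:
        rows = [i for i, o in enumerate(owners) if o == u]
        avg = theta[rows].mean(axis=0)             # average post-topic vector
        K[u] = set(np.where(avg >= threshold)[0])  # topics above the threshold
    return K

K = topic_sets({"u": ["gradient descent converges slowly"],
                "v": ["stochastic gradient descent with momentum"]},
               n_topics=5)
to_uv = len(K["u"] & K["v"])  # feature 7: number of common topics
```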
### _Link Prediction Methodology_
As discussed in Sec. II-B, the features extracted from the graph topology contain spatially and temporally correlated patterns between learner pairs. Therefore, we employ prediction models that are capable of exploiting these patterns for accurate link prediction. In this capacity, we consider the efficacy of four distinct deep learning architectures for our proposed framework: (i) the fully connected neural network (FCNN), which offers effective latent space prediction; (ii) the convolutional neural network (CNN), which is highly effective for processing spatially correlated patterns; (iii) the long-short-term memory (LSTM) based recurrent neural network (RNN), which is desirable for time-series modeling; (iv) the convolutional recurrent neural network (CRNN), which extracts both spatial and temporal correlations. As baselines to these methods, and to demonstrate the necessity of the aforementioned classifiers and their corresponding architectures, we compare our proposed deep learning prediction framework to five traditional prediction models: an unsupervised predictor, two linear prediction models (support vector machines and linear discriminant analysis), a graph neural network [45], and a Bayesian neural network [40].
For a given pair of users \((u,v)\), the input feature vector into each of the following models is given by \(\textbf{e}_{uv}=[\textbf{b}_{uv},\textbf{a}_{uv},c_{uv}]\), while the target output is the link state \(y_{uv}(i)\in\{0,1\}\). In the following, we describe the latent state of each model as well as their corresponding training procedures.
#### II-C1 Unsupervised Predictor
We begin by using a simple prediction algorithm as a benchmark for the parameter-based models described below. Choosing the feature most associated with link formation, we follow [16] and turn the resource allocation index (Re) feature into an unsupervised predictor. To do this, we compute Re for each \((u,v)\in\Omega\), normalize the vector of values to \([0,1]\), and use this as \(\hat{y}_{uv}(i)\).
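In code, this baseline amounts to a normalization of the Re scores over all candidate pairs (min-max scaling is assumed here as the normalization scheme):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def unsupervised_re_predictor(re_scores):
    """Normalize resource-allocation indices to [0, 1] and use them
    directly as link-formation scores."""
    s = np.asarray(re_scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

# The scores can be thresholded or fed straight into an AUC computation.
y_hat = unsupervised_re_predictor([0.0, 0.4, 0.9, 0.1])
print(roc_auc_score([0, 0, 1, 0], y_hat))
```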
#### II-C2 Linear Classifiers
Next, we consider two relatively simple linear models for SLN link prediction: linear discriminant analysis (LinDA) and support vector machines (SVMs). Both models attempt to find a separating linear hyper-plane between learners who did and did not form links, but they are learned differently. Specifically, LinDA uses every sample during training and assumes that the samples in each class follow the same type of distribution and share a covariance matrix, whereas the SVM makes no prior assumptions on the data's distribution and fits its decision boundary using only the hardest-to-separate points (the support vectors).
#### II-C3 Graph Neural Networks (GNN)
GNNs are a class of neural networks for learning over datasets expressed as graphs. They have been employed to perform link prediction on a variety of graph topologies [45, 46]. A potential advantage of GNNs in our setting would be obviating much of the feature engineering in Sec. II-B, as they can learn directly from the graph structure. Thus, we compare the efficacy of GNNs to our proposed method for predicting link formation in SLNs. Specifically, we adopt a two-layer convolutional GraphSAGE model [47], where node attributes of the SLN are self-generated during training. Here, the adjacency matrix of the SLN is used as input into the GraphSAGE model at a given time in order to predict future links.
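For illustration, a mean-aggregator GraphSAGE encoder with a dot-product link scorer can be written in a few lines of plain PyTorch; the layer sizes and scoring head below are assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class MeanSAGELayer(nn.Module):
    """One GraphSAGE layer with mean aggregation: h' = ReLU(W [h ; mean(h_N)])."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h_neigh = adj @ h / deg  # mean over each node's neighbors
        return torch.relu(self.lin(torch.cat([h, h_neigh], dim=1)))

class SAGELinkPredictor(nn.Module):
    """Two-layer encoder; a pair (u, v) is scored by sigmoid(z_u . z_v)."""
    def __init__(self, in_dim, hid_dim=32):
        super().__init__()
        self.l1 = MeanSAGELayer(in_dim, hid_dim)
        self.l2 = MeanSAGELayer(hid_dim, hid_dim)

    def forward(self, x, adj, pairs):
        z = self.l2(self.l1(x, adj), adj)
        return torch.sigmoid((z[pairs[:, 0]] * z[pairs[:, 1]]).sum(dim=1))

# Toy usage: 4 nodes with self-generated (here random) node attributes.
adj = torch.tensor([[0, 1, 1, 0], [1, 0, 0, 0],
                    [1, 0, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
model = SAGELinkPredictor(in_dim=8)
scores = model(torch.randn(4, 8), adj, torch.tensor([[0, 3], [1, 2]]))
```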
#### II-C4 Deep Learning Classifiers
One potential limitation of linear classifiers is their small parameter space, which prevents learning intricate non-linear relationships between input features extracted from an SLN. GraphSAGE GNNs aim to address this challenge, but they lose the ability to model explicit features between node pairs. To mitigate each of these shortcomings, we propose a deep learning approach on specifically engineered features in which various characteristics of \((u,v)\) (e.g., spatial and time-varying properties) are expected to be learned for stronger prediction performance.

Fig. 3: A snapshot of the SLN graph model for a single user (represented by a unique ID string) and their close neighborhood. The visual demonstrates the lack of multiple paths between users, underlying the sparse nature of the graph.

| Forum | Course Title | Beginning | Duration | Users | Threads | Learner Pairs | Posts |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ml | Machine Learning | 4/29/13 | 12 | 4263 | 4217 | 73315 | 25481 |
| algo | Algorithms: Design and Analysis I | 9/22/14 | 13 | 3013 | 4656 | 50006 | 16276 |
| shake | Shakespeare in Community | 4/22/15 | 5 | 958 | 1389 | 66217 | 7484 |
| comp | English Composition I | 7/01/13 | 8 | 1862 | 1286 | 20083 | 8255 |
| f19 | Python for Data Science | 8/20/19 | 18 | 115 | 669 | 17000 | 2013 |
| s20 | Python for Data Science | 1/17/20 | 17 | 290 | 1129 | 44964 | 4955 |

TABLE I: Descriptive metrics on our six considered forum datasets: the title, beginning date (m/dd/yy), duration (weeks), and number of users, threads, learner pairs, and posts by the end. All courses were broken into 20 time instances.
Specifically, we propose five deep architectures for link prediction: the Bayesian neural network (BNN), the fully connected neural network (FCNN), the convolutional neural network (CNN), the recurrent neural network (RNN), and the convolutional recurrent neural network (CRNN). Each model (excluding the BNN) applies the Rectified Linear Unit (ReLU) activation function, given by \(\sigma(a)=\max\{0,a\}\), in its hidden layers, followed by a two-unit output layer with a softmax activation function, which allows for a probabilistic interpretation of link formation for a learner pair \((u,v)\). The architecture of each of our considered models is discussed below; the hyper-parameters of each model were empirically determined to best fit the diverse datasets utilized in Sec. III.
**Bayesian Neural Network (BNN):** The Bayesian neural network (BNet) model [40] defines the probability density of the latent variable \(\mathbf{z}_{uv}\) as a Gaussian:
\[P(\mathbf{z}_{uv}|\mathbf{e}_{uv})=\mathcal{N}(\mathbf{w}^{T}\mathbf{e}_{uv}, \sigma^{2}), \tag{2}\]
where \(\mathbf{w}\) is the weight vector and \(\sigma^{2}\) is the variance, both to be estimated when the model is trained. From this, \(y_{uv}\) is estimated according to
\[P(y_{uv}=1|\mathbf{z}_{uv})=\sigma(\mathbf{\phi}^{T}\mathbf{z}_{uv}+b), \tag{3}\]
where \(\mathbf{\phi}\) and \(b\) are a vector and scalar, respectively, to be estimated during training, and \(\sigma(\cdot)\) is the logistic sigmoid function given by \(\sigma(\cdot)=1/(1+e^{-(\cdot)})\).
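A single stochastic forward pass through Eqs. (2)-(3) can be sketched as follows, with the 10-unit latent layer described next; the parameter values are random placeholders standing in for trained estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def bnet_forward(e_uv, W, sigma, phi, b):
    """One stochastic forward pass: z ~ N(W e, sigma^2) elementwise (Eq. 2),
    then P(y = 1 | z) = sigmoid(phi^T z + b) (Eq. 3)."""
    z = W @ e_uv + sigma * rng.standard_normal(W.shape[0])
    return 1.0 / (1.0 + np.exp(-(phi @ z + b)))

# 7-dimensional pair feature vector e_uv mapped through 10 latent units.
e_uv = rng.standard_normal(7)
p = bnet_forward(e_uv, W=rng.standard_normal((10, 7)), sigma=0.1,
                 phi=rng.standard_normal(10), b=0.0)
print(p)  # probability of link formation under the sampled latent state
```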
Our BNN architecture is composed of a hidden layer encoding the latent variable \(\mathbf{z}_{uv}\). This hidden layer has 10 units, each representing a normal distribution with weight vector \(\mathbf{w}_{i}\) and variance \(\sigma^{2}\). Following this hidden layer is a dense output layer with the softmax activation function given in [40].

TABLE II: Summary statistics – SNR, mean and standard deviation (s.d.) – for the network features of the two link groups. The top row for each feature corresponds to formed links (\(y_{uv}(L)=1\)), and the bottom to non-formed links (\(y_{uv}(L)=0\)). Taken individually, the neighborhood-based features Re and Ad have the strongest correlations with link formation, while the topic-based To tends to have the least.
**Fully Connected Neural Network (FCNN):** FCNNs are considered a higher-dimensional non-linear extension of linear classifiers. Such models can potentially represent more sophisticated non-linear relationships for better link prediction. Our fully connected multi-layer artificial neural network is composed of two hidden layers, each containing 128 units.
**Convolutional Neural Network (CNN):** In addition to FCNN models, we also consider deep convolutional neural networks (CNNs), which in addition to providing a large parameter space for learning, capture spatial characteristics between features for each learning pair \((u,v)\). In the domain of link prediction, capturing spatial correlations between signal features is especially important since the majority of features (e.g., \(\mathbf{b}_{uv}\) and \(\mathbf{a}_{uv}\)) are extracted from the topology of the SLN graph. Our proposed CNN for link prediction is composed of two convolutional layers with 64 \(3\times 1\) feature maps and 32 \(2\times 1\) feature maps, respectively, followed by a 32-unit fully connected layer.
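A PyTorch rendering of this architecture might look as follows, treating the 7-dimensional pair feature vector \(\mathbf{e}_{uv}\) as a one-channel sequence; the absence of padding and the flattening step are our assumptions.

```python
import torch
import torch.nn as nn

class LinkCNN(nn.Module):
    """Two conv layers (64 3x1 and 32 2x1 feature maps), a 32-unit dense
    layer, and a two-unit output with softmax applied at inference."""
    def __init__(self, n_features=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=3), nn.ReLU(),
            nn.Conv1d(64, 32, kernel_size=2), nn.ReLU(),
        )
        conv_len = n_features - 3  # 7 -> 5 -> 4 after the two convolutions
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * conv_len, 32), nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, e_uv):
        # e_uv: (batch, n_features) -> add a channel axis for Conv1d.
        return self.head(self.conv(e_uv.unsqueeze(1)))  # two-class logits

model = LinkCNN()
probs = torch.softmax(model(torch.randn(64, 7)), dim=1)  # link probabilities
```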
**Recurrent Neural Network (RNN):** BNNs, FCNNs and CNNs, as well as linear classifiers, do not explicitly model the evolution of latent space variables over time based on \(\mathbf{e}_{uv}\). This could potentially provide useful information for modeling an SLN, particularly so that the predictor could respond to sudden changes in the input relative to the prior state. This may occur, for example, when the topic of the course shifts, which could be reflected in a sudden change in \(c_{uv}\).
To address this challenge, we consider a long-short-term memory (LSTM) based RNN with input \(\mathbf{d}_{uv}=[\mathbf{e}_{uv},\mathbf{h}_{uv}(i-1)]^{T}\), where \(\mathbf{h}_{uv}(0)=0\) and \(\mathbf{h}_{uv}(i-1)\) is the output vector from the previous time. We then define the interaction gate, relationship gain gate, and relationship fading gate vectors at each time interval, \(i\), as
\[\mathbf{g}_{uv}(i) =\psi(\mathbf{W}_{g}\mathbf{d}_{uv}(i)+\mathbf{b}_{g}), \tag{4}\] \[\mathbf{i}_{uv}(i) =\sigma(\mathbf{W}_{i}\mathbf{d}_{uv}(i)+\mathbf{b}_{i}),\] (5) \[\mathbf{f}_{uv}(i) =\sigma(\mathbf{W}_{f}\mathbf{d}_{uv}(i)+\mathbf{b}_{f}), \tag{6}\]
respectively. Here, \(\psi(\cdot)\) and \(\sigma(\cdot)\) are the tanh and sigmoid functions, respectively, and the matrices \(\mathbf{W}_{g}\), \(\mathbf{W}_{i}\), and \(\mathbf{W}_{f}\) as well as the vectors \(\mathbf{b}_{g}\), \(\mathbf{b}_{i}\), and \(\mathbf{b}_{f}\) contain parameters that are estimated during the model training procedure. Formally, the latent cell state, \(\mathbf{z}_{uv}(i)\), is updated as
\[\mathbf{z}_{uv}(i)=\mathbf{g}_{uv}(i)\odot\mathbf{i}_{uv}(i)+\mathbf{z}_{uv}(i-1)\odot\mathbf{f}_{uv}(i), \tag{7}\]
where \(\odot\) denotes element-wise matrix multiplication. An output gate, \(\mathbf{o}_{uv}(i)\), is then used to determine the factor to which each element of \(\mathbf{z}_{uv}(i)\) should be used in the definition of \(\mathbf{h}_{uv}(i)\):
\[\mathbf{o}_{uv}(i)=\sigma(\mathbf{W}_{o}\mathbf{d}_{uv}(i)+\mathbf{b}_{o}),\qquad\mathbf{h}_{uv}(i)=\sigma(\mathbf{z}_{uv}(i)\odot\mathbf{o}_{uv}(i)). \tag{8}\]
With this, \(y_{uv}(i)\) is estimated as
\[P(y_{uv}(i)=1|\mathbf{z}_{uv}(i))=\sigma(h_{1}(i)), \tag{9}\]
where \(h_{1}(i)\) is the first element of \(\mathbf{h}_{uv}(i)\). Our implemented RNN is composed of a 64-cell LSTM layer followed by a 128-unit fully connected layer.
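The gate updates of Eqs. (4)-(8) translate almost verbatim into code; the sketch below runs a single step for a single learner pair rather than the full 64-cell layer.

```python
import torch

def lstm_step(e_uv, h_prev, z_prev, params):
    """One recurrent step for a learner pair, following Eqs. (4)-(9)."""
    Wg, bg, Wi, bi, Wf, bf, Wo, bo = params
    d = torch.cat([e_uv, h_prev])          # d_uv(i) = [e_uv ; h_uv(i-1)]
    g = torch.tanh(Wg @ d + bg)            # interaction gate, Eq. (4)
    i = torch.sigmoid(Wi @ d + bi)         # relationship gain gate, Eq. (5)
    f = torch.sigmoid(Wf @ d + bf)         # relationship fading gate, Eq. (6)
    z = g * i + z_prev * f                 # latent cell state, Eq. (7)
    o = torch.sigmoid(Wo @ d + bo)         # output gate, Eq. (8)
    h = torch.sigmoid(z * o)               # output vector, Eq. (8)
    return h, z

dim, n_feat = 64, 7
params = [torch.randn(dim, n_feat + dim) if k % 2 == 0 else torch.randn(dim)
          for k in range(8)]
h, z = lstm_step(torch.randn(n_feat), torch.zeros(dim), torch.zeros(dim), params)
p_link = torch.sigmoid(h[0])  # Eq. (9): the first element of h drives the prediction
```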
**Convolutional Recurrent Neural Network (CRNN):** Convolutional recurrent neural networks contain both convolutional layers and recurrent LSTM layers. Although such models are typically computationally costly to train, they capture both spatial and time-varying correlations between learner pair feature vectors, thus combining the advantages of CNNs and RNNs in a single high-parameter deep learning model. Our proposed CRNN architecture consists of two convolutional layers, containing 64 \(3\times 1\) and 32 \(2\times 1\) feature maps respectively, followed by a 32-cell LSTM layer and a 32-unit fully connected layer.
Fig. 4: Cumulative distribution functions (CDFs) for each of the seven feature vectors from s20. CDFs of non-formed links are marked in blue, and CDFs of formed links are shown in orange. These demonstrate that there is (a) an observable difference in distribution between the two populations for each feature and (b) an inverse relationship between number of shortest paths and shortest path length.
#### II-C5 Deep Learning Parameter Training
We train each deep learning algorithm using the Adam optimizer as well as the categorical cross entropy loss function, which for our link prediction setup is given by
\[\mathcal{L}=-\frac{1}{N}\sum_{n=1}^{N}\sum_{j=1}^{2}y_{n,j}\log(\hat{y}_{n,j}), \tag{10}\]
where \(N\) is the total number of samples used to calculate the loss, \(y_{n,j}\) is the one-hot label of sample \(n\), and \(\hat{y}_{n,j}\) is the predicted probability of class \(j\) (for \(j=1\), the probability of link formation). Each model uses a batch size of 64 and a learning rate of 0.001. Finally, each model is trained for 300 epochs, which is sufficient for convergence on each dataset while still allowing convergence at slightly different optima, resulting in robust and reliable evaluation when used with k-fold cross validation, as further discussed in Sec. III-B.
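A generic training loop matching these hyper-parameters might be sketched as follows; the model and data here are placeholders.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, features, labels, epochs=300, batch_size=64, lr=1e-3):
    """Train a link predictor with Adam and the cross-entropy loss of Eq. (10).
    `model` returns two-class logits; `labels` holds class indices {0, 1}."""
    loader = DataLoader(TensorDataset(features, labels),
                        batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()  # applies log-softmax internally
    for _ in range(epochs):
        for e_uv, y_uv in loader:
            opt.zero_grad()
            loss_fn(model(e_uv), y_uv).backward()
            opt.step()
    return model

# e.g., train(LinkCNN(), torch.randn(500, 7), torch.randint(0, 2, (500,)))
```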
## III Link Prediction Evaluation
In this section, we begin by describing our considered courses along with their corresponding datasets (Sec. III-A) as well as our model evaluation procedure (Sec. III-B). We then evaluate our framework's performance for predicting link formation (Sec. III-C) and examine the time-accuracy of our prediction model (Sec. III-D).
### _Datasets_
We consider the SLNs formed in six courses: four Coursera-based MOOC courses and two traditional courses offered at Purdue University. The four MOOC courses - "Machine Learning" (ml), "Algorithms: Design and Analysis, Part 1" (algo), "English Composition I" (comp), and "Shakespeare in Community" (shake) - were selected to represent a diverse set of subjects: two quantitative in nature and two in the humanities. In addition, we also consider the course "Python for Data Science" hosted through Purdue University over two semesters: "Fall 2019" (f19) and "Spring 2020" (s20). The availability of data from two offerings of a single course provides a unique opportunity to evaluate behavior in a single course over multiple semesters. The s20 dataset is of particular interest because of its relation with the COVID-19 pandemic. Specifically, this course was held in-person from January - March, allowing students to begin forming in-person links, which carried into their relationship in the course's SLN. However, with the pandemic forcing a transition to fully online learning, link formation between students became completely dependent on discussion forum communication. The inclusion of the f19 and s20 datasets, which differ both in size and in format, demonstrate our framework's broad applicability to different online course formats in dynamic environments. Table I shows detailed metrics of the six considered datasets.
Fig. 5 summarizes the graph topology at the termination of each course under evaluation in terms of five social network metrics: number of nodes, number of edges, shortest path lengths (i.e., the \(\text{Lp}_{uv}\) feature), degree per node, and user clustering coefficients. The diverse nature of each course is evident from each of the shown metrics and particularly from the varying number of edges and nodes. We observe the largest differences between the Purdue f19 and s20 courses versus the MOOC courses: the f19 and s20 courses are significantly smaller in nodes/edges and also have significantly larger degree per node and clustering coefficients. We also observe the difference in both the number of edges and the average degree per node between the f19 and s20 courses, which demonstrates the increase in student utilization of discussion forums in the absence of in-person instruction.
Next, we describe the SLNs in terms of the features in Sec. II-B. We make several observations on associations with link formation within and across datasets before evaluating the link-prediction portion of our proposed framework.
#### III-A1 Data Preparation
To obtain a representative set of student behavior from a course, and to ensure that data gathered from each source is uniformly formatted, we filter each considered dataset. Specifically, we remove the instructors from the list of learners and remove all links formed between learners and instructors, since we are interested in developing models targeted towards peer-to-peer interaction, with the goal of requiring less direct instructor intervention. Furthermore, interactions before the beginning of a course are removed; only links formed during a course are considered. Both course-hosting sites offer learners an option for full anonymity; posts made anonymously are ignored, as we cannot make meaningful connections with unknown users. Enrolled learners who never accessed the forum (i.e., those with an empty row in the adjacency matrix) are also excluded, since a complete lack of forum behavior provides no signal for predicting future behavior; such students would likely benefit from more traditional intervention. After filtering, less than 2% of the learner pairs in each dataset exhibit a formed link. This underscores the extreme sparsity of formed links among learner pairs; the methodology applied to avoid overfitting is discussed further in Section III-B.
#### III-A2 Topic extraction
To obtain the post similarities \(c_{uv}(i)\), we must first extract the topics, \(\mathcal{K}\), and distributions for each post according to the LDA algorithm discussed in Sec. II-B. Prior to building the dictionary of topics, all URLs, punctuation, and stopwords are removed from each post's text, and all words are stemmed. Table III summarizes the topic extraction results for each dataset using \(|\mathcal{K}|\) = 20 topics; the top three words shown are from the five topics that have the highest supports across posts. We find that \(|\mathcal{K}|=20\) produces a set of topics that have reasonably large supports across posts while retaining granular information, i.e., being able to convey differences between student posts. In our manual inspection, larger values of \(|\mathcal{K}|\) lacked the support to generate informative features, while smaller values of \(|\mathcal{K}|\) resulted in too much intersection between topics for a good understanding of content.
### _Model Evaluation Procedure_
To evaluate the models proposed in Sec. II, we use the following metrics, training procedures, and evaluation criteria.
#### III-B1 Metrics
We use three metrics to evaluate prediction performance. First, we compute the overall Accuracy (ACC),
or the fraction of predictions over all time that are correct. For iteration \(k\), it is obtained as:
\[\frac{1}{|\Omega_{e}^{k}|\cdot L}\sum_{(u,v)\in\Omega_{e}^{k}}\sum_{i=1}^{L} \mathbb{1}\{y_{uv}(i)=\bar{y}_{uv}(i)\}, \tag{11}\]
where \(\bar{y}_{uv}(i)\in\{0,1\}\) is the binary prediction obtained by thresholding \(\tilde{y}_{uv}(i)\) and \(\mathbb{1}\) is the indicator function. Second, we compute the Area Under the ROC Curve (AUC), which assesses the tradeoff between true and false positive rates for a classifier [5]. Third, we define a metric called Time Accuracy (TAC) to be the fraction of links that are predicted to form within a fixed window \(w\) of when they actually form (among those that eventually form). Letting \(n_{uv}=\min\{i:y_{uv}(i)=1\}\) be the actual time at which link \((u,v)\in\Omega_{f}^{k}\) forms and \(\tilde{n}_{uv}=\min\{i:\tilde{y}_{uv}(i)=1\}\) the predicted time, the TAC is defined as
\[\frac{1}{|\Omega_{f}^{k}|}\sum_{(u,v)\in\Omega_{f}^{k}}\mathbb{1}\{|\tilde{n}_{uv}-n_{uv}|\leq w\} \tag{12}\]
for iteration \(k\), where \(\Omega_{f}^{k}\subset\Omega_{e}^{k}\) is the set of correctly predicted links in the test set that will eventually form. We compute the mean and standard deviation of each metric across three evaluation iterations.
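Once the first-formation intervals \(n_{uv}\) and \(\tilde{n}_{uv}\) have been extracted, Eq. (12) reduces to a short computation, sketched here on toy values.

```python
import numpy as np

def tac(pred_times, true_times, w):
    """Time Accuracy (Eq. 12): fraction of eventually-formed links whose
    predicted formation time lies within w intervals of the actual time."""
    pred, true = np.asarray(pred_times), np.asarray(true_times)
    return np.mean(np.abs(pred - true) <= w)

true_times = [3, 7, 12, 18]   # n_uv: first interval with y_uv(i) = 1
pred_times = [4, 7, 15, 16]   # corresponding predicted intervals
print([round(tac(pred_times, true_times, w), 2) for w in range(4)])
```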
#### III-B2 Training and Testing
\(k\)-fold cross validation is used to evaluate each predictor with \(k=10\). Following Sec. III-A, we again consider the link sets \(\mathcal{G}(L)\) and \(\mathcal{G}^{c}(L)\). Our objective is to train models capable of accurate link prediction despite the large class imbalance between \(\mathcal{G}(L)\) and \(\mathcal{G}^{c}(L)\) that will be observed during training and inference. To achieve this, we take an equal proportion of samples from both \(\mathcal{G}(L)\) and \(\mathcal{G}^{c}(L)\) to form each training fold, which, in turn, retains the overall class imbalance in the training set during each training iteration. The corresponding testing set of each training fold contains the same class imbalance. After each training fold, we calculate the metrics of interest on the respective testing set of the validation run. This sampling, along with the utilization of the AUC measurement, allows us to quantify the false alarm versus true positive rate, since the prediction accuracies on a poorly trained model could be very high due to the large class imbalance.
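This proportional sampling corresponds to stratified fold construction; one way to realize it is with scikit-learn's stratified splitter, as sketched below on a toy 2%-positive pair set.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def balanced_folds(e_all, y_all, k=10):
    """Yield (train, test) index splits in which every fold preserves the
    formed/unformed proportions of G(L) and G^c(L)."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    yield from skf.split(e_all, y_all)

y_all = np.array([1] * 2 + [0] * 98)   # ~2% formed links, as in our data
e_all = np.random.randn(100, 7)
for train_idx, test_idx in balanced_folds(e_all, y_all, k=2):
    print(y_all[train_idx].mean(), y_all[test_idx].mean())  # both ~0.02
```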
In each of the \(k\) iterations, we consider a set of time intervals from which the model parameters are estimated for each pair \((u,v)\in\Omega_{r}^{k}\) in the training set, using the training procedures in Sec. II-C5. Then, for each \((u,v)\in\Omega_{e}^{k}\), the inputs are used to make a prediction \(\tilde{y}_{uv}(i)\in[0,1]\) of the link state \(y_{uv}(i)\).
### _Link Prediction Evaluation_
Table IV gives the overall performance of the baseline, linear, GNN, and deep learning models in terms of the AUC and ACC metrics. Overall, we see that the CNN consistently outperforms the other predictors on each considered dataset. In addition, the GNN achieves strong (comparable to the CNN) prediction performance on the four MOOC datasets, but it performs poorly on both f19 and s20, achieving AUCs of 0.74 and 0.56 in f19 and s20, respectively. This behavior is consistent with observations in prior work [46] that GNNs require large datasets for effective generalization - a characteristic that MOOCs are able to provide (with at least 1,000 users in each case, see Table I) whereas the Purdue courses, f19 and s20, are not. Our explicit feature-engineered methodology paired with a CNN classifier, on the other hand, is more robust against variations in SLN course size and type in comparison to the GNN.

| Model | Metric | ml | algo | shake | comp | f19 | s20 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Re | AUC | 0.5005 ± 0.0004 | 0.5188 ± 0.0322 | 0.5061 ± 0.0034 | 0.5167 ± 0.0266 | 0.5689 ± 0.0401 | 0.5258 ± 0.0121 |
| Re | ACC | 0.5995 ± 0.0054 | 0.8338 ± 0.0104 | 0.8296 ± 0.0073 | 0.8349 ± 0.0082 | 0.9524 ± 0.0057 | 0.9599 ± 0.0020 |
| BNet | AUC | 0.9053 ± 0.0106 | 0.9488 ± 0.0058 | 0.8603 ± 0.0095 | 0.8684 ± 0.0116 | 0.7413 ± 0.0546 | 0.7495 ± 0.0269 |
| BNet | ACC | 0.9175 ± 0.0066 | 0.9805 ± 0.0019 | 0.9472 ± 0.0035 | 0.9492 ± 0.0026 | 0.9600 ± 0.0053 | 0.9672 ± 0.0013 |
| FCNN | AUC | 0.9766 ± 0.0003 | 0.9706 ± 0.0039 | 0.9670 ± 0.0059 | 0.9714 ± 0.0084 | **0.8991 ± 0.0367** | 0.8844 ± 0.0330 |
| FCNN | ACC | 0.9782 ± 0.0027 | 0.9871 ± 0.0029 | 0.9853 ± 0.0019 | 0.9850 ± 0.0022 | **0.9688 ± 0.0037** | 0.9729 ± 0.0022 |
| SVM | AUC | 0.9122 ± 0.0027 | 0.9523 ± 0.0050 | 0.8982 ± 0.0071 | 0.8618 ± 0.0071 | 0.8437 ± 0.0343 | 0.8203 ± 0.0113 |
| SVM | ACC | 0.9137 ± 0.0026 | 0.9755 ± 0.0035 | 0.9608 ± 0.0031 | 0.9462 ± 0.0022 | 0.9670 ± 0.0040 | 0.9700 ± 0.0015 |
| LinDA | AUC | 0.8486 ± 0.0056 | 0.8361 ± 0.0064 | 0.7521 ± 0.0116 | 0.7331 ± 0.0123 | 0.6940 ± 0.0146 | 0.6692 ± 0.0205 |
| LinDA | ACC | 0.8674 ± 0.0051 | 0.9425 ± 0.0018 | 0.9117 ± 0.0050 | 0.9084 ± 0.0056 | 0.9582 ± 0.0046 | 0.9602 ± 0.0026 |
| RNN | AUC | **0.9880 ± 0.0011** | **0.9808 ± 0.0026** | **0.9807 ± 0.0054** | **0.9770 ± 0.0071** | 0.8343 ± 0.0373 | 0.8329 ± 0.0349 |
| RNN | ACC | **0.9890 ± 0.0010** | **0.9902 ± 0.0013** | **0.9906 ± 0.0019** | **0.9877 ± 0.0030** | 0.9653 ± 0.0040 | 0.9710 ± 0.0024 |
| CNN | AUC | **0.9881 ± 0.0019** | **0.9817 ± 0.0029** | **0.9754 ± 0.0057** | **0.9763 ± 0.0055** | **0.9187 ± 0.0318** | **0.9221 ± 0.0169** |
| CNN | ACC | **0.9894 ± 0.0015** | **0.9916 ± 0.0009** | **0.9888 ± 0.0025** | **0.9882 ± 0.0022** | **0.9711 ± 0.0033** | **0.9740 ± 0.0015** |
| CRNN | AUC | 0.9680 ± 0.0094 | 0.9704 ± 0.0087 | 0.9608 ± 0.0066 | **0.9725 ± 0.0070** | **0.9803 ± 0.0468** | 0.8845 ± 0.0347 |
| CRNN | ACC | 0.9713 ± 0.0090 | 0.9846 ± 0.0… | – | – | – | – |

TABLE IV: Link prediction performance (AUC and ACC, mean ± standard deviation) of each considered model on the six datasets.
Of particular interest is the s20 dataset and its performance relative to the other five datasets. Because s20 was held partially in-person prior to the COVID-19 outbreak in March 2020, the behavior represented includes both in-person and online interactions. Furthermore, it contains a rapid change in behavior midway through the semester that models must account for. It follows from the high accuracies and AUCs demonstrated by each deep-learning model on this dataset that our prediction model can be applied to hybrid-online courses with a level of accuracy similar to fully online courses. It also suggests that our proposed model is responsive to large-scale shifts in student behavior. From Table IV, we see that neither the GNN nor the other baseline models are capable of capturing either of these desirable characteristics. As a result, we find that our proposed framework is capable of increasing both course quality and learner interactions during the pandemic, an attribute that can be leveraged to improve instruction in a post-pandemic course offering.
Considering all courses, the CNN model has slightly higher performance across the metrics and datasets, reaching average AUCs between 0.92 and 0.99 and average ACCs between 0.97 and 0.99. The AUC of Re is nearly random, yet its accuracy is high in all cases because of the large class imbalance; the linear classifiers likewise demonstrate high ACC values for the same reason. Although the Bayesian model consistently outperforms the baseline models, its lower accuracy and AUC relative to the CNN and CRNN models confirm our hypothesis from Sec. II that capturing spatial and temporal variance leads to improvement in the model. More specifically, the evolution of the state of an SLN between different time periods, both temporally and spatially, is important for predicting learner interactions; this aspect is effectively included in the LSTM-based CRNN. We further observe that the CNN model, capturing spatial variance, and the RNN, capturing temporal variance, each perform similarly to the CRNN model on several datasets. This suggests that while spatial and temporal variance each individually assist in prediction, their combined usage may not result in significant performance improvements.
Although accurate predictions are the most informative about the efficacy of a connection between learners, recommendations may also be supported by false predictions. If a high-accuracy model falsely predicts that two users will connect, we may infer that the formation of a link between these two users would be beneficial based on the model parameters. Conversely, there is a strong correlation between false negative predictions and weak links between learners, implying that the benefits of forming a connection between two such users would be trivial compared to other, more highly-weighted connections.
### _Early Detection of Link Formation_
The evaluation in Sec. III-C considers the ability to predict link formation in subsequent time intervals up until the end of the course. However, it does not consider whether links form at an earlier or later interval than predicted. These occurrences of a delay between link formation and prediction can lend additional important information to learners: if we can predict in advance which learners may form connections, we may encourage them to connect sooner, potentially resulting in a stronger connection or faster replies from learners expected to have delayed responses. On the other hand, if we find that a link forms much sooner than predicted by our model, this may indicate that learners would benefit from re-connecting on the current topic later in the course.
To study these cases, we evaluate the TAC metric from Sec. III-B for our RNN, CNN, FCNN, and CRNN models; i.e., we measure whether links form within a given window \(w\) of when they are predicted to. Note that the TAC metric was only calculated for the deep learning models, since they were consistently the best performing link formation predictors. The granular value of 20 time intervals used to generate the SLN graph model gives the predictive model access to more frequently updated features, and allows the model to respond quickly to changes in SLN behavior. Fig. 6 shows the TAC values as \(w\) is increased from 0 to 20 for several of our proposed deep learning prediction models. The sharp increase of each TAC curve for small \(w\) of each model - with the exception of the RNN - indicates that many links form close to when they are predicted to form, reinforcing our observations of model quality from other performance metrics in Sec. III-C. A window of \(w=2\), for example, is already sufficient for all six forums to reach a TAC of 0.5 or above.
Observing Fig. 6(e), which shows the TAC curve of the f19 dataset, it is clear that the TAC is lower for small datasets and that the performance of individual models varies more. This is largely attributed to the smaller number of learner pairs contained in the f19 dataset with which to train the model compared to a MOOC forum. However, with the exception of the RNN, we can observe the same curve shape and sharp initial increase present for larger datasets, indicating that TAC is both a consistent and useful evaluation metric of model performance. We can further observe in the ml, f19, and s20 datasets that the RNN model fails to consistently predict links within a small interval of when they actually occur, further suggesting that spatial features play a more important role in the problem of link prediction, as discussed further in Sec. IV-D.
Furthermore, there are very few links with large \(w\), once again reinforcing the results of other performance metrics. The small quantity of links with large \(w\) in each forum present a significant opportunity to recommend early formation of links (when predictions are early) and potential times for learners to reconnect (when predictions are late). Though there is less
room for change on links with smaller \(w\), learners may be more willing to act on recommendations in these cases since they induce less modification to actual behavior [6]; after all, a learner may be reluctant to reach out to others on the basis of outdated threads or on the assumption that they will eventually collaborate.
## IV Link Formation Analytics
In this section, we consider several descriptive analytic tools and visualizations for instructors. We first describe the evolution of model parameters during prediction (Sec. IV-A). We then examine the correlations between features (Sec. IV-B) and analyze their individual and collective impact on prediction (Sec. IV-C). Finally, we analyze the importance of the predictor's architecture in Sec. IV-D.
### _Time-Series Variable Evolution_
Because the hidden layers of deep-learning models cannot be understood intuitively, we provide an alternate form of visualizing their behavior. It is possible to observe the decisions made by the deep learning model during prediction by investigating changes in state for each model gate over time, and making inferences about the final prediction from these observations. The stability exhibited by the gates over time supports the viability of early link formation prediction from Sec. III-D. To demonstrate this, we consider an example of how the CRNN LSTM layer parameters specified in Sec. II-C for deep learning prediction models evolve over time.
By examining the relationship fading gate \(\mathbf{f}\) (the LSTM forget gate) in particular, we are able to demonstrate how the inputs from time interval \(i-1\) affect the model output at time interval \(i\), i.e., how much information is carried over from interval to interval. To do so, we choose a link \((u,v)\in\mathcal{G}(L)\) at random from algo, and feed \(\mathbf{e}_{uv}(i)\) into the trained model for \(L=20\) to generate the predictions \(\tilde{y}_{uv}(i)\). The prediction has high accuracy on the chosen link, which forms within one time interval of when it is predicted to form.
The neuron activation values for the gates \(\mathbf{g}\), \(\mathbf{i}\), \(\mathbf{f}\), \(\mathbf{o}\) and the state \(\mathbf{z}\) and output \(\mathbf{h}\) are additionally considered and shown in Fig. 7. The vertical axis is the vector dimension (i.e., neuron number), and the horizontal is the time instance \(i\). A few of the input gate dimensions, \(\mathbf{g}\), change at about the time the link is formed (around \(i=17\)). These changes propagate through the network, causing the output, \(\mathbf{h}\), as well as some dimensions of the intermediate gates (e.g., \(\mathbf{f}\), \(\mathbf{i}\), and \(\mathbf{o}\)) to change around \(i=17\) as well, thus forming an accurate prediction. The fact that \(\mathbf{i}\) and \(\mathbf{f}\) in particular tend to take extreme values indicates that the input, \(\mathbf{g}\), and prior state, \(\mathbf{z}\), are either fully passed or blocked.
We also observe that several dimensions in \(\mathbf{z}\) evolve gradually over time, with several non-zero dimensions in \(\mathbf{f}\) passing information across multiple time periods. This result helps explain why models using an LSTM layer in conjunction with other methods perform better than the Bayesian model: passing information from one time interval to another increases the prediction quality compared to only updating the input features at each time interval.
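For reference, a standard LSTM update consistent with the gate names used above is sketched below; the exact parameterization is the one specified in Sec. II-C, outside this excerpt, so the weight symbols here are illustrative:

\[\mathbf{g}_{uv}(i)=\tanh\!\left(W_{g}\,\mathbf{e}_{uv}(i)+U_{g}\,\mathbf{h}_{uv}(i-1)+\mathbf{b}_{g}\right),\]

\[\mathbf{i}_{uv}(i)=\sigma\!\left(W_{i}\,\mathbf{e}_{uv}(i)+U_{i}\,\mathbf{h}_{uv}(i-1)+\mathbf{b}_{i}\right),\qquad\mathbf{f}_{uv}(i)=\sigma\!\left(W_{f}\,\mathbf{e}_{uv}(i)+U_{f}\,\mathbf{h}_{uv}(i-1)+\mathbf{b}_{f}\right),\]

\[\mathbf{o}_{uv}(i)=\sigma\!\left(W_{o}\,\mathbf{e}_{uv}(i)+U_{o}\,\mathbf{h}_{uv}(i-1)+\mathbf{b}_{o}\right),\]

\[\mathbf{z}_{uv}(i)=\mathbf{f}_{uv}(i)\odot\mathbf{z}_{uv}(i-1)+\mathbf{i}_{uv}(i)\odot\mathbf{g}_{uv}(i),\qquad\mathbf{h}_{uv}(i)=\mathbf{o}_{uv}(i)\odot\tanh\!\left(\mathbf{z}_{uv}(i)\right).\]

Under this update, \(\mathbf{i}\) gates the input transform \(\mathbf{g}\) and \(\mathbf{f}\) gates the prior state \(\mathbf{z}(i-1)\), which is why extreme values of these two gates correspond to the input and prior state being either fully passed or blocked.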
### _Feature Correlations_
Investigating the relationship between individual features provides insights into the shape of an SLN in a different capacity than the predictions made by our deep-learning models, and it provides an analytical tool with which instructors can monitor an online classroom. Table II summarizes the distributions of \(\mathcal{G}(L)\) (top row) and \(\mathcal{G}^{c}(L)\) (bottom row), with the top 5% of outliers removed. We show the means and standard deviations (s.d.) of each feature for both groups, as well as the signal-to-noise ratio (SNR) for each feature. The large difference in magnitude for both mean and s.d. between formed and unformed links indicates a clear difference in behavior between these two groups. The large gap in values reinforces the results of our predictive algorithms discussed in Sec. III-B. The SNR measures how effectively a feature can distinguish between the two groups, with a higher magnitude indicating more efficacy [41]. We make a few impactful observations for link prediction from these statistics:
_(i) Infrequent short paths_: The length and number of shortest paths between learners are both negatively associated with link formation. The former is consistent with the intuition that learners who are closer together (i.e., smaller shortest path lengths) are more likely to form links. The latter, however, indicates that links are more likely to form when fewer such shortest paths exist, i.e., the paths should be unique. An interesting analogy can be drawn here to the small world phenomenon, where users can discover short paths in a social network even when only one or a few exist [7]; in other words, the presence of fewer short paths makes each of those neighboring connections more important and more likely to foster link creation.
_(ii) Low-degreed shared neighbors_: In order of increasing SNR, Ja, Re and Ad are each positively associated with link formation. Each of these measures the common neighborhood of two learners, with increasing penalty placed on the degrees of these neighbors (i.e., Ja does not include degree at all, while Re is inversely proportional to it). The fact that Ad has the highest SNR, then, implies that shared neighbors with fewer links are more prone to facilitate link formation, which is consistent with the point above on unique paths being more predictive.

Fig. 6: TAC with different windows \(w\). The TAC curves all exhibit sharp increases initially, indicating many links form around the time they are predicted to. The links at higher \(w\), on the other hand, indicate potential for recommending early link formation and future reconnection.
_(iii) Low ceiling feature values_: Taking the statistics present in Table II in conjunction with each feature's cumulative distribution function (CDF), shown in Fig. 4, it is evident for several features including To and Pr that no learner pairs reach the maximum possible value for the feature. Most notably with respect to To, the maximum number of shared topics between two connected users is always less than 15 of the 20 extracted topics. Given the highly connected nature of "hub" students that possess a large number of shortest path connections, it would be expected that the maximum number of shared topics would be 20. This discrepancy in number of shared topics suggests that hub students connect frequently with less-engaged students, but rarely interact with each other, creating smaller student ecosystems within the course centered around their knowledge dissemination. Another possibility is a difference in student knowledge state/engagement on particular topics, indicating that learners are more motivated to post about topics they are confident in or interested in learning and avoid topics they are not.
_(iv) Topology vs. post properties_: Pr and To are both positively associated with link formation, as one would expect: those with higher degrees (Pr) and focusing on similar topics (To) should be more likely to interact in the discussions. Surprisingly, though, these features have lower SNRs than the other neighborhood-based features, indicating that the network topology drives link formation in an SLN more than individual learner properties like a learner's tendency to post, for example, or topic interest. Furthermore, the SNR of To is higher in the less densely populated courses (f19 and s20), indicating that clearer signals may emerge around topics when there is less overall volume of discussion in the forums. This is consistent with the performance differential of the GNN model in link prediction on the large vs. small datasets, since it does not learn from topic features.
_(v) Quantitative vs. humanities courses_: Among the four MOOC courses, Pr is higher in comp and shake (particularly shake) than in ml and algo. This is consistent with humanities courses tending to invite more open-ended discussions, whereas quantitative courses have questions requiring explicit answers [6]. More learners would then be motivated to post in the forums of humanities courses - in fact, such participation may be a course requirement - leading to more links forming. Table I confirms the intuition that even with a smaller class size, comp and shake have a higher ratio of learner pairs to learners. The distinction between quantitative and humanities courses also helps explain in which settings temporal behavior is helpful for link prediction, as we will discuss in Sec. IV-D.
### _Feature Importance Analysis_
Recall in Sec. II-B that we define three groups of features: (i) Nei, which quantify the overlap between learner neighborhoods, (ii) Path, which are the length and number of shortest paths, and (iii) Post, or the similarity in what learners discuss. To complement the correlation analysis in Table II that was done for each feature individually, we now analyze the contribution of each feature type to the prediction quality of our CRNN model, by evaluating it using different input feature combinations.
To evaluate smaller groups of features using our CNN and CRNN models, a modification in model architecture is required. Our implementation of the CRNN model for computing links with all features contained both a \(3\times 1\) kernel layer and a \(2\times 1\) kernel layer. To classify samples using a subset of less than five of the seven features, the second convolutional layer using a \(2\times 1\) kernel was removed, leaving a single convolutional layer with a \(3\times 1\) kernel before the fully connected and output layers. This eliminates the issue of convolving a \(1\times 1\) output shape with an additional \(2\times 1\) kernel without requiring zero-padding. Determining the individual and combined effects of each feature group allows identification of potentially redundant features, which can improve computational speed when updating predictions in real time.
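As a concrete illustration of this modification, the sketch below shows the two configurations using one-dimensional convolutions over the feature axis; the channel counts, activation functions, and helper name are illustrative assumptions, since the hyperparameters are not listed in this section.

```python
import torch
import torch.nn as nn

def feature_conv_stack(n_features: int) -> nn.Sequential:
    """Convolutional front end over the feature axis. With all 7 features:
    Conv(3x1) -> Conv(2x1); with fewer than 5 features the 2x1 layer is
    dropped, since the first convolution can already reduce the feature
    axis to length 1 (the 1x1-output issue described above)."""
    layers = [nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3), nn.ReLU()]
    if n_features >= 5:
        layers += [nn.Conv1d(16, 16, kernel_size=2), nn.ReLU()]
    return nn.Sequential(*layers)

x_full = torch.randn(8, 1, 7)   # batch of 8 learner pairs, all 7 features
x_sub = torch.randn(8, 1, 3)    # a 3-feature subset, e.g. Post only
print(feature_conv_stack(7)(x_full).shape)  # torch.Size([8, 16, 4])
print(feature_conv_stack(3)(x_sub).shape)   # torch.Size([8, 16, 1])
```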
Fig. 7: Neuron activations of each gate \(\mathbf{f}_{uv}(i),\mathbf{g}_{uv}(i),\mathbf{h}_{uv}(i),\mathbf{i}_{uv}(i), \mathbf{o}_{uv}(i)\), and \(\mathbf{z}_{uv}(i)\) over time of the LSTM layer inside the CRNN model for two particular links \((u,v)\) in \(algo\). The fact that several gate dimensions are non-zero indicates that information is propagating across multiple time periods for prediction. The top row demonstrates activations for a link formed late in the course, and the bottom row demonstrates activations for an early-formed link.
Table V shows the results when each course is broken into 20 time periods. None of the combinations reach the performance of the original model with all input variables in Table IV, indicating that each feature group contributes to the prediction quality. The \(\texttt{Nei}+\texttt{Path}\) and \(\texttt{Path}+\texttt{Post}\) combinations show the highest overall performance across all six forums, indicating that the combination of \(\texttt{Nei}+\texttt{Post}\) has a confounding effect on the model - otherwise we would expect both Nei-based groups to share a higher AUC. Combining these values with the SNRs in Table II indicates that the Nei features contribute the most to model accuracy, followed by Post and then Path.
If we compare the individual feature groups, we generally find that the Nei features perform the best, followed by Path, and then Post. This is consistent with the behavior of these features within groups as well. This ordering of Post and Path is the opposite of the one suggested by the SNR magnitudes from Table II, where the single feature To outperforms the combined impact of Path. Given that Table II is concerned with the eventual formation of links but not the time at which they form, we conjecture that in the absence of Nei, Post is more important to pinpointing the time of link formation while Path is more important to whether links form at all. After all, the timing of particular topic coverage should influence when learners interested in those topics connect.
### _Model Architecture Analysis_
Here, we first analyze the importance of spatial pattern preserving convolutional layers and temporal pattern preserving recurrent layers for link prediction in SLNs. We find that, in general, classification models that incorporate only spatial pattern dependencies (CNN) outperform models that only incorporate time dependencies (RNN), as shown in Table IV. This is consistent with Table V, where we find that SLN topology features (i.e., neighborhood and path-based features), which explain spatial relationships between links, are the most important for accurate link prediction. However, we also find that incorporating time dependencies into link prediction models (e.g., RNN and CRNN) obtains strong performance in large courses such as MOOCs, whereas these models become less accurate on small courses such as f19 and s20. Interestingly, although the RNN accurately predicts whether links will form (as shown in Table IV), it does not accurately predict when the links will form, as shown by the TAC curves in Fig. 6, particularly on ml, f19, and s20. This behavior is consistent with such quantitative courses requiring short answers in fast time intervals, whereas the humanities courses typically involve threads of discussion that persist over longer periods of time [8]. In Fig. 7, we further explored the efficacy of recurrent layers by visualizing the various gates of the CRNN in the algo course, where we saw that information propagates from multiple time periods to aid link prediction after spatial patterns have been identified. This reinforces that recurrent layers may carry long-term information for link prediction, but convolutional layers are more robust in SLNs on both large and small courses.
In addition, as shown in Table IV, convolutional GNNs achieve strong link prediction performance on each of the MOOC datasets. Rather than employing our explicitly defined model features, GraphSAGE embeds features across the SLN topology that exploit spatial patterns, hence resulting in strong performance for these datasets captured by the GNN's convolutional layers. However, for the GNN to learn such discriminative features, it may require a large graph to train on [46], thus making the model less effective for smaller courses such as f19 and s20. Our proposed framework, in which we explicitly model features between node pairs, on the other hand, is better able to learn and generalize on the smaller datasets. More generally, these results indicate that in the SLN domain, informed feature engineering (i.e., using spatial features) paired with corresponding layers (i.e., convolutional) results in better trained models with less data than that required by GNNs. This is useful for generating analytics in the early stages of courses before a significant amount of links have formed (i.e., before interaction data has been observed) on the forums [5, 6].
## V Conclusion
In this work, we developed a link prediction framework specifically tailored to operate in social learning networks (SLNs) based on neighborhood-based, path-based, and post-based modeling features. Through evaluation in six different courses, we demonstrated our framework's ability to perform accurate link prediction in a variety of learning environments. In particular, we examined the efficacy of our framework on a course forced online after approximately eight weeks of traditional instruction due to the COVID-19 pandemic. In addition, we considered the SLNs formed in four Massive Open Online Courses (MOOCs) as well as one traditional undergraduate course, with a heavy reliance on student participation in an online discussion forum, offered through Purdue University.
While our work establishes an initial framework and results for link prediction in SLNs, many avenues remain for exploring the challenges of link prediction in this new type of online social network. One is additional feature engineering: other features that we did not consider - such as learners' background knowledge, level of education, and personal goals - may also be associated with link formation, and may allow further improvements in link prediction quality. As demonstrated here, our proposed framework is applicable across multiple datasets; thus, additional evaluation variants on forums or classes with different structures, such as those present in K-12 education, may be beneficial.

\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \multicolumn{2}{c}{Set} & ml & algo & shake & comp & f19 & s20 \\ \hline \hline \multirow{2}{*}{\(\texttt{Nei}+\texttt{Path}\)} & AUC & **0.9487 \(\pm\) 0.0241** & **0.9647 \(\pm\) 0.0091** & 0.8978 \(\pm\) 0.0303 & **0.9609 \(\pm\) 0.0093** & **0.8945 \(\pm\) 0.0330** & **0.9035 \(\pm\) 0.0261** \\ & ACC & **0.9528 \(\pm\) 0.0196** & **0.9884 \(\pm\) 0.0035** & **0.9693 \(\pm\) 0.0071** & **0.9801 \(\pm\) 0.0044** & **0.9695 \(\pm\) 0.0064** & **0.9732 \(\pm\) 0.0027** \\ \hline \multirow{2}{*}{\(\texttt{Nei}+\texttt{Post}\)} & AUC & 0.9398 \(\pm\) 0.0011 & 0.9399 \(\pm\) 0.0015 & 0.8541 \(\pm\) 0.0024 & 0.8922 \(\pm\) 0.0078 & 0.6735 \(\pm\) 0.0519 & 0.6346 \(\pm\) 0.0118 \\ & ACC & 0.9446 \(\pm\) 0.0008 & 0.9753 \(\pm\) 0.0006 & 0.9314 \(\pm\) 0.0050 & 0.9482 \(\pm\) 0.0029 & 0.9538 \(\pm\) 0.0015 & 0.9627 \(\pm\) 0.0011 \\ \hline \multirow{2}{*}{\(\texttt{Path}+\texttt{Post}\)} & AUC & 0.9332 \(\pm\) 0.0034 & 0.9455 \(\pm\) 0.0058 & **0.9255 \(\pm\) 0.0096** & 0.9444 \(\pm\) 0.0078 & **0.8832 \(\pm\) 0.0358** & 0.8848 \(\pm\) 0.0175 \\ & ACC & 0.9418 \(\pm\) 0.0031 & 0.9659 \(\pm\) 0.0028 & **0.9650 \(\pm\) 0.0038** & 0.9736 \(\pm\) 0.0039 & **0.9679 \(\pm\) 0.0051** & **0.9736 \(\pm\) 0.0022** \\ \hline \hline \end{tabular}
\end{table} TABLE V: Performance of the CRNN Model with selected input feature groups. The top two highest performing groups for each course metric are bolded. The combinations of \(\texttt{Nei}+\texttt{Path}\) and \(\texttt{Path}+\texttt{Post}\) outperform \(\texttt{Nei}+\texttt{Post}\) consistently, indicating that while neighborhood features are most important for prediction, the other feature types contribute significantly to link prediction as well.
|
2310.17725 | Gaps in the Main-Sequence of Star Cluster Hertzsprung Russell Diagrams | The presence of gaps or regions of small numbers of stars in the main
sequence of the Hertzsprung Russell Diagram (HRD) of star clusters has been
reported in literature. This is interesting and significant as it could be
related to star formation and/or rapid evolution or instabilities. In this
paper, using Gaia DR3 photometry and confirmed membership data, we explore the
HRD of nine open clusters with reported gaps, identify them and assess their
importance and spectral types. | Priya Hasan | 2023-10-26T18:36:43Z | http://arxiv.org/abs/2310.17725v1 | # Gaps in the Main-Sequence of Star Cluster Hertzsprung Russell Diagrams
###### Abstract
The presence of gaps or regions of small numbers of stars in the main sequence of the Hertzsprung Russell Diagram (HRD) of star clusters has been reported in literature. This is interesting and significant as it could be related to star formation and/or rapid evolution or instabilities. In this paper, using Gaia DR3 photometry and confirmed membership data, we explore the HRD of nine open clusters with reported gaps, identify them and assess their importance and spectral types.
HRD, star clusters, Gaia DR3, stellar evolution
## 1 Introduction
The Hertzsprung Russell Diagram (HRD) of star clusters is the holy grail for understanding stellar evolution and populations. It is a snapshot of stellar lives as a plot of color (temperature) versus magnitude (luminosity). The precise position of a star can be used to find various parameters, including its size, metallicity, and evolutionary state. The HRD traces stars at various phases of evolution along the main sequence and as they turn off to the giant branch and beyond. The HRD has been used to find the distances, ages, and reddening of star clusters.
The European Space Agency Gaia mission has provided unprecedented sub-milliarcsecond parallax precision for over a billion stars (Prusti, 2016; Gaia Collaboration et al., 2021) that can be utilized to study the precise locations of individual stars on the HRD as well as populations of stars. The accurate, all-sky data produces an HRD that shows previously unknown features.
Gaps or regions of low density of stars in the HRD have been reported by various authors (Hawarden, 1971; Bohm-Vitense and Canterna, 1974; Kjeldsen and Frandsen, 1991; Rachford and Canterna, 2000) and could be important milestones of stellar evolution. In this paper, we present a detailed study of main sequence gaps in the HRD of a sample of nine clusters of ages ranging from \(\log\,t=7.09-9.63\) and at distances \(889-2773\) pc using Gaia DR3 data. We use membership data and parameters from Cantat-Gaudin et al. (2020). We identify the gaps, assess their statistical significance using the \(\chi^{2}\) test and identify their spectral types.
## 2 Reported Gaps in Literature
Main sequence gaps in the HRD have been reported in the literature (Kjeldsen and Frandsen, 1991; Sagar and Joshi, 1978; Bohm-Vitense and Canterna, 1974; Rachford and Canterna, 2000) and are listed in Table 1. A gap was also found by Jao et al. (2018) in Gaia DR2 data at \(G\approx 10\). The gap is very narrow (\(\approx\) 0.05 mag) and is near the region in the HRD where M dwarf stars transition from partially to fully convective, near spectral type M3.0V.
## 3 Cluster Sample
We selected a sample of nine clusters with gaps confirmed in the literature. These are NGC 2169, NGC 2360, NGC 1778, NGC 6939, NGC 3680, NGC 2682, Trumpler 1, NGC 2420 and NGC 6134. We used the cluster parameters from Cantat-Gaudin et al. (2020), shown in Table 2, to convert magnitudes to the absolute scale. The table shows the coordinates of these clusters (RA and Dec); the angular size \(r50\), which is the radius containing half the number of members from the same reference; the logarithm of the age, \(\log t\); the extinction \(A_{V}\); the distance modulus \(DM\); and the distance to the cluster in parsecs.
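As a quick consistency check (not part of the original analysis), the distance modulus and distance columns of Table 2 are related by \(DM=5\log_{10}(d/10\,\mathrm{pc})\); the snippet below verifies two rows:

```python
# Distance modulus to distance: DM = 5*log10(d/10 pc), so d = 10**(DM/5 + 1).
for name, dm in [("NGC 2169", 10.15), ("NGC 2420", 12.06)]:
    d_pc = 10 ** (dm / 5 + 1)
    print(f"{name}: DM = {dm} -> d = {d_pc:.0f} pc")
# NGC 2169: ~1072 pc, NGC 2420: ~2582 pc, matching Table 2 to within rounding.
```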
## 4 Analysis
We use membership data from Cantat-Gaudin et al. (2020) for our sample of nine clusters. As described in Donada et al. (2023), we compute the extinction-corrected absolute magnitude and color as

\[M_{G}=G-\mu-0.89\,A_{V},\]

\[(BP-RP)_{0}=(BP-RP)-\frac{0.89}{1.85}\,A_{V}.\]

We plot the color magnitude diagrams (Fig. 1) and the luminosity functions (Fig. 2) to identify possible gaps in the HRD, listed in Table 3.
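A minimal sketch of these corrections; the function name and photometric values are illustrative, and the cluster parameters are those of NGC 2169 from Table 2:

```python
def dereddened(G, bp_rp, mu, A_V):
    """Absolute magnitude and dereddened colour from Gaia photometry,
    using the A_G = 0.89*A_V and E(BP-RP) = (0.89/1.85)*A_V coefficients
    from the equations above."""
    M_G = G - mu - 0.89 * A_V
    bp_rp0 = bp_rp - 0.89 / 1.85 * A_V
    return M_G, bp_rp0

# Example with the NGC 2169 parameters of Table 2 (DM = 10.15, A_V = 0.85):
print(dereddened(G=12.0, bp_rp=0.45, mu=10.15, A_V=0.85))
```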
The likelihood that the observed gap represents a chance variation can be estimated as follows. For the identified gaps, we calculate \(\chi^{2}=\frac{(N-N_{0})^{2}}{N}\), where \(N\) is the expected number of stars and \(N_{0}\) is the observed number of stars, as described in Hawarden (1971). We find the expected number as the average of the numbers before and after the gap. \(\chi^{2}\) is related to \(p\), the probability that the gap is a chance event. For \(\chi^{2}=4.0\) with one degree of freedom, the \(p\) value is 0.05. This means that the probability of the gap being significant is \(1-0.05=0.95\), that is, 95 %; a smaller \(p\) value thus implies a higher chance of the gap being significant. Table 3 lists the gaps found in our sample with their spectral types and significance. We notice that the gaps we found are of similar spectral types to those described in Table 1.

\begin{table}
\begin{tabular}{l r r r r l r} \hline
**Structure** & \(M_{V}\) & \((B-V)_{0}\) & \(\Delta M_{V}\) & \(\Delta(B-V)_{0}\) & **Sp Type** & **Temperature (K)** \\ \hline Mermilloid & 0.0 & -0.12 & 0.25 & & B8V & 12300 \\ Canterna Gap & 1.0 & -0.05 & 0.20 & & A1V & 9330 \\ A-bend & 1.3 & -0.02 & 0.7 & 0.05 & A2V & 9040 \\ A-group & 1.5 & 0.00 & 0.7 & 0.05 & A3V & 8750 \\ M11 gap & 1.7 & 0.05 & 0.5 & 0.5 & A4V & 8480 \\ Bohm-Vitense Gap & 2.8 & 0.25 & 0.3 & 0.05 & F0V & 7350 \\ NGC 6134-IC4651 gap & 4.5 & 0.5 & 1.0 & 0.15 & G2V & 5800 \\ \hline \end{tabular}
\end{table}
Table 1: Main Sequence Gaps (Kjeldsen and Frandsen, 1991)

Figure 1: HRD of the sample of 9 clusters.

Figure 2: Luminosity functions of the sample of 9 clusters.
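The significance computation above can be reproduced in a few lines; the helper name is illustrative, and the \(p\) value uses the \(\chi^{2}\) survival function with one degree of freedom, matching the convention in the text:

```python
from scipy.stats import chi2

def gap_significance(n_expected, n_observed):
    """Chi-squared statistic for a gap, chi2 = (N - N0)**2 / N, with the
    p-value for one degree of freedom; a smaller p means a more significant gap."""
    stat = (n_expected - n_observed) ** 2 / n_expected
    return stat, chi2.sf(stat, df=1)

# Example: the NGC 2682 gap at G = 4.2 in Table 3 (N = 80, N0 = 55.6).
stat, p = gap_significance(80, 55.6)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # ~7.44 and ~0.0064, as tabulated
```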
## 5 Conclusions
In this paper, we use Gaia DR3 data and membership data of Cantat-Gaudin et al. (2020) to study gaps in the main sequence of the HRD of star clusters. We use the \(\chi^{2}\) test to find the significance of the gaps. We compare the spectral types of earlier detections and find that they agree with our present results. Gaps were reported by Jao et al. (2018) in Gaia DR2 data for M dwarfs. In our sample, the membership data used is available only down to apparent \(G\) magnitude 18 and does not include M dwarfs; we reach spectral types only as late as G, and therefore do not find that gap in our data. A more detailed study of the HRD of star clusters is necessary to characterise these gaps further.
## Acknowledgments
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline
**Cluster** & **RA** & **Dec** & **Ang.Dia** & **log \(t\)** & \(A_{V}\) & **DM** & **Distance** \\ & (deg) & (deg) & (deg) & & (mag) & (mag) & (pc) \\ \hline NGC 2169 & 92.13 & 13.95 & 0.076 & 7.09 & 0.85 & 10.15 & 1072 \\ NGC 2360 & 109.44 & -15.63 & 0.154 & 9.01 & 0.39 & 10.25 & 1122 \\ NGC 1778 & 77.03 & 37.02 & 0.112 & 8.25 & 0.87 & 11.11 & 1663 \\ NGC 6939 & 307.9 & 60.65 & 0.123 & 9.23 & 0.85 & 11.3 & 1815 \\ NGC 3680 & 171.39 & -43.24 & 0.149 & 9.34 & 0.1 & 10.15 & 1072 \\ NGC 2682 & 132.85 & 11.81 & 0.167 & 9.63 & 0.07 & 9.75 & 889 \\ Trumpler 1 & 23.92 & 61.28 & 0.031 & 7.46 & 1.63 & 12.22 & 2773 \\ NGC 2420 & 114.6 & 21.58 & 0.053 & 9.24 & 0.04 & 12.06 & 2587 \\ NGC 6134 & 246.95 & -49.16 & 0.156 & 8.99 & 0.87 & 10.36 & 1182 \\ \hline \end{tabular}
\end{table}
Table 2: Cluster parameters (Cantat-Gaudin et al., 2020)
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline
**Cluster** & \(G_{median}\) & \((BP-RP)_{mean}\) & \(N\) & \(N_{0}\) & \(\chi^{2}\) & **p** & **T(K)** & **Sp Type** \\ \hline NGC 2169 & 2.2 & 0.3 & 4 & 0 & 4.0 & 0.05 & 6852 & F8V \\ & 5.6 & 1.45 & 7.5 & 3 & 2.7 & 0.1 & 3631 & K1V \\ \hline NGC 2360 & 0.8 & 0.12 & 17.4 & 12 & 1.67 & 0.196 & 8550 & A3V \\ & 1.7 & 0.3 & 21.6 & 17.8 & 0.67 & 0.41 & 7500 & A8V \\ & 2.8 & 0.45 & 24.3 & 18.5 & 1.38 & 0.24 & 7030 & F1V \\ \hline NGC 1778 & -1.11 & -0.03 & 3 & 0 & 3 & 0.08 & 9700 & A0V \\ & 1.5 & 0.14 & 14 & 7 & 7 & 0.008 & 8550 & A3V \\ \hline NGC 6939 & 3.0 & 0.5 & 80 & 70 & 1.25 & 0.26 & 6720 & F3V \\ \hline NGC 3680 & 5 & 0.95 & 8.5 & 4 & 2.38 & 0.1229 & 5280 & K0V \\ \hline NGC 2682 & 3.07 & 0.7 & 67.5 & 54.5 & 2.5 & 0.1138 & 6040 & F9V \\ & 4.2 & 0.77 & 80 & 55.6 & 7.44 & 0.0064 & 5880 & G1V \\ \hline Trumpler 1 & -2.0 & -0.08 & 2.5 & 1 & 0.9 & 0.34 & 10400 & B9.5V \\ & -0.25 & 0.018 & 5 & 2 & 1.8 & 0.18 & 9200 & A1V \\ & 1 & 0.121 & 6 & 5 & 0.16 & 0.69 & 8550 & A3V \\ & 1.75 & 0.25 & 4.7 & 3 & 0.617 & 0.43 & 7800 & A7V \\ \hline NGC 2420 & 3.25 & 0.55 & 54 & 44 & 1.85 & 0.17 & 6640 & F4V \\ & 4.25 & 0.8 & 59 & 53 & 0.61 & 0.43 & 5770 & G2V \\ \hline NGC 6134 & 2.5 & 0.5 & 30.6 & 27 & 0.42 & 0.62 & 6720 & F2V \\ & 3.15 & 0.6 & 37.4 & 31 & 1.09 & 0.29 & 6400 & F5V \\ & 3.6 & 0.675 & 39 & 34.4 & 1.09 & 0.29 & 6150 & F8V \\ & 4.09 & 0.748 & 39.5 & 33.5 & 0.91 & 0.34 & 5920 & G0V \\ & 4.62 & 0.83 & 44.6 & 28 & 6.17 & 0.012 & 5660 & G5V \\ \hline \hline \end{tabular}
\end{table}
Table 3: Details of Gaps found
## Further Information
### ORCID identifiers of the authors
0000-0002-8156-6940 (Priya Hasan)
### Conflicts of interest
The authors declare no conflict of interest.
|
2303.00361 | Development and Evaluation of a Narrow Linewidth Laser System for 171Yb+
E2 Transition | We report the construction and characterization of a narrow-linewidth laser
system to interrogate the E2 clock transitions at 436 nm of ytterbium ions
trapped in end-cap traps. The 871 nm seed laser at the fundamental frequency is
referenced to a 10 cm long notched ULE cavity. The output of the laser system
is delivered to a narrow-linewidth femtosecond fiber comb, which has been
referenced to an ultrastable 698 nm laser, with a phase noise-canceled fiber
link. The beat between the laser and the comb shows a sub-Hz linewidth, and
with a stability better than 2E-15@1~100 s. The performance of the
self-developed wavelength extension ports at 871 nm of the narrow linewidth
erbium-doped fiber comb with single-point frequency-doubling technique is also
verified. | Yani Zuo, Shiying Cao, Shaoyang Dai, Yige Lin, Tao Yang, Baike Lin, Fei Meng, Weiliang Chen, Kun Liu, Fasong Zheng, Tianchu Li, Fang Fang | 2023-03-01T09:41:52Z | http://arxiv.org/abs/2303.00361v1 | Development and Evaluation of a Narrow Linewidth Laser System for \({}^{171}\)Yb\({}^{+}\) E2 Transition
###### Abstract
We report the construction and characterization of a narrow-linewidth laser system for the interrogation of the E2 clock transition at 436 nm of ytterbium ions trapped in end-cap traps. The 871 nm seed laser at the fundamental frequency is referenced to a 10 cm long notched ULE cavity. The output of the laser system is delivered to a narrow-linewidth femtosecond fiber comb, which has been referenced to an ultrastable 698 nm laser, with a phase noise-canceled fiber link. The beat between the laser and the comb shows a sub-Hz linewidth and a stability better than 2E-15 at 1-100 s. The performance of the self-developed wavelength extension port at 871 nm of the narrow linewidth erbium-doped fiber comb with the single-point frequency-doubling technique is also verified.
Optical clock, narrow linewidth laser, optical frequency measurement, ytterbium-171, optical frequency comb
## I Introduction
With the development of optical frequency comb and ultra-stable laser technologies, optical clocks based on either optical-lattice-trapped atoms or trapped ions, whose performance surpasses that of the cesium fountain clocks currently realizing the SI second [1][2], are the most promising candidates for metrology advances such as the redefinition of the unit [3]. Compact optical clocks also allow for chronometric leveling between distant locations and many other fundamental physical tests [4].
Among diverse optical clock schemes, the ytterbium ion optical clock has broad application prospects [5][6][7][8]. The 171-isotope ytterbium ion has two clock transitions (E2 and E3) accepted as secondary representations of the second. The E3 transition of the 171-isotope ytterbium ion is insensitive to external field perturbations, exhibits strong relativistic effects, and has a natural linewidth at the nHz level. The ytterbium ion has a relatively large atomic mass, and the cooling laser wavelength is close to the dissociation wavelength of the dark hydride molecular ion, so ytterbium ions have a long storage time. Also, the lasers for manipulating ytterbium ions are accessible and relatively stable. These advantages make it possible to establish an optical clock with ultra-low uncertainty and to carry out more precision measurements.
Cavity-stabilized narrow linewidth lasers [9][10] are the local oscillators of optical clocks, and their linewidth and frequency stability directly affect the performance of the optical clock. They also have a variety of applications such as photonic generation of low-phase-noise microwaves, gravitational wave detection, and ultra-coherent spectroscopy.
We first build the Yb\({}^{+}\) optical clock with the quadrupole transition (E2) as the clock transition to verify the feasibility of the system. It is worth noting that the relatively short lifetime (53 ms) will limit the final performance of the E2-transition ytterbium ion clock, which leads to compromises in the system design. The ytterbium ion optical clock system includes the optical system, the physical system, and the data acquisition system. The system diagram is shown in Fig. 1.
We present and characterize an 871 nm laser system used for E2 transition interrogation, following the Conference on Precision Electromagnetic Measurements (CPEM) proceedings article [11], where we primarily demonstrated the system's design. An external cavity diode laser (ECDL) at 871 nm was frequency stabilized to a high-finesse optical cavity with the Pound-Drever-Hall (PDH) method before second harmonic generation (SHG). This extended article reports the laser system characterization results beyond the conference version. Since only one set of the 871 nm laser system was built, in order to evaluate its performance, a narrow-linewidth optical frequency comb at another laboratory was modified for beat note measurement. The comb is referenced to a 30-cm-long cavity-stabilized 698 nm laser system serving a strontium lattice clock.

Figure 1: The scheme diagram of the \({}^{171}\)Yb\({}^{+}\) E2 optical clock.
This work also established two phase-noise-canceled fiber links to deliver the laser to the reference cavity and to the evaluation beat-note module, effectively limiting the additional frequency noise introduced by the fiber links. The 871 nm wavelength extension port of the narrow linewidth erbium-doped fiber comb, using the single-point frequency-doubling technique, was constructed and its performance verified. Based on this evaluation system, this paper demonstrates the stability and linewidth of the ultra-stable laser system, meeting the current requirements of the E2 Yb\({}^{+}\) optical clock.
## II Setup of the Interrogation Laser System
The maximum excitation probability of a clock transition and the short-term stability of an optical clock benefit significantly from improved interrogation laser stability. Considering the signal-to-noise ratio of the ion optical clock and the natural linewidth of the E2 transition, we expect the short-term stability of the local oscillator to be below the quantum-projection-noise (QPN) stability limit of the system [13][14]. A 10-cm cavity scheme is used in this paper; lasers stabilized to cavities of this length have been studied extensively [17]-[27].
Therefore, a modified version of the 10-cm cavity design was adopted [11], as shown in Fig. 2. The cavity spacer's diameter is 5 cm, with a horizontal notch structure. The cavity is directly supported by four fluorinated rubber Viton pads. The support positions of the cavity are optimized by the finite element analysis (FEA) method. The acceleration sensitivity of the cavity is estimated to be 4E-10/g along the relevant direction [15][16].
The notched cylindrical optical cavity is held in a vacuum chamber providing effective isolation from environmental disturbances, as shown in Fig. 3. The cavity spacer is made of ULE glass with a low expansion coefficient. To control the temperature of the vacuum chamber, an active temperature control layer is combined with two inner passive temperature shields. A pair of high-reflectivity flat-concave mirrors are optically contacted to the ULE spacer. The radius of curvature of the concave mirror is 50 cm, and the cavity mirrors use fused silica (FS) substrates. The thermal noise limit of this reference cavity is estimated to be about 5E-16 [12]. The vacuum chamber uses a 20 l/s ion pump to maintain a pressure of 1E-5 Pa. In order to isolate the influence of ambient vibration, the vacuum chamber is placed on an active vibration isolation platform.
The optical layout diagram of the cavity-stabilized laser system is shown in Fig. 4.
Figure 4: Schematic diagram of the cavity-stabilized system. Red lines indicate optical signals, black lines denote the propagation of the electronic signal. All the components are height fixed at 50 mm to reduce vibration sensitivity. The dark gray block represents the optical breadboard placed on an active vibration isolation platform. FR: Faraday rotator; EOM: electro-optic modulator; AOM: acousto-optic modulator; PD: photo detector; APD: avalanche photodiode; FNC: fiber noise cancellation; SMPM: single mode polarization maintaining.
Figure 3: Drawing of the cavity mounted in the vacuum chamber.
Figure 2: Drawing of the optical cavity. (a) shows the optical cavity supported by four Viton pads. Dimensions are in mm. (b) shows the spacer displacement under acceleration at the insensitive holding position in the finite element analysis (FEA).
The laser is a commercial ECDL, in which a seed laser at 871 nm is amplified and frequency-doubled to 436 nm. The laser light was guided through a 5 m long single-mode polarization-maintaining (SMPM) fiber to an active vibration-isolation platform (AVI-200) inside a homemade acoustic isolation enclosure. The output end of this fiber is flat, so a small part of the light reflects back for fiber noise cancellation. The laser head of the 871 nm Toptica DL-Pro ECDL has a DC-coupled (DC Mod.) current modulation port, which can be used as a high-speed frequency feedback port.
The modulation frequency of the EOM is 20 MHz. In order to further reduce the influence of residual amplitude modulation (RAM) in the EOM [28][29][30], the polarization and incident direction of the light are carefully adjusted, and two optical isolators providing 60 dB and 35 dB of isolation are placed before and after the EOM. A fast low-noise photodetector detects the light reflected from the cavity, and the signal is demodulated and low-pass filtered to obtain a frequency discrimination (error) signal. The laser frequency is corrected by adjusting the laser current and the PZT. The locking bandwidth of the system is about 1.5 MHz. To reduce beam distortion and improve alignment stability, the optical height is fixed at 50 mm above the breadboard surface, and many homemade support mounts without flexible structures like springs were adopted. Using the cavity ring-down method, the finesse of the cavity is measured to be about 200,000.
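For reference, a minimal sketch of how a ring-down measurement maps to finesse, assuming the standard two-mirror relation \(F=\pi c\tau/L\); the measured decay time itself is not quoted in the text, so the value below is back-computed from the stated finesse:

```python
import math

c = 299_792_458.0   # speed of light, m/s
L = 0.10            # cavity length, m

def finesse_from_ringdown(tau):
    """Cavity finesse from the ring-down (photon storage) time tau,
    F = pi * c * tau / L for a two-mirror cavity of length L."""
    return math.pi * c * tau / L

# A finesse of ~200,000 for the 10 cm cavity corresponds to a ring-down
# time of roughly 21 microseconds:
tau = 200_000 * L / (math.pi * c)
print(f"tau = {tau * 1e6:.1f} us, F = {finesse_from_ringdown(tau):.0f}")
```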
Fiber noise cancellation (FNC) is employed to suppress the phase noise induced by the optical fiber that transmits light from the optical bench to the optical cavity. The intensity of the light reflected by the polarization beam splitter (PBS) before the optical cavity is stabilized by adjusting the driving power of the acousto-optic modulator (AOM) used for the fiber noise cancellation, as shown in Fig. 5.
Before being coupled into the SMPM fiber, the light is frequency shifted by a \(+1\)-order single-pass AOM operating at 80 MHz. This AOM plays a multifunctional role in the control system, performing cavity drift compensation, optical fiber noise cancellation, and power stabilization.
## III Characterization of the Laser System
Since only one 871 nm laser was built, for cost reduction and quick evaluation, a narrow-linewidth optical frequency comb referenced to the 30-cm-long cavity-stabilized 698 nm laser system [32] was used to perform the beat-note measurement. Since the stability of the 698 nm laser is on the order of E-16 [31], the stability of the beat signal reflects the stability of the 871 nm laser system.
A fiber link with a length of around 500 m between these two systems was established. The OFC can faithfully transfer the coherence of the 698 nm reference laser to any wavelength ranging from the visible to the infrared by the single-point frequency-doubling technique. The 871 nm laser is spatially overlapped with one of the comb branches after amplification, single-point frequency-doubling, and filtering. The heterodyne beat is down-converted and measured by a frequency counter. The beat note signal was filtered by a tracking filter in order to clean up the noise and improve the signal-to-noise ratio (SNR). The bandwidth of the tracking filter was adjusted to avoid copying electronic noise.
Figure 5: Diagram of the multifunctional control of the AOM.
Figure 6: Scheme of the beat-note measurement based on a narrow-linewidth optical frequency comb (OFC) referenced to the 30 cm-long-cavity stabilized 698 nm laser system.
Figure 7: Laser stability and linewidth measurements. Panel (a) shows the fractional frequency instability (ADEV) of the cavity-stabilized 871 nm laser system along with its calculated thermal noise floor. Panel (b) shows that the spectrum of the beat note reaches the sub-Hz level with an RBW of 0.25 Hz.
The beat exhibits a linear drift on the order of 2 Hz/s. Figure 7 shows the analysis of a 5-min time series of the beat note with this linear drift removed. Allan deviations are computed and shown in Fig. 7(a). The frequency instability of the heterodyne beat is reduced to 1.4\(\times\)10\({}^{-15}\) around 1 s. Currently, two noise contributors, the vibration noise and the temperature fluctuation of the passive thermal chamber, prevent the instability of the laser from being further reduced to the thermal-noise-limited value of 5\(\times\)10\({}^{-16}\). The linewidth of the beat note between the laser and the comb is 0.5 Hz.
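A sketch of the overlapping Allan deviation computation applied to such a drift-removed beat record; the data here are synthetic, and the 344 THz carrier corresponding to 871 nm converts Hz fluctuations to fractional frequency:

```python
import numpy as np

def overlapping_adev(y, tau0, m):
    """Overlapping Allan deviation of fractional-frequency data y sampled
    every tau0 seconds, at averaging time tau = m * tau0."""
    y = np.asarray(y, dtype=float)
    avg = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample means
    d = avg[m:] - avg[:-m]
    return m * tau0, np.sqrt(0.5 * np.mean(d ** 2))

f0 = 3.442e14                                 # optical carrier at 871 nm, Hz
rng = np.random.default_rng(1)
beat_hz = rng.normal(0.0, 0.5, size=3000)     # drift-removed beat, 1 s samples
y = beat_hz / f0                              # fractional frequency
for m in (1, 10, 100):
    tau, sigma = overlapping_adev(y, 1.0, m)
    print(f"tau = {tau:5.0f} s, ADEV = {sigma:.2e}")
```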
## IV Conclusion and Discussion
We have developed an ultra-stable 871 nm laser referenced to a 10-cm optical cavity operated at room temperature. The frequency instability of the cavity-stabilized 871 nm laser reaches a minimum of 1.4E-15 around 1 s. The measured linewidth of the cavity-stabilized laser is 0.5 Hz. The comparison between different wavelengths (698 nm, 871 nm) was achieved using a phase-coherent fiber link. This also verified the measurement capability of the narrow linewidth OFC at this wavelength. These results lay the foundation for ongoing projects on the ytterbium ion optical clock. The evaluation system enables future comparisons of two optical frequency ratio measurements. The performance of the laser system is good enough for it to serve as the interrogation laser for the E2 transition. Further improvements will be considered to enhance the cavity temperature control and optimize the vibration-insensitive cavity-support structure. More precise comparisons between different frequencies will also require eliminating the inter-branch non-common-mode noise involved in OFCs [33][34].
|
2305.04289 | Optimized pilot distribution to track the phase noise in DFT-s-OFDM for
sub-THz systems | In this paper, we focus on the selection of a pilot pattern to track the
phase noise in high frequency bands in a DFT-s-OFDM chain. By considering the
Wiener filter at the receiver side to perform the tracking, we use the inner
cost function of the said filter as the cost function for the pilot selection.
To obtain this cost function, the phase noise autocorrelation is required.
Therefore, we introduce a new mathematical approximation of the autocorrelation
function of the practical 3GPP phase noise model. At first, this leads to an
analytical expression of the Wiener filter coefficients. Then, the said
coefficients allow us to obtain an analytical expression of the cost function.
Thus, by means of this result, we are able to provide a pilot pattern that
jointly satisfies a constraint on the pilot overhead and a constraint on the
minimum performance of the Wiener filter. | J. -C. Sibel, V. Corlay, A. Bechihi | 2023-05-07T14:35:31Z | http://arxiv.org/abs/2305.04289v1 | # Optimized pilot distribution to track the phase noise in DFT-s-OFDM for sub-THz systems
###### Abstract
In this paper, we focus on the selection of a pilot pattern to track the phase noise in high frequency bands in a DFT-s-OFDM chain. By considering the Wiener filter at the receiver side to perform the tracking, we use the inner cost function of the said filter as the cost function for the pilot selection. To obtain this cost function, the phase noise autocorrelation is required. Therefore, we introduce a new mathematical approximation of the autocorrelation function of the practical 3GPP phase noise model. At first, this leads to an analytical expression of the Wiener filter coefficients. Then, the said coefficients allow us to obtain an analytical expression of the cost function. Thus, by means of this result, we are able to provide a pilot pattern that jointly satisfies a constraint on the pilot overhead and a constraint on the minimum performance of the Wiener filter.
Sub-THz bands, phase noise, Wiener filter.
## Introduction
3GPP NR specifications enable the 5G technology to take advantage of high frequency ranges, namely mmWaves, up to 71 GHz [1]. It is expected that sub-THz frequencies and beyond will be used for the 6G technology [2]. This opens the door to greater bandwidths, higher data rates, etc. However, working with such high frequency values is not straightforward because of the limited capability of hardware components. Among other impairments, phase noise is a phenomenon that stems from these limitations and that has an increasingly negative impact on signals as the carrier frequency increases. Even though some futuristic components might be developed to remove this limitation, the 3GPP specifications provide dedicated pilots called Phase Tracking Reference Signals (PT-RS) that enable estimating and tracking the phase noise so as to maintain good communication performance [3].
The PT-RS can be distributed in several manners depending on the waveform, the bandwidth, the subcarrier spacing, the Modulation and Coding Scheme (MCS), etc. In this paper, we consider an Orthogonal Frequency-Division Multiplexing (OFDM) chain with a Discrete Fourier Transform (DFT) precoding, simply called DFT-s-OFDM [4]. In this case, the PT-RS are inserted in the time domain before the DFT precoding in equally spaced groups. During the specification phase, the decisions on the values of group sizes and group spacings were made based on numerical evaluations of the whole DFT-s-OFDM chain from several companies [5]. The drawback of this approach is that the results also depend, at least, on the MCS, which is external to the tracking algorithm. It would be more relevant to select the PT-RS pattern based on the performance of the tracking algorithm only.
In this paper, we focus on the Wiener filter [6] to track the phase noise, a scheme commonly used in wireless communications. The filter coefficients are obtained by minimizing a cost function \(J\) that is determined by the autocorrelation function of the phase noise. An analytical expression of this autocorrelation is not available for the practical 3GPP model [7] usually used in the literature. To overcome this difficulty, we propose an analytical approximation of the said autocorrelation function based on its graphical shape, which helps obtain an analytical expression of \(J\). This effort gives us the possibility to analyze and predict the behavior of \(J\) as a function of the simulation parameters, e.g., the carrier frequency and the PT-RS spacing. Indeed, including one constraint related to an arbitrary maximum value of \(J\) and one constraint related to a maximum overhead for the sake of spectral efficiency, we are able to extract the PT-RS spacing that minimizes \(J\), i.e., that offers the best performance of the Wiener filter. As in the 3GPP specifications, we consider the PT-RS to be equally spaced, and we simplify the study by assuming that the PT-RS groups are of size one.
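For readers unfamiliar with the construction, the sketch below illustrates, on a toy exponential autocorrelation (standing in for the 3GPP model approximation developed in the paper), how the Wiener coefficients and the cost \(J\) follow from the phase-noise autocorrelation; the function and variable names are illustrative, not the paper's notation.

```python
import numpy as np

def wiener_weights(R_phi, pilot_idx, target_idx, sigma2):
    """Wiener coefficients for estimating the phase at target_idx from noisy
    pilot observations: w = (R_pp + sigma2*I)^-1 r_pt, with residual MSE
    J = R_tt - r_pt^T w, all built from the phase-noise autocorrelation R_phi."""
    R_pp = R_phi[np.ix_(pilot_idx, pilot_idx)] + sigma2 * np.eye(len(pilot_idx))
    r_pt = R_phi[np.ix_(pilot_idx, [target_idx])]
    w = np.linalg.solve(R_pp, r_pt)
    J = R_phi[target_idx, target_idx] - float(r_pt.T @ w)
    return w.ravel(), J

# Toy exponentially decaying autocorrelation, pilots every 8 samples.
N, rho = 64, 0.99
R_phi = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
w, J = wiener_weights(R_phi, np.arange(0, N, 8), target_idx=4, sigma2=0.01)
print(J)   # residual MSE at one data position; grows with pilot spacing
```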
**Contributions -** We provide an analytical model of the autocorrelation function for the 3GPP phase noise model. Based on this, we provide the analytical expression of the Wiener filter coefficients. From this, we derive an analytical form of the cost function \(J\) as a function of the PT-RS spacing. We then show through numerical evaluations that a linear approximation of \(J\) performs equivalently. We finally show how to select the PT-RS spacing that satisfies a constraint on \(J\) as well as a constraint on the PT-RS overhead.
The paper is structured as follows. Section I presents the considered communication chain and describes our approximation of the 3GPP phase noise model. Then, Section II details the Wiener filter computation with the derivation of the filter coefficients based on our approximation model. Section III presents the associated derivation of the cost function of the Wiener filter using the previously computed filter coefficients and an analysis of its behavior. Finally, Section IV describes the method for obtaining the PT-RS spacing that fulfills a maximum cost constraint and a maximum overhead constraint, with an application example. |
2308.02604 | Computational modeling to determine the physical characteristics of
biological tissues for medical diagnosis | Timely diagnosis of breast cancer is an important task. This type of breast
cancer is one of the most common diseases. The method of microwave
radiothermometry is a promising direction for solving this problem. The method
is based on measuring internal temperature of biological tissues in microwave
frequency range. Computer simulations are used to improve the quality of
diagnostics. Computer models make it possible to evaluate the effect of heat
release in a malignant tumor on the thermal dynamics inside the mammary gland.
It is necessary to build personalized models, taking into account the
individual nature of the internal structure of the mammary gland in each
patient. One of the problems is the determination of biophysical
characteristics of biological components. Methods for determining these
characteristics using computer simulations are proposed. The coefficient of
thermal conductivity and specific heat of biological tissues are determined
from known temperature distributions. Finding the physical parameters for a
quasihomogeneous biological tissue is the first approximation for solving this
problem. The least squares method is used as a solution method. The results
obtained are in good agreement with previously known exact solutions, which
indicates the applicability of this method for solving this class of problems.
The efficiency of using parallel technologies in solving the inverse problem is
investigated and the applicability of Open MP technology is demonstrated. | Maxim Polyakov | 2023-08-04T09:41:18Z | http://arxiv.org/abs/2308.02604v1 | Computational modeling to determine the physical characteristics of biological tissues for medical diagnosis +
###### Abstract
Timely diagnosis of breast cancer is an important task. This type of breast cancer is one of the most common diseases. The method of microwave radiothermometry is a promising direction for solving this problem. The method is based on measuring internal temperature of biological tissues in microwave frequency range. Computer simulations are used to improve the quality of diagnostics. Computer models make it possible to evaluate the effect of heat release in a malignant tumor on the thermal dynamics inside the mammary gland. It is necessary to build personalized models, taking into account the individual nature of the internal structure of the mammary gland in each patient. One of the problems is the determination of biophysical characteristics of biological components. Methods for determining these characteristics using computer simulations are proposed. The coefficient of thermal conductivity and specific heat of biological tissues are determined from known temperature distributions. Finding the physical parameters for a quasihomogeneous biological tissue is the first approximation for solving this problem. The least squares method is used as a solution method. The results obtained are in good agreement with previously known exact solutions, which indicates the applicability of this method for solving this class of problems. The efficiency of using parallel technologies in solving the inverse problem is investigated and the applicability of Open MP technology is demonstrated.
**Keywords**: Numerical methods \(\cdot\) biotissues \(\cdot\) microwave radiometry \(\cdot\) mathematical modeling \(\cdot\) heat dynamics \(\cdot\) mammary gland \(\cdot\) parallel computing \(\cdot\) diagnosis of breast cancer \(\cdot\) thermometric data
## 1 Introduction
The current level of development of computer technology allows us to significantly expand the class of applied problems which can be solved using simulation methods. Many scientific and technical problems which have been solved analytically are nowadays solved by numerical methods using relevant software for engineering analysis. In experimental studies of transient thermal processes, it is sometimes impossible to conduct direct measurements of required physical quantities, and these characteristics are inferred from the results of indirect measurements. The only way to find the required physical quantities for such problems is to solve the inverse boundary problems for heat conduction.
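As a minimal illustration of such an inverse formulation, the sketch below recovers the thermal diffusivity of a quasi-homogeneous 1-D tissue slab from a simulated temperature history by least squares. Note that from temperature data alone only the ratio \(k/(c\rho)\) (the diffusivity) is identifiable, so separating the thermal conductivity and specific heat, as done in this paper, requires additional information such as a known heat source. All names and values here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def midpoint_history(alpha, T0, n_steps=200, dt=0.1, dx=1e-3):
    """Explicit 1-D heat equation dT/dt = alpha * d2T/dx2 with fixed
    boundary temperatures; returns the mid-point temperature history."""
    T = T0.copy()
    r = alpha * dt / dx**2          # must stay below 0.5 for stability
    out = np.empty(n_steps)
    for n in range(n_steps):
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        out[n] = T[T.size // 2]
    return out

# A warm spot relaxing inside a 5 cm slab of quasi-homogeneous tissue.
x = np.linspace(0.0, 0.05, 51)
T0 = 33.0 + 4.0 * np.exp(-((x - 0.025) / 0.002) ** 2)

# "Measured" data generated with a known diffusivity, then recovered.
measured = midpoint_history(1.4e-7, T0)             # ~soft-tissue value, m^2/s
fit = least_squares(lambda p: midpoint_history(p[0] * 1e-7, T0) - measured,
                    x0=[1.0], bounds=(0.1, 10.0))
print(fit.x[0] * 1e-7)                              # recovers ~1.4e-7
```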
There are a number of applied studies in which it is impossible to determine the initial conditions. The mathematical models of such problems have the form of inverse boundary value problems with unknown initial data.
An important task is to determine the biophysical parameters of biological tissues. For example, in [24] the laws of the propagation of various types of elastic waves in biological tissues in the range of acoustic frequencies
are investigated theoretically and experimentally. The contributions of imaginary and real components of the complex modulus elasticity to the speed of elastic waves is analyzed. It is shown that in soft tissues, low-frequency elastic disturbances propagate mainly as transverse waves.
The method of microwave radiothermometry is based on the measurement of tissue radiation in the ultrahigh frequency range. An important role is played by methods for assessing the emissivity characteristics of biological tissues at microwave frequencies [9]. Detailed mechanisms of heat transfer in the tissues of living creatures and the thermal response to cauterization were analyzed by [14] using the theory of biothermal transfer. The study was conducted experimentally using an infrared camera and thermocouples.
The article [6] presented a numerical simulation of thermal processes in biological tissues under the influence of focused ultrasound waves. The temperature distribution was calculated using an inhomogeneous heat equation with a relaxation term, and the acoustic field was specified as a focused Gaussian beam. It was shown that an increase in intensity and a decrease in the exposure time at a constant dose of radiation, characteristic of therapeutic conditions, contribute to a significant localization and efficiency of the heating process.
The article [12] showed the possibility of creating devices on the Arduino Uno platform for measuring the temperature and electrical impedance of affected areas of biological tissues. The creation of a device for static and dynamic research on soft biological tissues (in particular, AAA tissue) is described by [1]. In that paper, the preservation of the viability of the specimen during the experiment and the acquisition of experimental data used to model aneurysm stability and deformation are considered. The authors assessed the technical characteristics of the device and obtained repeatable results. The device allowed the authors to test not only arterial tissue, but also any soft biological tissues or artificial materials for medical applications under static and dynamic conditions.
One of the urgent problems in medicine is to improve the quality of early diagnosis of breast cancer. Tumors should be detected while their diameter is still no more than 5-7 mm for the diagnosis to be timely. However, according to statistics, the average size of newly detected tumors is significantly larger (1.34 cm), and the frequency of detection of tumors up to 1 cm in diameter is 10-20 %. Traditionally used methods do not allow us to detect a tumor at an early stage. The global trend of modern mammography is the search for acceptable non-invasive methods for conducting preventive examinations. To determine the effect of cancer on the background temperature, computational experiments based on the solution of the heat equation for biological tissues are traditionally carried out. A numerical assessment of the thermal behavior of two types of breast cancer, ductal carcinoma and invasive ductal carcinoma, was performed by [5]. Their analysis was based on a two-dimensional geometry characterizing the anatomy of the breast. The thermal behavior was obtained using a numerical solution of the biothermal equation. The design and experimental characteristics of a miniature radiothermometry system for measuring the average volumetric temperature of tissue sites located at a depth of 5 cm in the body are presented by [31]. Their results show that this device allows deep temperatures to be assessed accurately and monitored over long periods.
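The biothermal equation referred to here is, in its standard Pennes form (not written out in this excerpt):

\[\rho c\,\frac{\partial T}{\partial t}=\nabla\cdot\big(k\,\nabla T\big)+\rho_{b}c_{b}\,\omega_{b}\,(T_{a}-T)+Q_{met}+Q_{ext},\]

where \(\rho\), \(c\), and \(k\) are the density, specific heat, and thermal conductivity of the tissue; \(\rho_{b}\), \(c_{b}\), and \(\omega_{b}\) are the density, specific heat, and perfusion rate of blood; \(T_{a}\) is the arterial blood temperature; \(Q_{met}\) is the metabolic heat generation, which is elevated in a malignant tumor; and \(Q_{ext}\) is external heating.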
An important indicator of the state of human tissues is temperature, especially at a considerable depth from the surface. It is possible to obtain additional and extremely important information about the condition of internal tissues and skin integuments by measuring the intrinsic radiation of biological tissues in the microwave and infrared wavelength ranges. This is the basis for the diagnosis of diseases using the method of microwave radiothermometry. The method of microwave thermometry was first described by [2]. It was later proven that the heat release of a tumor is directly proportional to its growth rate [7], [8]. Therefore, microwave radiothermometry has a unique ability to detect primarily rapidly growing tumors.
A new flexible hexagonal patch antenna with a substrate operating in the industrial, scientific and medical bands was presented by [29]. A two-step methodology was recommended. It involves the construction of a hexagonal patch antenna, with and without a hexagonal slot into which the substrate is inserted; the antenna is then placed on the skin of the chest of a human body model. Simulated antenna results reveal the location of a tumor present in the skin and mammary gland; this was achieved by observing changes in the density of the skin and mammary gland with and without a tumor.
The RTM method is known for its simplicity and the possibility of wide application without medical restrictions, in contrast to various complex and refined approaches, such as highly selective screening of circulating tumor cells using microfluidic devices [30], magnetic-acoustic electric tomography [37], genomic testing [22], and the development of fluorescent oligonucleotide probes for the specific detection of a cancer marker [23]. The problem of choosing the most appropriate methods for the diagnostics of breast cancer is actively discussed
in the scientific literature, e.g. [36]. This discussion takes into account the known shortcomings of traditional mammography. Various methods of three-dimensional imaging of the mammary gland were discussed by [33]. Machine learning methods are increasingly used to diagnose medical images, including those for breast cancer [4]. Active application of the method of microwave radiothermometry in Russia began with the development of the microwave radiothermometer RTM-01-RES in 1997. The article [35] presented a systematic analysis of the data available in the literature on the role of microwave thermometry in risk assessment, in the diagnosis of breast pathology, and in assessing the effect of neoadjuvant therapy in the treatment of breast cancer. Various aspects of the application of microwave thermometry in breast diseases were described, including the diagnostic potential of the method and its importance in differentiating hyperplasia and benign and malignant diseases. Studies have also shown the prognostic role of microwave thermometry and its possible application for assessing the effect of preoperative chemotherapy for locally advanced breast cancer.
Important advantages of the microwave radiothermometry method are its ability to detect diseases in a timely manner and its absolute harmlessness, both for the patient and for medical personnel [27]. The use of microwave thermometry for the diagnosis of breast cancer was investigated by [16].
It is worth noting that breast cancer is not a problem specific to women. Statistics for this disease in women as a whole are not improving [28], and the disease is also observed in men [13], [25]. The possibility of using microwave imaging for continuous monitoring of the patient is being considered. The use of microwave radiothermometry to diagnose and monitor the progression of cerebral stroke, to counteract its uncontrolled growth operatively, and to support clinical decision-making is discussed by [26]. Clinical studies show that it is possible to monitor the thermal dynamics of biological tissues for several hours. Simulated and experimental results show that a radiometric sensor operating at 1.1-1.6 GHz with a diameter of 2.5 cm is a suitable tool for non-invasive monitoring of brain temperature [21]. This demonstrates the wide scope of this diagnostic method. In addition to diagnostic purposes, microwave radiothermometry is used to monitor the conditions of laboratory tests. For example, thermal denaturation of albumin was monitored by microwave radiometry in [11].
Breast cancer modeling for contact thermography and inflammation modeling for the thermo-optical indicator are presented by [3].
A new technique based on the use of electrical impedance to localize preclinical carcinoma emulators in mammary gland agar phantoms is described in [10]. The main idea of the proposed positioning algorithm, called the circular anomaly tracking algorithm, is to find the breast agar region defined by straight lines connecting pairs of electrodes that have the minimum difference of a certain normalized impedance along the measurement sweep. This difference was obtained relative to a breast agar phantom without a carcinoma emulator. The proposed technique was evaluated using seven experimental agar models, six of which had carcinoma emulators with different locations and electrical conductivities. The authors achieved high detection rates for the carcinoma emulators.
A separate and independent task is the construction of 3D models of the internal and general structure of biological tissues. It must be pointed out, however, that such 3D models are also important for various medical tasks, including preoperative planning and evaluation of oncoplastic surgery [34], cosmetic surgery [15], and the establishment of safety criteria for MRI research [17].
A key role in the radiothermometric method is played by the possibility of increasing its accuracy by improving the design of the measuring system. The use of receiving antennas with diameters of 2.5 and 7 cm in the 1.1-1.6 GHz range allows clinically significant changes in deep temperatures to be tracked for various therapeutic applications [32].
The method of microwave radiothermometry is used not only for the diagnosis of paired organs; it is also used in neurology. In any pathology of the central and peripheral nervous systems, a universal reaction occurs that leads to temperature changes of metabolic, vascular and regulatory genesis. Sustained temperature changes often precede the clinical manifestations of a pathological process and can therefore be a factor in its early diagnosis and in monitoring its dynamics.
The main objective of the study is to create a methodology based on the initial temperature distribution. This methodology will allow individual computer models of patients to be created in the future.
## 2 Formulation of the problem
### Modeling of thermal processes in biological tissues
The mathematical model is based on three-dimensional non-stationary partial differential equations determining the dynamics of heat. The temperature distribution \(T(\vec{r},t)\) depends on the thermal conductivity coefficient \(\delta(\vec{r})\) and on heat sources: metabolic processes \(Q_{met}\), blood flow, cancer formations \(Q_{car}\) and radiative cooling \(Q_{rad}\). The temperature dynamics is determined by the biothermal equation [18]
\[\rho(\vec{r})c_{p}(\vec{r})\frac{\partial T(\vec{r},t)}{\partial t }=\nabla(\delta(\vec{r})\nabla T(\vec{r},t))+Q_{bl}(\vec{r},t)+\] \[+Q_{met}(\vec{r},t)+Q_{rad}(\vec{r},t), \tag{1}\]
where \(\rho\) is the density of the substance, \(c_{p}\) is its specific heat, and \(\nabla=\{\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\}\).
A boundary condition of the third kind is used in our analysis. In this case, convective heat transfer between the surface of the body and the environment with a constant heat flow is determined as:
\[q=-\delta\frac{\partial T}{\partial\vec{n}}=\alpha(T-T_{air}), \tag{2}\]
where \(q\) is the specific heat flux, \(\vec{n}\) is the normal vector, \(\alpha\) is the heat transfer coefficient, and \(T_{air}\) is the ambient temperature.
The intensity of heating due to blood flow is controlled by the difference between the tissue temperature \(T\) and the blood temperature \(T_{bl}\), and by the specific heat capacity of blood \(c_{p,b}\):
\[Q_{bl}=-\rho\rho_{bl}c_{p,b}\omega_{bl}(T-T_{bl}), \tag{3}\]
where \(\omega_{bl}\) is the blood flow intensity in the heated region, the values of which can lie within very wide limits, \(\omega_{bl}=4\cdot 10^{-7}-2\cdot 10^{-5}\) m\({}^{3}\)/(kg\(\cdot\) s).
Among other heat sources we can mention heat generation in tissues as a result of vital processes, blood flow (the most important source of thermal energy), and radiative re-emission. Sources due to cancer cells should also be taken into account. The specific energy density is determined by the intensity of biochemical processes in tissues; its typical values are 4000-5000 W/m\({}^{3}\). The temperature in biological tissues changes significantly in the presence of tumors.
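To make the discretization of equation (1) concrete, the sketch below advances a one-dimensional analogue with an explicit finite-difference scheme, applying the perfusion source (3) in the interior and the third-kind boundary condition (2) at the surface. This is only a minimal illustration, not the solver used in the study: the grid, the time step and all tissue parameters (skin-like density, heat capacity and conductivity, perfusion rate, metabolic source, ambient conditions) are assumed values chosen to keep the example self-contained and stable.

```c
/* Minimal 1-D explicit finite-difference sketch of the bioheat
 * equation (1) with the perfusion source (3) and the convective
 * boundary condition (2).  All parameter values are illustrative
 * assumptions, not the values used in the paper. */
#include <stdio.h>

#define N 101                       /* grid nodes over 5 cm of tissue */

int main(void) {
    const double L = 0.05, dx = L / (N - 1), dt = 0.01;      /* m, s */
    const double rho = 1050.0, cp = 3600.0, delta = 0.45;    /* skin-like */
    const double rho_bl = 1060.0, cp_bl = 3770.0, w_bl = 5e-6;
    const double T_bl = 37.0, T_air = 22.0, alpha = 10.0, Q_met = 4500.0;
    double T[N], Tn[N];
    for (int i = 0; i < N; ++i) T[i] = 37.0;

    for (int step = 0; step < 60000; ++step) {       /* 600 s of heat flow */
        for (int i = 1; i < N - 1; ++i) {
            double lap  = (T[i-1] - 2.0*T[i] + T[i+1]) / (dx*dx);
            double Q_bl = rho*rho_bl*cp_bl*w_bl*(T_bl - T[i]);   /* eq (3) */
            Tn[i] = T[i] + dt*(delta*lap + Q_bl + Q_met)/(rho*cp);
        }
        Tn[0] = Tn[1];                        /* insulated inner boundary */
        /* Robin condition (2): -delta dT/dn = alpha (T - T_air) */
        Tn[N-1] = (delta/dx*Tn[N-2] + alpha*T_air) / (delta/dx + alpha);
        for (int i = 0; i < N; ++i) T[i] = Tn[i];
    }
    for (int i = 0; i < N; i += 10)
        printf("x = %.3f m  T = %.2f C\n", i*dx, T[i]);
    return 0;
}
```

The explicit scheme was chosen only for brevity; it is conditionally stable (roughly \(dt\leq\rho c_{p}\,dx^{2}/2\delta\)), so for fine three-dimensional grids an implicit scheme is preferable.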
Earlier, numerical experiments were carried out to reveal the dependence of the temperature dynamics on various parameters of biological tissue [20]. It was shown that the temperature dynamics depends strongly on the parameters of the biological components.
An important problem in the method of microwave thermometry is the determination of the brightness temperature, which differs from the thermodynamic temperature. This temperature is measured using microwave antennas. An antenna with a frequency of several GHz allows us to measure the thermal radiation from biological tissues in a certain frequency range.
The brightness temperature is inferred from the following equation:
\[T_{B}(\vec{r})=\int\limits_{\Delta f}\left\{\left|S_{11}(f) \right|^{2}T_{REC}+\left[1-\left|S_{11}(f)\right|^{2}\right]\times\right.\] \[\left.\times\left(\int_{V_{0}}T(\vec{r})\,\frac{P_{d}(\vec{r},f)} {\int_{V_{0}}P_{d}(\vec{r},f)\,dV}\,dV+T_{EMI}\right)\right\}df\,, \tag{4}\]
where \(P_{d}=\frac{1}{2}\,\sigma(\vec{r},f)\cdot\left|\vec{E}(\vec{r},f)\right|^{2}\) is the electromagnetic field power density and \(\vec{E}\) is the electric field vector. The values \(T_{EMI}\) and \(T_{REC}\) characterize the electromagnetic interference when measuring with a radiometer [35]. The coefficient \(S_{11}\) determines the interaction between the antenna and the biological tissue. The integration is carried out over the entire volume of the biotissue (\(V_{0}\)).
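A direct way to read equation (4) is as a double weighted average: over the tissue volume with weight \(P_{d}\), and over the antenna band with the mixing set by \(|S_{11}|^{2}\). The sketch below evaluates a discretized version of this expression on synthetic data; the Gaussian \(|S_{11}(f)|\) profile, the depth-decaying power density, the linear temperature profile and the normalization of the outer integral by the bandwidth are all illustrative assumptions, not quantities from the paper.

```c
/* Discretized evaluation of the brightness temperature (4) on a
 * synthetic frequency grid and tissue volume.  S11, the power
 * density P_d and the temperature profile are placeholder
 * assumptions chosen only to make the sketch self-contained. */
#include <stdio.h>
#include <math.h>

#define NF 51      /* frequency samples over 1.1-1.6 GHz */
#define NV 1000    /* volume cells along depth */

int main(void) {
    const double f0 = 1.1e9, f1 = 1.6e9, df = (f1 - f0) / (NF - 1);
    const double T_REC = 300.0, T_EMI = 0.5;    /* kelvin-scale terms */
    double T_B = 0.0;

    for (int k = 0; k < NF; ++k) {
        double f   = f0 + k * df;
        double s11 = 0.2 * exp(-pow((f - 1.35e9) / 2.0e8, 2)); /* |S11| */
        double num = 0.0, den = 0.0;
        for (int i = 0; i < NV; ++i) {
            double T  = 310.0 - 2.0 * i / (NV - 1);  /* 37 -> 35 C, in K */
            double Pd = exp(-4.0 * i / (NV - 1));    /* decays with depth */
            num += T * Pd;                           /* volume integrals  */
            den += Pd;
        }
        T_B += (s11*s11*T_REC + (1.0 - s11*s11)*(num/den + T_EMI)) * df;
    }
    T_B /= (f1 - f0);    /* normalise by the bandwidth (assumption) */
    printf("T_B = %.3f K\n", T_B);
    return 0;
}
```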
To construct a stationary electric field distribution, it is convenient to solve the time-dependent Maxwell equations and obtain the stationary state as their long-time limit:
\[\frac{\partial\vec{B}}{\partial t}+rot(\vec{E})=0\,,\quad\frac{ \partial\vec{D}}{\partial t}-rot(\vec{H})=0\,,\quad\vec{B}=\mu\vec{H}\,,\] \[\vec{D}=\varepsilon\vec{E}\,, \tag{5}\]
where \(\vec{B}\) is magnetic induction, \(\vec{E}\) is electric field strength, \(\vec{D}\) is electric induction, \(\vec{H}\) is magnetic field strength, \(\varepsilon(\vec{r})\) is the dielectric constant, \(\mu(\vec{r})\) is magnetic permeability.
### An object to be simulated and its geometry
The female breast is an organ with a complex organization, the structure of which creates optimal conditions for fulfilling its main physiological functions (milk production and feeding of a child).
The breast consists of a layer of skin under which the mammary gland (glandular tissue) is located; it is in this organ that milk is produced. The mammary gland is attached by connective tissue to the muscles of the chest (see Figure 1). It consists of 15-20 glandular lobules, adipose and connective tissue. The sizes of all components vary from one person to another.
The mammary gland is located on the pectoralis major muscle. It is covered with very thin skin, which easily moves over its base. Under the skin is a fat layer, the thickness of which varies from one woman to another. Under the layer of fat is the body of the mammary gland, covered with a connective tissue capsule by which it is suspended from the collarbone.
Figure 1: Schematic structure of the breast [19]
Figure 2: The projection of the model’s geometry on a 2D plane
Most of the breast is filled with fat. The amount of fat in female breasts varies significantly: in some women the breast consists almost exclusively of fat, while in others glandular tissue occupies more space than fat. Figure 2 shows the simplified model geometry.
### A method to measure the internal temperature of the mammary glands
The currently used RTM-01-RES complex allows the functional state of tissues to be evaluated by measuring their internal temperature (RTM) at a depth of up to 5 cm and the skin temperature (IR). The patient is examined in a horizontal position, naked to the waist, with hands under the head. Fifteen minutes before the measurement, the examined area is freed from clothing so that the entire measurement area acclimatizes to room temperature. Because of radio noise in the atmosphere, and to exclude the influence of the antenna's position in space on the measurement results, it is recommended to orient the patient in one direction during examinations. Thus, when measuring the thermal emission of symmetrical points, the patient changes position by turning on a swivel chair. The receiving antenna is pressed against the skin surface above the temperature measurement area without air gaps. After stabilization of the parameters, which the software monitors and confirms, the measured temperature is entered into the database.
The examination begins with measuring the temperatures at the reference points T1 and T2. The first point is located in the center of the chest, immediately below and between the mammary glands; the second is located directly under the xiphoid process. Further measurements are carried out at 10 points on each gland and in the axillary region (see Figure 3). In accordance with the methodology, measurements should be carried out at an ambient temperature of 20 to 25 \({}^{o}\)C.
### Description of the problem
The construction of simulation models of the mammary glands will allow us to evaluate temperature anomalies and detect structural changes in internal tissues. Developing such models requires solving the inverse problem of thermal conductivity, which would enable us to recover the structure of the mammary gland from the known distribution of deep and skin temperatures.
As input parameters, we will use the temperature data vector \(\vec{T}=\{T_{0},...,T_{8}\}\) for one mammary gland (the right gland was chosen), obtained in a computational experiment in the microwave and infrared ranges. At the output, it is necessary to obtain the thermal conductivity coefficients of the various biological tissues, with the other parameters being fixed. Figure 4 shows the procedure for finding this coefficient.
Figure 3: The scheme for measuring the temperature of the mammary glands [38]
## 3 Numerical solution of the inverse problem using optimization methods
### Description of method
The process is assumed to be transient. At the boundary, a constant ambient temperature is set; the value of this temperature, \(T_{air}\), is taken from experimental data. The predicted temperatures \(\vec{T}=\{T_{0},...,T_{8}\}\) and the measured temperatures are compared by minimizing the quadratic residual functional
\[A=\sum_{i}(T_{i}-T_{i}^{exp})^{2}\rightarrow\min_{\vec{\delta}}. \tag{6}\]
The objective function is the sum of the squares of the differences between the measured \(T_{i}^{exp}\) and the calculated temperature values \(T_{i}\). The control parameter is the thermal conductivity coefficient \(\delta\). It is required to find a vector \(\vec{\delta}\) that minimizes the residual \(A\). Due to the ill-posedness of the problem, we minimize the Tikhonov functional. Moreover, the functional is positive definite and therefore has a single minimum.
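Because the functional (6) is unimodal in the simplest one-parameter setting, a derivative-free line search is enough to illustrate the inverse step. The sketch below recovers a conductivity value with a golden-section search; the helper names `forward` and `residual` are hypothetical, the forward model is a deliberately crude steady-state slab standing in for the full bioheat solver, and the synthetic "measurement" is generated from the true value, so every number here is an assumption for illustration only.

```c
/* Sketch of the inverse step: minimise the residual (6) over a single
 * thermal-conductivity value with a golden-section search.  The
 * forward model is a toy steady-state slab (tissue at T_core cooled
 * by air through a Robin boundary), not the paper's solver. */
#include <stdio.h>
#include <math.h>

static double forward(double delta) {
    /* surface temperature of a slab of thickness L, eq (2) balance */
    const double L = 0.05, alpha = 10.0, T_core = 37.0, T_air = 22.0;
    double k = delta / L;
    return (k * T_core + alpha * T_air) / (k + alpha);
}

static double residual(double delta, double T_exp) {
    double d = forward(delta) - T_exp;            /* eq (6), one point */
    return d * d;
}

int main(void) {
    const double T_exp = forward(0.45);  /* synthetic "measurement" */
    double a = 0.1, b = 1.0, g = (sqrt(5.0) - 1.0) / 2.0;
    double c = b - g * (b - a), d = a + g * (b - a);
    while (b - a > 1e-6) {               /* golden-section search */
        if (residual(c, T_exp) < residual(d, T_exp))
             { b = d; d = c; c = b - g * (b - a); }
        else { a = c; c = d; d = a + g * (b - a); }
    }
    printf("recovered delta = %.4f W/(m C)\n", 0.5 * (a + b));
    return 0;
}
```

For the four-component vector \(\vec{\delta}\) the same one-dimensional search can be applied coordinate-wise, or replaced by a gradient-free method such as Nelder-Mead, with the full solver playing the role of `forward`.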
### Computational Experiment
The objective function is set as the vector of temperatures measured in the microwave range \(\vec{T}_{RTM}=\{34.2,33.7,33.6,33.6,33.7,33.7,33.8,33.6,33.6\}\) and in the infrared range \(\vec{T}_{IK}=\{33.8,33.3,33.2,33.2,33.2,33.3,33.4,33.3,32\}\) (see Figure 5). The temperatures are given in \({}^{o}\)C in all cases.
After performing \(\sim 10^{8}\) iterations, the following thermal conductivity vector was obtained: \(\vec{\delta}^{*}=\{0.42,0.24,0.41,0.44\}\). The calculated and measured values of the thermal conductivity coefficients differ by about \(\sim 10^{-2}\) (Table 1).
Given that the structure is highly inhomogeneous, this result fully satisfies the requirements of the task. At the next stage we conducted a direct computational experiment to verify the results, using
Figure 4: The procedure for finding the coefficient of thermal conductivity
Figure 5: Initial temperature distribution
the resulting vector \(\vec{\delta}^{*}\) as input parameters. The resulting temperature distribution was constructed along the axis of axial symmetry (see Figure 6).
We carried out a similar procedure, but fixed the thermal conductivity coefficients of the biological tissues and made the specific heat \(c_{p}\) the control parameter.
Table 2 shows that the error in determining the specific heat is \(\sim 0.1\), which is an acceptable result for the numerical experiment.
The experimental results indicate that the proposed method gives fairly good results and provides the necessary accuracy of the solution as a first approximation.
### Parallel technologies
The process of solving the problem is resource-intensive, so it makes sense to use parallel computing technologies to reduce the calculation time. The easiest way to parallelize the code is to use the OpenMP standard for multithreaded software on shared-memory systems, since in this case specialized compute servers are not required and the application remains fairly universal.
Tasks performed by threads in parallel, as well as the data required to perform these tasks, are described
\begin{table}
\begin{tabular}{|l|c|c|} \hline \multirow{2}{*}{Biotissue} & Calculated value & Exact value \\ & W/(m \({}^{\circ}\)C) & W/(m \({}^{\circ}\)C) \\ \hline \(\delta_{skin}\) & 0.42 & 0.45 \\ \hline \(\delta_{fat}\) & 0.24 & 0.2 \\ \hline \(\delta_{mam.gl.}\) & 0.41 & 0.4 \\ \hline \(\delta_{nipple}\) & 0.44 & 0.4 \\ \hline \end{tabular}
\end{table}
Table 1: Results of a numerical experiment with control parameter \(\delta\)
Figure 6: Verification of the results of a computational experiment (dashed line) with an exact solution (solid line)
using special preprocessor directives (pragmas) of the corresponding language. The number of created threads can be controlled both by the program itself, by calling library procedures, and from the outside, using environment variables. A comparison of the results shows that the OpenMP version gives an average speedup of 1.8 times over the serial version (see Figure 7).
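As a minimal illustration of this parallelization strategy, the sketch below distributes the grid sweep of a solver over threads with a single OpenMP pragma; the thread count is taken from the `OMP_NUM_THREADS` environment variable, matching the text. The loop body and array size are placeholders, not the actual solver of this study.

```c
/* OpenMP sketch: parallelise the inner grid sweep with one pragma.
 * Compile with e.g. `gcc -fopenmp`; set threads externally with
 * `export OMP_NUM_THREADS=8`.  Sizes are illustrative only. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double T[N], Tn[N];              /* static: kept off the stack */
    for (int i = 0; i < N; ++i) T[i] = 37.0;

    double t0 = omp_get_wtime();
    for (int step = 0; step < 100; ++step) {
        #pragma omp parallel for            /* sweep split across threads */
        for (int i = 1; i < N - 1; ++i)
            Tn[i] = T[i] + 0.25 * (T[i-1] - 2.0*T[i] + T[i+1]);
        #pragma omp parallel for
        for (int i = 1; i < N - 1; ++i)
            T[i] = Tn[i];
    }
    printf("threads = %d, time = %.3f s\n",
           omp_get_max_threads(), omp_get_wtime() - t0);
    return 0;
}
```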
## 4 Discussion of the results
Matching the results of computational experiments with laboratory experiments allows us to obtain more accurate physico-chemical characteristics of the various tissues and components of the mammary gland. This is expected to increase the efficiency of timely tumor diagnosis.
A serious problem for the diagnostics and modeling of this phenomenon lies in the wide range of basic parameters characterizing tissues in various organisms (thermal conductivity, transfer and scattering coefficients, electrical conductivity, dielectric permittivity, heat capacity, blood viscosity, blood flow parameters, heat transfer of the capillary system). The situation is significantly complicated by the strong heterogeneity of the mammary glands, with characteristic spatial scales of less than 1 cm. The spread in the characteristics of malignant tumor tissue can reach an order of magnitude; for example, the strong dependence of a tumor's heat release on its doubling time is well known. It is possible to use cluster computing to solve large distributed-memory models. To speed up the work, a cluster implementation can combine shared-memory multi-core computing within each node with a distributed-memory model based on message passing (MPI, Message Passing Interface). This approach, known as hybrid parallelism, speeds up the work considerably due to the efficient use of computing resources.
## 5 Conclusion
A method for modeling the dynamics of thermal processes in the biological tissues of the mammary gland has been developed. This method provides the accuracy, stability and high convergence rate required by personalized medicine. The inverse problem of reconstructing the thermal conductivity coefficient in the equation of heat dynamics in biological tissues from the final temperature distribution has been solved. This solution is used in the mathematical model for determining the temperature distribution inside the mammary gland. A numerical example of solving the inverse problem has been studied in detail.
The lack of a detailed qualitative and quantitative description of the behavior of temperature fields in various human organs, both in the presence of pathological processes and in their absence, significantly complicates the development of effective methods of medical diagnosis. Our approach is a first step towards solving this problem. The model of the mammary gland was quasi-one-dimensional, and minor details of the mammary gland were not taken into account.
Figure 7: Speedup of the parallel version relative to the serial version as a function of the number of iterations.
## Acknowledgement
The reported study was funded by RFBR according to the research project No. 19-37-90142.
|
2302.08414 | Development of Low-Threshold Detectors for Low-Mass Dark Matter Searches
Using an N-Type Germanium Detector at 5.2 K | We investigated charge transport in an n-type germanium detector at 5.2 K to
explore new technology for enhancing low-mass dark matter detection
sensitivity. Calculations of dipole and cluster dipole state binding energies
and electric field-dependent trapping cross-sections are critical to developing
low-threshold detectors. The detector operates in two modes: depleting at 77K
before cooling, or directly cooling to 5.2 K and applying different bias
voltages. Results indicated lower binding energy of charge states in the second
mode, at zero field and under an electric field, suggesting different charge
states formed under different operating modes. Measured cluster dipole and
dipole state binding energies at zero field were 7.884$\pm$0.644 meV and
8.369$\pm$0.748 meV, respectively, signifying high low-threshold potential for
low-mass dark matter searches in the future. | Sanjay Bhattarai, Dongming Mei, Rajendra Panth, Mathbar Raut, Kyler Kooi, Hao Mei, Guojian Wang | 2023-02-16T16:43:58Z | http://arxiv.org/abs/2302.08414v1 | Development of Low-Threshold Detectors for Low-Mass Dark Matter Searches Using an N-Type Germanium Detector at 5.2 K
###### Abstract
We investigated charge transport in an n-type germanium detector at 5.2 K to explore new technology for enhancing low-mass dark matter detection sensitivity. Calculations of dipole and cluster dipole state binding energies and electric field-dependent trapping cross-sections are critical to developing low-threshold detectors. The detector operates in two modes: depleting at 77K before cooling, or directly cooling to 5.2 K and applying different bias voltages. Results indicated lower binding energy of charge states in the second mode, at zero field and under an electric field, suggesting different charge states formed under different operating modes. Measured cluster dipole and dipole state binding energies at zero field were 7.884\(\pm\)0.644 meV and 8.369\(\pm\)0.748 meV, respectively, signifying high low-threshold potential for low-mass dark matter searches in the future.
## I Introduction
The interaction between dark matter (DM) and ordinary matter is limited to weak elastic scattering processes, resulting in only a small energy deposition from nuclear or electron recoil [1; 2; 3]. This highlights the need for a detector with a very low energy threshold to detect DM [4]. The LZ experiment has pushed the sensitivity for weakly interacting massive particles (WIMPs) with a mass greater than 10 GeV/c\({}^{2}\) to the point where the neutrino-induced background becomes the limiting factor [5]. However, the recent emergence of low-mass DM in the MeV range has generated excitement as a DM candidate, although current experiments cannot detect it due to its small mass. The detection of MeV-scale DM requires new detectors with thresholds as low as sub-eV, since both electronic and nuclear recoils from MeV-scale DM range from sub-eV to 100 eV [6]. Conventional detector techniques cannot detect this low-mass DM.
Germanium (Ge) detectors have the lowest energy threshold among current detector technologies, making them ideal for low-mass DM searches [2; 7; 8; 9]. The band gap of Ge at 77 K is 0.7 eV and the average energy required to generate an electron-hole pair in Ge is about 3 eV [10]. Thus, a Ge detector can provide a very low energy threshold. Furthermore, proper doping of the Ge detector with impurities can expand the parameter space for low-mass DM searches even further. Shallow-level impurities in Ge detectors have binding energies of about 0.01 eV and can form dipole states and cluster dipole states when operated at temperatures below 10 K [4; 11; 12]. These dipole states and cluster dipole states have even lower binding energies than the impurities themselves, providing a potential avenue for detecting low-mass DM. Although the binding energies of impurities in Ge are well understood [13; 14], little is known about the binding energy of the dipole states and cluster dipole states near helium temperature.
At low temperatures near liquid helium, residual impurities in germanium freeze out from the conduction or valence band into localized states, forming electric dipoles (\(D^{0^{\ast}}\) for donors and \(A^{0^{\ast}}\) for acceptors) or neutral states (\(D^{0}\) and \(A^{0}\)). These dipole states have the ability to trap charge carriers and form cluster dipole states (\(D^{+^{\ast}}\) and \(D^{-^{\ast}}\) for donors, and \(A^{+^{\ast}}\) and \(A^{-^{\ast}}\) for acceptors) [12]. This phenomenon has been studied in detail in a previous work by Mei et al. [12]. When an alpha particle (\(\alpha\)) from an \({}^{241}\)Am decay is sent to a Ge detector, it deposits energy and creates electron-hole pairs within a 10 \(\mu\)m range from the surface of the detector [15; 16]. By applying a positive or negative bias voltage to the bottom of the detector and operating it at a cryogenic temperature of approximately 4 K, only one type of charge carrier is drifted through the detector. These drifted charge carriers undergo a dynamic process of elastic scattering, trapping, and de-trapping, allowing us to study the binding energy of the formed dipole states and cluster dipole states. In this study, an n-type Ge detector is operated in two different modes, applying different bias voltages and cooling the detector to cryogenic temperature.
### Mode 1
In this mode, an n-type planar detector is first cooled to 77 K and a bias voltage is applied, gradually increasing until the detector is fully depleted. The bias is then increased by an additional 600 volts to become the operational voltage. The detector is then cooled down to 5.2 K while still under the applied operational voltage. At 77 K, the depletion process causes all the free charge carriers to be swept away, leaving only the space charge states, \(D^{+}\), behind. Upon cooling to 5.2 K, a charge trapping process occurs, resulting in the formation of dipole states as electrons drift across the detector [12]. Continued drift of electrons across the detector can result in de-trapping of charge carriers through impact ionization
of the dipole states. The key charge-trapping and de-trapping processes are described below:
\[e^{-}+D^{+}\to D^{0^{*}},e^{-}+D^{0^{*}}\to 2e^{-}+D^{+}. \tag{1}\]
In this mode, the operation of the n-type planar detector begins with the formation of dipole states via charge trapping as a result of the Coulomb force between the space charge states and the drifting electrons. The second process is the release of trapped charge through impact ionization of the dipole states, known as charge de-trapping. By examining the time-dependent behavior of this de-trapping process, we are able to determine the binding energy of the dipole states.
### Mode 2
In this mode of operation, the n-type planar Ge detector is cooled directly to 5.2 K without any applied bias voltage. Once cooled, the detector is then biased to the desired voltage level. At these low temperatures, impurities in the Ge crystal freeze out from the conduction or valence band to form localized states that result in the creation of dipole states. As it is an n-type detector, the majority of these dipole states are \(D^{0^{*}}\)[12]. When an \(\alpha\) source is placed near the detector, the resulting \(\alpha\)-particle-induced electron-hole pairs are created on the surface of the detector. Upon applying a positive bias voltage to the bottom of the detector, the electrons created by the \(\alpha\) particles are drifted across the detector, leading to the following processes occurring within the detector:
\[e^{-}+D^{0^{*}}\to D^{-^{*}},e^{-}+D^{-^{*}}\to 2e^{-}+D^{0^{*}}. \tag{2}\]
The first process in this mode is a trapping of charges by the Coulomb forces exerted by the dipole states on the drifted electrons, resulting in the formation of cluster dipole states. The second process is a de-trapping of charges through impact ionization of the cluster dipole states. The detector experiences a dynamic process of charge trapping, transport, and creation. The study of the time-dependent de-trapping of charges through the impact ionization of cluster dipole states helps us determine their binding energy.
When comparing the two operational modes, it can be noted that in Mode 2, the dipole states are formed at 5.2 K without any applied bias voltage. These dipole states rapidly trap charges as soon as the electrons are drifted across the detector, resulting in a shorter trapping time and lower binding energy. In contrast, in Mode 1, the dipole states are formed in the space charge region when electrons are drifted across the detector with an applied bias voltage. Therefore, it is expected that the trapping time will be longer and the binding energy of the dipole states will be higher than that of the cluster dipoles.
### Physics Model
As mentioned earlier, the formation of dipole states and cluster dipole states in the detector depends on the operational mode. In Mode 2, when the n-type Ge detector is cooled down to 5.2 K, the majority impurity atoms freeze out from the conduction band and form electric dipole states, \(D^{0^{*}}\). If a positive bias voltage is applied to the bottom of the detector, electrons produced by the \(\alpha\) particles from the \({}^{241}\)Am source, which is located above the detector within the cryostat, can be drifted across the detector. This drifting of electrons leads to the formation of cluster dipole states, \(\mathrm{D}^{-^{*}}\), through the charge trapping between the dipole states and the drifted electrons. As the bias voltage increases, the charge carriers gain more kinetic energy and begin to emit from the traps, resulting in a decrease in the number of cluster dipole states and an increase in electric dipole states.
In Mode 1, when a positive bias voltage is applied, electrons are drifted across the detector, leading to the formation of dipole states \(\mathrm{D}^{0^{*}}\) through the space charge states \(\mathrm{D}^{+}\). As the bias voltage increases, the drifted electrons gain more kinetic energy and are capable of freeing trapped electrons from the dipole states. In both modes, the emission rate of the charge carriers is time-dependent and reaches a balance when the charge emission and charge trapping are equal. At a sufficient bias voltage, such as around 800 volts, charge trapping becomes negligible and the charge emission also becomes negligible. The emission rate (\(e_{n}\)) of the charge carriers can be mathematically expressed as [17]:
\[e_{n}=\sigma_{trap}v_{th}N_{c}\exp\left(-\frac{E_{B}}{k_{B}T}\right), \tag{3}\]
where \(\sigma_{trap}\) represents the trapping cross-section, \(v_{th}\) is the thermal velocity, \(N_{c}\) = 2.46 \(\times\) 10\({}^{15}\)/cm\({}^{3}\) is the effective density of states of electrons in the conduction band at 5.2 K, \(E_{B}\) is the binding energy of the trapped charge carriers, \(k_{B}\) is the Boltzmann constant, and \(T\) is the temperature of the detector.
By using the experimental data to directly determine \(e_{n}\) and by knowing the values of \(v_{th}\), \(N_{c}\), and \(T\), one can obtain the binding energy of dipole states or cluster dipole states from equation 3, provided the value of the trapping cross-section, \(\sigma_{trap}\), is known. However, determining the value of \(\sigma_{trap}\) requires further calculation, as will be discussed.
The trapping cross-section (\(\sigma_{trap}\)) of the charge carriers is related to the trapping length (\(\lambda_{th}\)) through the following relation:[18; 19]
\[\lambda_{th}=\frac{1}{\left(\frac{N_{A}+N_{D}\pm|N_{A}-N_{D}|}{2}\right)\times\left(\sigma_{trap}\times\frac{v_{tot}}{v_{d}}\right)}, \tag{4}\]
where \(N_{A}\) and \(N_{D}\) represent the p-type and n-type impurities, respectively. \(v_{tot}\) is the total velocity of the drift electrons, and \(v_{d}\) is the drift velocity, which is dependent
on the electric field (\(E\)) and is given by:
\[v_{d}\approx\frac{\mu_{0}E}{1+\mu_{0}E/v_{sat}}, \tag{5}\]
where \(\mu_{0}\) represents the mobility of the charge carrier when the field is zero, and can be expressed as \(\mu_{0}=\mu_{0}(H)/r\). The Hall mobility, \(\mu_{0}(H)\), has standard values of \(36000\text{ cm}^{2}/\text{Vs}\) for electrons and \(42000\text{ cm}^{2}/\text{Vs}\) for holes, while the corresponding values of \(r\) are 0.83 for electrons and 1.03 for holes. The saturation velocity, \(v_{sat}\), can be calculated using the following empirical formula[19]:
\[v_{sat}=\frac{v_{sat}^{300}}{1-A_{v}+A_{v}(T/300)}. \tag{6}\]
The saturation velocity at 300 K, \(v_{sat}^{300}\), for electrons and holes are \(7\times 10^{6}\text{ cm}/\text{s}\) and \(6.3\times 10^{6}\text{ cm}/\text{s}\), respectively. The values of \(A_{v}\) for electrons and holes are 0.55 and 0.61, respectively [20]. Additionally, the charge collection efficiency (\(\epsilon\)) of a planar Ge detector can be related to the trapping length (\(\lambda_{th}\)) through the following formula [19; 21]:
\[\epsilon=\frac{\lambda_{th}}{L}\left(1-\exp\left(-\frac{L}{\lambda_{th}} \right)\right), \tag{7}\]
where \(L=5.5\text{ mm}\) represents the detector thickness.
The determination of the charge collection efficiency (\(\epsilon\)) in a planar Ge detector enables us to calculate the charge trapping cross-section (\(\sigma_{trap}\)) using equation 4. The necessary inputs, such as the net impurity concentration (\(N_{A}+N_{D}\pm|N_{A}-N_{D}|\)), are known from the Hall effect and capacitance-voltage measurements, while the electric field (\(E\)) in the detector can be obtained using the applied bias voltage.
With the calculated values of \(\epsilon\) and the known thickness of the detector (L), we can find \(\lambda_{th}\) from equation 7. The total velocity (\(v_{tot}\)) of the charge carriers is the combination of their thermal velocity (\(v_{th}\)) and the saturation velocity (\(v_{sat}\)). By combining the equations for \(\lambda_{th}\) and \(v_{tot}\), we can determine the electric field-dependent trapping cross-section (\(\sigma_{trap}\)) [19].
In an n-type Ge detector, the emission rate (\(e_{n}\)) of charge carriers from the traps is measured during operation in both Mode 1 and Mode 2. The energy versus time plot is used to determine the emission rate by analyzing the slope of the plot after a given bias voltage has been applied to the detector. By combining this value with equation 3, we can find the binding energy of dipole states and cluster dipole states in the n-type Ge detector at cryogenic temperature.
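The computational chain described above can be sketched numerically: invert equation (7) for the trapping length by bisection, evaluate the drift and saturation velocities from equations (5) and (6), obtain the trapping cross-section from equation (4), and finally solve equation (3) for the binding energy. In the sketch below the charge collection efficiency, the field and the emission rate are illustrative inputs (only the thickness, net impurity concentration, temperature and \(N_{c}\) are taken from the text), the net impurity concentration stands in for the full trap-density term of equation (4), and the thermal velocity uses the free-electron mass, so the output is not a result from the paper.

```c
/* Numerical sketch of the binding-energy chain: eq (7) -> lambda,
 * eqs (5)-(6) -> velocities, eq (4) -> sigma_trap, eq (3) -> E_B.
 * eps, E and e_n below are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double L = 0.55;                /* detector thickness, cm     */
    const double eps = 0.196;             /* charge collection eff.     */
    const double N_imp = 7.02e10;         /* net impurity, 1/cm^3       */
    const double T = 5.2, kB = 8.617e-5;  /* K, eV/K                    */
    const double Nc = 2.46e15;            /* 1/cm^3 at 5.2 K            */
    const double E = 55.0;                /* field, V/cm (toy value)    */
    const double e_n = 1.0e-3;            /* emission rate, 1/s (toy)   */

    /* eq (7): eps = (l/L)(1 - exp(-L/l)); monotone in l -> bisection */
    double lo = 1e-4, hi = 10.0, l = 0.0;
    for (int it = 0; it < 100; ++it) {
        l = 0.5 * (lo + hi);
        double f = (l / L) * (1.0 - exp(-L / l));
        if (f < eps) lo = l; else hi = l;
    }

    /* eqs (5)-(6): drift and saturation velocity for electrons */
    double mu0  = 36000.0 / 0.83;                          /* cm^2/Vs   */
    double vsat = 7.0e6 / (1.0 - 0.55 + 0.55 * T / 300.0);
    double vd   = mu0 * E / (1.0 + mu0 * E / vsat);
    double vth  = sqrt(3.0*kB*T*1.602e-19/9.11e-31)*100.0; /* cm/s,
                                free-electron mass (assumption)         */
    double vtot = sqrt(vd * vd + vth * vth);

    /* eq (4): sigma_trap from the trapping length (N_imp as trap density) */
    double sigma = 1.0 / (N_imp * l * (vtot / vd));

    /* eq (3) solved for the binding energy */
    double EB = kB * T * log(sigma * vth * Nc / e_n);

    printf("lambda = %.3f cm  sigma = %.3e cm^2  E_B = %.2f meV\n",
           l, sigma, EB * 1000.0);
    return 0;
}
```

Feeding in the efficiency measured at each bias voltage would reproduce the field dependence of the cross-section and binding energy discussed in the following sections, up to the simplifications stated above.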
## II Experimental procedure
The USD crystal growth and detector development infrastructure is a state-of-the-art facility equipped with a zone refining process for purifying commercial ingots to a high level of purity suitable for crystal growth using the Czochralski method [22; 23; 24]. This results in high-quality homegrown crystals that are used for the fabrication of n-type (R09-02) detectors in the USD detector fabrication lab [25]. The R09-02 detector has a net impurity concentration of \(7.02\times 10^{10}/cm^{3}\) and dimensions of 11.7 mm \(\times\) 11.5 mm \(\times\) 5.5 mm.
To ensure optimal electrical performance, an amorphous Ge passivation layer of 600 nm was coated on the surface of the Ge crystal as the electrical contact, effectively blocking surface charges[26; 27]. An alpha source (\({}^{241}Am\)) was positioned near the detector inside a cryostat, and the energy deposition of \(\alpha\) particles was measured. This creates localized electron-hole pairs near the top surface of the detector, and the electrons are drifted through the detector by applying a positive bias voltage to the bottom of the detector. The experimental setup for this measurement is illustrated in Figure 1.
This experiment was conducted using two modes of operation. In Mode 1, the R09-02 detector was depleted at 77 K with a depletion voltage of 1200 V and an operational voltage of 1800 V. An alpha source (\({}^{241}Am\)) emitting alpha particles with an energy of 5.3 MeV was positioned above the detector within the cryostat. The energy spectrum was measured for the energy deposition of the 5.3 MeV alpha particles, which was visible as a 3.7 MeV energy peak due to energy loss on the way to the detector's active region. This 3.7 MeV energy deposition served as a reference for the energy deposition of 5.3 MeV alpha particles in the n-type detector without charge trapping, as the detector charge trapping at 77 K with a bias of 1800 volts was negligible. The charge collection efficiency was determined by dividing the measured alpha energy peak by 3.7 MeV for a given bias voltage.
In this mode, the detector was fully depleted at a constant bias voltage of 1800 V as the temperature was decreased to 5.2 K. This allowed for the formation of electric dipole states due to space charge at 5.2 K. The data was collected with a bias voltage applied in descending order
Figure 1: The detector is loaded into a pulse tube refrigerator (PTR), and two temperature sensors mounted above and below the detector are used to determine the temperature of the detector.
from 1800 V to 30 V at 5.2 K, with histograms of energy deposition by alpha particles recorded every 2-3 minutes for 60 minutes at each bias voltage.
In Mode 2, the detector was cooled directly to 5.2 K without any bias voltage applied. Once the temperature reached 5.2 K, a positive bias voltage was gradually applied from the bottom of the detector, causing the electrons created on the surface to be drifted across the detector under the electric field. Energy spectrum measurements were taken at different bias voltages of 30 V, 100 V, 200 V, 300 V, 450 V, 600 V, 1200 V, and 1800 V. Similar to Mode 1, data was taken for 60 minutes at each bias voltage with histograms of energy deposition by alpha particles recorded every 2-3 minutes.
## III Result and Discussion
Figures 2 and 3 demonstrate the energy deposition from 5.3 MeV alpha particles in the n-type detector when it operates under Mode 1 and Mode 2, respectively. The charge collection efficiency of the detector is determined by comparing the mean total energy deposited at 5.2 K with a specific bias voltage to the mean energy deposited at 77 K when the detector was depleted and operated with a bias voltage of 1800 volts. For instance, the mean energy observed at 77 K with a bias voltage of 1800 V was 3.7 MeV, while the mean energy observed at 30 V at 5.2 K was 0.725 MeV. This results in a charge collection efficiency of 19.6% (\(\epsilon=0.725\) MeV/3.7 MeV) in Mode 2. Figure 4 shows the charge collection efficiency as a function of the applied bias voltage when the detector is operated in Modes 1 and 2. The trapping length (\(\lambda_{th}\)) of the charge carriers was then calculated using equation 7, based on the charge collection efficiencies obtained at various bias voltages and the thickness (\(L\)) of the detector (5.5 mm). The calculated values are presented in Figure 5.
The net impurity concentration of the detector was measured to be \(7.02\times 10^{10}/cm^{3}\) and it was operated at a temperature of 5.2 K using the two modes described earlier. These values, along with the other parameters presented in equations 4, 5, 6, and 7, were utilized to calculate the trapping cross-section of the trap centers. The relationship between the trapping cross-section and the applied bias voltage is illustrated in Figure 6.
To determine the charge emission rate described in equation 3, we conducted a measurement of the energy deposition from \(\alpha\) particles as a function of time for a given bias voltage at 5.2 K over a 60-minute interval. We recorded the histogram of the energy deposition every 2-3 minutes within this time frame. The mean value of the energy deposition was determined from the observed \(\alpha\) peak. An example of this measurement is shown in Figure 7, where the energy deposition versus time is plotted for a bias voltage of 200 volts.
As demonstrated in Figure 7, when the bias voltage is applied to the detector, the charge emission rate increases linearly for the first few minutes. This is due to the fact that the de-trapping through impact ionization
Figure 3: The energy deposition of 5.3 MeV \(\alpha\) particles in an n-type detector operating in Mode 2.
Figure 4: The graph of charge collection efficiency (\(\epsilon\)) versus applied electric field (\(E\)) for Detector R-09 in Mode 1 and Mode 2, with errors taken into account. The error in \(\epsilon\) is based on the measurement of the mean energy deposition, while the error in \(E\) is largely influenced by the applied bias voltage. A fitting model, \(\epsilon=p_{0}+p_{1}\exp(-p_{2}E)\), was utilized to curve-fit the data, resulting in the following fitted parameters: \(p_{0}=1.01\pm 0.008\), \(p_{1}=-0.973\pm 0.001\), and \(p_{2}=0.0033\pm 0.0003\) cm/V for Mode 1, and \(p_{0}=1.008\pm 0.008\), \(p_{1}=-0.974\pm 0.001\), and \(p_{2}=0.0027\pm 0.0003\) cm/V for Mode 2, respectively.
Figure 2: The energy deposition of 5.3 MeV \(\alpha\) particles in an n-type detector operating in Mode 1.
of the dipole states or cluster dipole states outpaces the trapping of the charge carriers in the initial minutes at a given voltage. However, once the trapping and de-trapping reach a dynamic equilibrium, the energy deposition becomes constant. The slope of the portion of the plot where the emission of charge carriers is dominant provides the charge-energy emission rate per unit of time, represented as \(e_{n}\) in equation 3. By dividing \(e_{n}\) by the binding energy of the dipole states or cluster dipole states (\(E_{b}\)), the emission rate of electrons can be obtained. These emission rates are then utilized in equation 3 to numerically determine the binding energy for the respective dipole states or cluster dipole states. The calculated binding energies are presented in Table 1.
The binding energy measured by the detector in Mode 1 pertains to the dipole states, whereas Mode 2 provides data on the binding energy of the cluster dipole states. Additionally, the binding energy values obtained at varying bias voltages demonstrate a relationship with the electric field. As shown in Figure 8, the binding energies are plotted as a function of the electric field at a temperature of 5.2 K.
In Mode 1, the binding energies of the dipole states (\(D^{0^{*}}\)) vary from 5.99 meV to 8.05 meV depending on the electric field. When the electric field is zero, the average binding energy is calculated to be 8.369 \(\pm\) 0.748 meV. The binding
energies of the cluster dipole states (\(D^{-^{*}}\)) in Mode 2 range from 4.52 meV to 8.15 meV depending on the applied electric field. At zero field, the average binding energy is 7.884 \(\pm\) 0.644 meV. The results indicate that the zero-field binding energy of the \(D^{0^{*}}\) states is greater than that of the \(D^{-^{*}}\) states. Moreover, Figure 8 reveals that the \(D^{-^{*}}\) states are more sensitive to the electric field than the \(D^{0^{*}}\) states. It should be noted that the zero-field binding energies of both the \(D^{0^{*}}\) and \(D^{-^{*}}\) states are lower than the binding energies of ground-state impurity atoms in a Ge detector, which are typically around 10 meV.
## IV Conclusions
Our study of binding energies and trapping cross-sections in an n-type Ge detector operating at a low temperature has revealed valuable insights. Our measurements indicate that the binding energy of dipole states is 8.369 \(\pm\) 0.748 meV and the binding energy of cluster dipoles is 7.884 \(\pm\) 0.644 meV, both of which are lower than the typical binding energy (around 10 meV) of ground state impurities in Ge. We found that at a temperature of 5.2 K, the thermal energy of 0.448 meV is much lower than these binding energies, indicating that the corresponding cluster dipole states and dipole states are thermally stable at a temperature of 5.2 K. The application of an electric field causes the smaller binding energy of cluster dipoles to result in increased de-trapping via impact ionization when compared to dipole states. The trapping cross section, which ranges from \(3.99\times 10^{-11}cm^{2}\) to \(1.35\times 10^{-13}cm^{2}\), is primarily influenced by the electric field. Our findings further demonstrate that the binding energy and trapping cross-section decrease as the electric field within the detector increases. These low binding energies suggest the potential for developing a low-threshold detector using appropriately doped impurities in Ge for low-mass dark matter searches.
## V Acknowledgments
The authors would like to thank Mark Amman for his instructions on fabricating planar detectors. We would also like to thank the Nuclear Science Division at Lawrence Berkeley National Laboratory for providing us with a testing cryostat. This work was supported in part by NSF OISE 1743790, DE-SC0004768, and a governor's research center supported by the State of South Dakota.
|
2303.10172 | Hematoxylin and eosin stained oral squamous cell carcinoma histological
images dataset | Computer-aided diagnosis (CAD) can be used as an important tool to aid and
enhance pathologists' diagnostic decision-making. Deep learning techniques,
such as convolutional neural networks (CNN) and fully convolutional networks
(FCN), have been successfully applied in medical and biological research.
Unfortunately, histological image segmentation is often constrained by the
availability of labeled training data once labeling histological images for
segmentation purposes is a highly-skilled, complex, and time-consuming task.
This paper presents the hematoxylin and eosin (H&E) stained oral cavity-derived
cancer (OCDC) dataset, a labeled dataset containing H&E-stained histological
images of oral squamous cell carcinoma (OSCC) cases. The tumor regions in our
dataset are labeled manually by a specialist and validated by a pathologist.
The OCDC dataset presents 1,020 histological images of size 640x640 pixels
containing tumor regions fully annotated for segmentation purposes. All the
histological images are digitized at 20x magnification. | Dalà F. D. dos Santos, Paulo R. de Faria, Adriano M. Loyola, Sérgio V. Cardoso, Bruno A. N. Travençolo, Marcelo Z. do Nascimento | 2023-01-13T19:31:03Z | http://arxiv.org/abs/2303.10172v1 | ## Article information
### Abstract
Computer-aided diagnosis (CAD) can be used as an important tool to aid and enhance pathologists' diagnostic decision-making. Deep learning techniques, such as convolutional neural networks (CNN) and fully convolutional networks (FCN), have been successfully applied in medical and biological research. Unfortunately, histological image segmentation is often constrained by the availability of labeled training data once labeling histological images for segmentation purposes is a highly-skilled, complex, and time-consuming task. This paper presents the hematoxylin and eosin (H&E) stained oral cavity-derived cancer (OCDC) dataset, a labeled dataset containing H&E-stained histological images of oral squamous cell carcinoma (OSCC) cases. The tumor regions in our dataset are labeled manually by a specialist and validated by a pathologist. The OCDC dataset presents 1,020 histological images of size 640x640 pixels containing tumor regions fully annotated for segmentation purposes. All the histological images are digitized at 20x magnification.
### Specifications table
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Subject & Computer Science, Computer Vision, and Pattern Recognition. \\ \hline Specific subject area & Tumor segmentation, H&E-stained histological images, tumor histological images, and oral biopsy images. \\ \hline \end{tabular}
## Value of the data
* An H&E-stained histological OSCC images dataset with pixel-level tumor annotations designed for segmentation purposes.
* Useful in the development of computational techniques for histological image segmentation to support pathologists in decision-making in cases of oral cavity-derived cancer.
* Can be used by computer science researchers to enhance and compare the segmentation results achieved by different segmentation methods.
### Objective
The development of computational techniques for histological image segmentation is often constrained by the availability of labeled training data, since labeling histological images for segmentation purposes is a highly skilled, complex, and time-consuming task. OSCC is a kind of cancer that is scarcely studied because of the lack of available labeled training images, and deep learning models traditionally need large amounts of labeled data samples to be trained. The OCDC dataset makes it possible to conduct new studies on OSCC and can be used by other researchers for testing their computational approaches to OSCC region segmentation.
### Data description
The OCDC dataset consists of 1,020 H&E-stained histological images of size 640x640 pixels and their corresponding pixel-level tumor region annotation masks. The 1,020 images and their corresponding labels were randomly split into 840 images to train and 180 images to test [3]. Fig. 2 illustrates the images and tumor region annotation masks of the dataset. The OCDC dataset was built using tissue specimens collected from OSCC-affected patients. All OSCC cases were retrieved from the archives of the Department of Oral and Maxillofacial Pathology at the Federal University of Uberlandia between 2006 and 2013 under approval by the Committee on Research and Ethics of the Institution (CAAE number: 15188713.9.0000.5152).
The 1,020 images of the OCDC dataset contain both images of OSCC tumor regions and images of normal tissues such as the serous salivary gland, connective tissue, mucous salivary gland, striated muscle, keratinized epithelial tissue, and oral mucosa. All the raw data are publicly available in a Mendeley data repository:
* [https://data.mendeley.com/datasets/9bsc36jyt/1](https://data.mendeley.com/datasets/9bsc36jyt/1)
### Experimental design, materials and methods
To create the OCDC dataset, tissue samples obtained after surgical procedures in 15 OSCC-affected patients were digitized into 15 WSIs using the Slide Scanner Aperio AT2 (Leica Biosystems Imaging, Inc., Nussloch, Germany) coupled to a computer (Dell Precision T3600) at 20x magnification and a pixel-level resolution of 0.5025\(\upmu\)m x 0.5025\(\upmu\)m. The digitized WSIs have different sizes, the largest one having almost three billion pixels (63,743 x 45,472 pixels), and are stored in SVS [2] format using the RGB color space. H&E-stained histological WSIs are
multi-gigapixel images created from tissue sections that contain many different types of cells, tissues, and structures, such as blood vessels, keratin, lymphocytes, glands, muscle, and tumor cells [1].
A total of 1,020 images of size 640 x 640 pixels were randomly extracted from these 15 WSIs, and the tumor regions present in each image were hand-annotated by a specialist for segmentation purposes [3]. The tumor regions were annotated with the GNU Image Manipulation Program (GIMP) using a pen on a touch screen monitor. A pathologist validated the resulting pixel-level annotated images. This kind of task, given the large amount of information in the images, requires very skilled professionals and a great effort to label the relevant structures needed to create such an image dataset. Fig. 1 illustrates the OSCC region labeling process of the OCDC dataset.
The image dataset was split into two subsets: a training set of 840 images and a test set of 180 images [3]. Representative examples from the 1,020 produced images are shown in Fig. 2. Fig. 2 (m-r) also shows the corresponding hand-annotated regions produced from the images presented in Fig. 2 (g-l).
### Ethics statements
All OSCC cases were retrieved from the archive of the Department of Oral and Maxillofacial Pathology at the Federal University of Uberlandia between 2006 and 2013 under approval by the Committee on Research and Ethics of the Institution (CAAE numbers: 15188713.9.0000.5152 and 58534122.7.0000.5152).
### Acknowledgments
The authors gratefully acknowledge the financial support of National Council for Scientific and Technological Development CNPq (Grant 311404/2021-9) and the State of Minas Gerais Research Foundation - FAPEMIG (Grant APQ-00578-18).
Figure 2: Some images (640×640 pixels) from the OCDC dataset: 1st row (a-f) shows normal regions (serous salivary gland, connective tissue, mucous salivary gland, striated muscle, keratinized epithelial tissue, and oral mucosa); 2nd row (g-l) shows OSCC regions; 3rd row (m-r) shows the produced hand-annotated tumor regions (in white); 4th row (s-x) shows the cancer image with the identified tumor regions overlapped in blue. |
2307.02081 | LØ: An Accountable Mempool for MEV Resistance | Possible manipulation of user transactions by miners in a permissionless
blockchain systems is a growing concern. This problem is a pervasive and
systemic issue, known as Miner Extractable Value (MEV), incurs highs costs on
users of decentralised applications. Furthermore, transaction manipulations
create other issues in blockchain systems such as congestion, higher fees, and
system instability. Detecting transaction manipulations is difficult, even
though it is known that they originate from the pre-consensus phase of
transaction selection for a block building, at the base layer of blockchain
protocols. In this paper we summarize known transaction manipulation attacks.
We then present L{\O}, an accountable base layer protocol specifically designed
to detect and mitigate transaction manipulations. L{\O} is built around
accurate detection of transaction manipulations and assignment of blame at the
granularity of a single mining node. L{\O} forces miners to log all the
transactions they receive into a secure mempool data structure and to process
them in a verifiable manner. Overall, L{\O} quickly and efficiently detects
reordering, injection or censorship attempts. Our performance evaluation shows
that L{\O} is also practical and only introduces a marginal performance
overhead. | Bulat Nasrulin, Georgy Ishmaev, Jérémie Decouchant, Johan Pouwelse | 2023-07-05T07:44:36Z | http://arxiv.org/abs/2307.02081v1 | # L\(\O\): An Accountable Mempool for MEV Resistance
###### Abstract.
Possible manipulation of user transactions by miners in a permissionless blockchain system is a growing concern. This pervasive and systemic issue, known as Miner Extractable Value (MEV), incurs high costs on users of decentralised applications. Furthermore, transaction manipulations create other issues in blockchain systems such as congestion, higher fees, and system instability. Detecting transaction manipulations is difficult, even though it is known that they originate from the pre-consensus phase of transaction selection for block building, at the _base layer_ of blockchain protocols. In this paper we summarize known transaction manipulation attacks. We then present L\(\O\), an accountable base layer protocol specifically designed to detect and mitigate transaction manipulations. L\(\O\) is built around accurate detection of transaction manipulations and assignment of blame at the granularity of a single mining node. L\(\O\) forces miners to log all the transactions they receive into a secure mempool data structure and to process them in a verifiable manner. Overall, L\(\O\) quickly and efficiently detects reordering, injection or censorship attempts. Our performance evaluation shows that L\(\O\) is also practical and only introduces a marginal performance overhead.
|
2302.08372 | Properties of Infinite Nuclear Medium from QCD Sum Rules and the Neutron
Star-Black Hole Mass Gap | A non-perturbative framework is provided to connect QCD with nuclear
phenomenology in the intermediate and high density regime. Using QCD Sum Rules,
in-medium scalar and vector self-energies of nucleons are calculated as
functions of the density of an infinite nuclear medium. The self-energies are
used in the relativistic mean field theory lagrangian of a high-density nuclear
medium to find the binding energy of in-medium nucleons and the value of light
quark condensate, $\langle \bar{q} q \rangle_{\rm{vac}} = -~(0.288
~\rm{GeV})^3$, in the Borel-improved resummation scheme. The critical mass of
an ideal neutron star is obtained by coupling a uniform saturation energy
density of cold, dense nuclear matter to Einstein equation in hydrostatic
equilibrium. Since a neutron star core is unlikely to avoid deconfinement and
enter the rigid vector repulsion phase, where the speed of sound can smoothly
rise from the conformal to the causal limit, a gap should exist in the stellar
mass spectrum, $[3.48M_\odot, 5.47M_\odot]$, where it would be
rare to find any isolated, cold, non-rotating neutron star or a black hole. | Bijit Singha, Debasish Das, Leonard S. Kisslinger | 2023-02-16T15:38:25Z | http://arxiv.org/abs/2302.08372v2 | # Properties of Infinite Nuclear Medium from QCD Sum Rules and the Neutron Star-Black Hole Mass Gap
###### Abstract
A non-perturbative framework is provided to connect QCD with nuclear phenomenology in the intermediate and high density regime. Using QCD Sum Rules, in-medium scalar and vector self-energies of nucleons are calculated as functions of the density of an infinite nuclear medium. The self-energies are used in the relativistic mean field theory lagrangian of a high-density nuclear medium to find the binding energy of in-medium nucleons and the value of the light quark condensate, \(\langle\bar{q}q\rangle_{\rm vac}=-(0.288\ {\rm GeV})^{3}\), in the Borel-improved resummation scheme. The critical mass of an ideal neutron star is obtained by coupling a uniform saturation energy density of cold, dense nuclear matter to the Einstein equation in hydrostatic equilibrium. Since a neutron star core is unlikely to avoid deconfinement and enter the rigid vector repulsion phase, where the speed of sound can smoothly rise from the conformal to the causal limit, a gap should exist in the stellar mass spectrum, \([3.48M_{\odot},5.47M_{\odot}]\), where it would be rare to find any isolated, cold, non-rotating neutron star or black hole.
Keywords: Nuclei, Sum Rules, Nuclear Matter, Neutron Star, Black Hole, Mass Gap, General Relativity and Quantum Cosmology; Nuclear Astrophysics; Astrophysics - High Energy Astrophysical Phenomena
## 1 Introduction
The fundamental degrees of freedom in Quantum Chromodynamics (QCD) are quarks [1] and gluons [2], which have never been observed in isolation in any physical experiment. The physical states and the interactions that we prevalently observe in experiments are QCD bound states and the residual part of QCD interactions, namely hadronic interactions [3, 4, 5, 6, 7, 8, 9]. While the fluctuations of the QCD vacuum [10] are responsible for making the nucleons hugely massive, the interaction between these nucleons occurs through the exchange of mesons, which are themselves QCD bound states. In fact, a renormalizable field theory can be consistently formulated in terms of a Lorentz-invariant lagrangian density written with hadrons as fundamental degrees of freedom (d.o.f.) [11, 12] to provide a number of successful predictions in nuclear physics without even considering the color interactions as laid out in QCD.
A quantitative understanding of this quark-hadron duality, as well as establishing its accuracy over all energy scales, still remains a challenging issue. For sufficiently high energy, this duality sets in with asymptotic freedom [13, 14, 15, 16], which makes the QCD calculations trivial. But non-perturbative effects start to dominate at an energy around the nuclear mass. Although the pion-pion and pion-nucleon interactions are weak for
small momenta at that scale, due to the spontaneous breaking of chiral symmetry [17, 18], the nucleon-nucleon interactions become non-perturbative. The issue becomes more complex when, at the same energy scale, we try to understand this duality in the context of the thermodynamic properties of dense nuclear matter. This specific problem essentially has to be solved analytically, since any deviation from or violation of quark-hadron duality is a Minkowskian phenomenon, and numerical Euclidean approaches such as Lattice QCD suffer from a sign problem [19, 20] that typically occurs at non-zero chemical potential in the presence of a fermionic background such as a dense nuclear medium. Moreover, the characteristic energy scales of QCD and of nuclear physics differ by orders of magnitude. The lightest hadrons can be as massive as hundreds of MeV, while the typical binding energy that dictates nuclear phenomena is only around a few MeV. Also, the physics of exotic states of matter at high temperature or under large compression, where the QCD d.o.f. start to dominate over the hadronic d.o.f., is not fully developed yet. All of this imposes serious challenges in utilizing quark-hadron duality to explore the connections between QCD theory and nuclear physics observables, as well as to study astrophysical objects like neutron stars [21, 22, 23].
A popular approach to circumvent these challenges is to frame an effective lagrangian of the strong interaction by exploiting the symmetry properties of QCD in the chiral limit [24, 25, 26, 27, 28, 29], although the utility of this approach for predicting the behavior of dense nuclear matter is still very limited. Attempts have also been made to map the behavior of a strongly coupled QCD medium to a higher-dimensional theory of gravity using AdS-CFT duality [30, 31, 32, 33, 34, 35, 36]. But this approach faces challenges too because, unlike CFT, (a) QCD is not superconformal, (b) QCD has confinement, and (c) the number of QCD colors as well as the 't Hooft QCD coupling are not infinite.
In this work, we exploit quark-hadron duality to successfully predict the physical properties of saturated nuclear matter and of an ideal neutron star comprised of such matter. The methodology we use is called QCD Sum Rules [37, 38, 39, 40, 41]. QCD Sum Rules for nucleons in nuclear medium were investigated years ago [42, 43, 44, 45, 46], with a number of important predictions, such as a positive vector self-energy and the reduction of the effective nucleon mass arising from the reduction of the in-medium light-quark condensate below its vacuum value, which match the conclusions of our work. These articles observed a strong dependence of the effective nucleon mass on the Borel mass, which is a redundant parameter from the phenomenological point of view. In our work, the Borel mass is fixed using the condition that the effective nucleon mass approaches the physical nucleon mass in the absence of the surrounding nuclear medium. Moreover, we derive the value of the light quark condensate in vacuum, the effective mass of a nucleon as a function of medium density, and the saturation curve for nuclear matter. All these results show quantitative features similar to those found in the Quantum Hadrodynamics (QHD) calculations by Walecka _et al._ in [11, 12].
The importance of this work lies in the fact that it provides an intuitive, systematic, and non-perturbative framework to connect QCD to nuclear phenomenology, especially in the intermediate and high density regime. This work provides an effective, non-perturbative way to match the phenomenology arising from the QCD lagrangian with the predictions of the long-range, strongly coupled effective field theory of QHD. Furthermore, this framework has the capacity to provide an equation of state for nuclear matter at all densities above observed terrestrial densities, which is essential for estimating the physical properties of neutron stars from first-principles QCD calculations.
The structure of this paper is as follows. In Sec. 2, we start explicitly with the light-quark d.o.f. of QCD to write an Operator Product Expansion (OPE) of the in-medium nucleon two-point function. In Sec. 3.1, we propose a phenomenological model of in-medium nucleons consistent with hadron scattering observations. The Borel-transformed OPE and the model are then compared in Sec. 3.2, yielding the expressions for the scalar and vector self-energies of the nucleon in terms of quark condensates in Sec. 3.3. The self-energies are used in the mean-field QHD lagrangian density of Sec. 3.4 to obtain the saturation curve, the values of the coupling parameters, and the value of the vacuum light-quark condensate in Sec. 4. Sec. 5.1 uses this information in the Tolman-Oppenheimer-Volkoff equations to obtain the maximum masses of a neutron star for the scenarios where the speed of sound in the neutron star core approaches the conformal and causal limits. Sec. 5.2 discusses the possibility of a universal gap in the stellar mass spectrum where it would be rare to find any isolated, cold, non-rotating neutron star or black hole.
## 2 Two-point Correlator of Nucleons in Nuclear Medium
In QCD Sum Rules, we attack the nuclear bound state problem from the short-distance side and gradually move to larger distances, where asymptotic freedom starts to break down, exploiting the non-trivial structure of the QCD vacuum signalled by the emergence of power corrections. Quantitatively, we represent the hadrons propagating in nuclear matter in terms of their interpolating quark currents at large virtualities. Then we construct a two-point correlator from the hadron operators. We treat the correlator in the OPE framework, where short- and long-distance contributions are dealt with differently: the former are represented by Wilson coefficients and evaluated using perturbative QCD, while the latter entail the infrared behavior of the Green's functions of quarks and gluons and are represented by various condensates. In our calculation, we consider only the identity operator and the light-quark condensate, which are the operators of the OPE up to mass dimension three.
In order to write QCD Sum Rules for nucleons in nuclear medium, we start with a color-singlet hadron current that couples maximally to a nucleon
\[\eta_{N}(x) = \epsilon_{abc}\left[q^{aT}(x)C\gamma_{\mu}q^{b}(x)\right]\gamma_{ 5}\gamma^{\mu}q^{c}(x), \tag{1}\]
where \(C\) denotes the charge conjugation matrix, \(T\) denotes transpose in Dirac space, \(q\) denotes a light quark field with \(SU(2)\) isospin symmetry, \(a,b,c\) denote color indices and \(\epsilon_{abc}\) is the totally antisymmetric tensor on the index subset of the three color indices. The choice of current for a given \(J^{PC}\) is not unique, but it is chosen such that the coupling to the nucleon intermediate state is maximized while the contribution of the higher-order states to the correlation function is negligible. Sensitivity to the choice of current in Eq. (1) is discussed in [45].
We consider the two-point correlator in momentum space comprised of the time-order product of the local hadron current and its Hermitian conjugate
\[\Pi_{2}^{N}(p) = i\int d^{4}x\ e^{ip.x}\ \langle 0|T\big{[}\eta_{N}(x)\bar{\eta}_{N} (0)\big{]}|0\rangle\, \tag{2}\]
where the expectation value is taken over physical, non-perturbative vacuum state. We use Eq. (1) in Eq. (2) and using the light-quark propagator in fixed point gauge to the first order in the light quark mass in spacetime coordinate in the presence of background quark field [39]
\[\left[S_{ab}^{q}(x)\right]_{\alpha\beta}=\frac{i}{2\pi^{2}}\,\delta_{ab}\,\frac{(\not{x})_{\alpha\beta}}{x^{4}}-\frac{m_{q}}{4\pi^{2}}\,\delta_{ab}\,\frac{\delta_{\alpha\beta}}{x^{2}}-\frac{1}{12}\,\delta_{ab}\,\delta_{\alpha\beta}\,\langle\bar{q}q\rangle+\cdots\,, \tag{3}\]

we obtain the OPE of the in-medium two-point correlator in terms of Wilson coefficients and condensates of increasing mass dimension,

\[\Pi_{2}^{N}(p)=\sum_{n}C_{n}(p^{2},p_{0})\,\langle\hat{O}_{n}\rangle_{\rho_{N}}. \tag{4}\]

The symmetries of Lorentz covariance, parity and time reversal restrict the correlator to three invariant structures,

\[\Pi_{2}^{N}(p)=\Pi_{s}(p^{2},p_{0})+\Pi_{q}(p^{2},p_{0})\,\not{p}+\Pi_{u}(p^{2},p_{0})\,\not{u}\,, \tag{5}\]

where \(u^{\mu}\) is the four-velocity of the nuclear medium, \(p_{0}=p\cdot u\), and \(P^{2}\equiv-p^{2}\),
with
\[\Pi_{s}(p^{2},p_{0}) = -\frac{1}{4\pi^{2}}P^{2}\ln P^{2}\langle\bar{q}q\rangle_{\rho_{N}}+ \cdots\, \tag{6}\] \[\Pi_{q}(p^{2},p_{0}) = -\frac{1}{64\pi^{4}}(P^{2})^{2}\ln P^{2}+\frac{1}{3\pi^{2}}p_{0} \ln P^{2}\langle q^{\dagger}q\rangle_{\rho_{N}}+\cdots\,\] (7) \[\Pi_{u}(p^{2},p_{0}) = -\frac{2}{3\pi^{2}}P^{2}\ln P^{2}\langle q^{\dagger}q\rangle_{ \rho_{N}}+\cdots\, \tag{8}\]
after omitting all the power-divergent terms, which vanish anyway under Borel transformation of the two-point function. The above expressions match the OPE derived in [42], with light-quark isospin symmetry assumed in our work. We would like to mention here that the two-point correlator \(\Pi_{2}^{N}(p)\) can be expanded into a much more general form in Dirac space. But the symmetries of Lorentz covariance, time reversal and parity dictate that the two-point correlator comprises three distinct structures, as shown in Eq. (5), and this is true to all orders of the OPE in Eq. (4). Moreover, in the limit where the density of the nuclear medium is zero, \(\Pi_{u}(p^{2},p_{0})\to 0\) and \(\Pi_{q}\) becomes a function of \(P^{2}\) only. This is consistent with the fact that the two-point correlator in vacuum contains only two structures: scalar and \(\not{p}\).
## 3 Phenomenological Side of the Sum Rule
### Quasinucleons in Large Nuclear Medium
In the previous section, we derived the expression for the Fourier transform of the nucleon two-point correlation function using the OPE. The Källén-Lehmann spectral representation tells us that the analytic structure of this correlation function in the complex-\(p^{2}\) plane should have an isolated simple pole at the mass of a quasinucleon state. In view of this fact, and with the help of empirical data as well as theoretical features of hadron scattering phenomena, we now construct a phenomenological model of the two-point correlation function at an intermediate energy. In line with the Dirac-Brueckner-Hartree-Fock (DBHF) approach, we assume weak three-momentum dependence of the in-medium self-energies for the bound states as well as the low-lying continuum states [46, 47].
We assume that the two-point function will have a pole at the physical nucleon mass to write [46]
\[\Pi_{N}(p) = -\ \frac{\lambda_{N}^{*2}}{(p^{\mu}-\Sigma_{V}^{\mu})\gamma_{\mu} -(M_{N}+\Sigma_{S})}+\mbox{continuum}. \tag{9}\]
where \(\lambda_{N}^{*}\) is the coupling of the nucleon current to the physical quasinucleon in the nuclear medium, and \(\Sigma_{V}^{\mu}\) and \(\Sigma_{S}\) are the in-medium vector and scalar self-energies. From the model above, we expect to retrieve a few essential features, such as attractive scalar and repulsive vector potentials of several hundred MeV cancelling each other in such a way that the binding energy of a nucleon eventually turns out to be only a few MeV. It is important to note here that the scalar and vector potentials are physically observed only in combination, not individually. Additionally, cancellation of the imaginary parts of the scalar and vector potentials is expected in order to achieve a stable quasinucleon state.
We can further expand the vector self-energy to write
\[\Sigma_{V}^{\mu}=\Sigma_{V}u^{\mu}+\Sigma_{V}^{\prime}q^{\mu}. \tag{10}\]
We neglect \(\Sigma_{V}^{\prime}\) because of the weak \(q\)-dependence of this term and the negligible contributions from higher-mass excitations. Doing this for a nuclear medium in its rest frame, we square the denominator of Eq. (9):
\[(p^{\mu}-\Sigma_{V}^{\mu})^{2}-M_{N}^{*2}=p^{2}-\mu^{2}, \tag{11}\]
where
\[\mu^{2}=M_{N}^{*2}+2p_{0}\Sigma_{V}-\Sigma_{V}^{2}. \tag{12}\]
Finally, on the phenomenological side, we use Eqs. (9), (11) and (12) together to derive the following expressions for the different tensor structures of Eq. (5):
\[\Pi_{s}(p^{2},p_{0}) = -\lambda_{N}^{*2}\frac{M_{N}^{*}}{p^{2}-\mu^{2}}, \tag{13}\] \[\Pi_{q}(p^{2},p_{0}) = -\lambda_{N}^{*2}\frac{1}{p^{2}-\mu^{2}},\] (14) \[\Pi_{u}(p^{2},p_{0}) = -\lambda_{N}^{*2}\frac{\Sigma_{V}}{p^{2}-\mu^{2}}. \tag{15}\]
### Borel Transformation
Because of the infrared slavery in QCD, insertion of more bubbles into the QCD Feynman diagrams leads to softer momenta, where the coupling constant becomes increasingly large. Thus, the IR region of the loop integral becomes more and more important, and the running coupling develops leading logarithms. These different powers of logarithms give rise to factorial divergences in the expansion of the correlation function. In such cases, we try to achieve rapid convergence for a resummed Borel series of the correlator. We make a Borel transformation of the form defined below, taking a sufficiently high moment of the correlator at high momentum, so that the contribution of the lowest resonance dominates over all other resonances in the channel. Additionally, spurious power divergences could appear in the OPE, but they vanish under the Borel transform. Hence, all such terms were already dropped in the OPE of the time-ordered product.
The Borel transform of a function \(f(P^{2})\) is defined as:
\[{\cal B}_{M^{2}}\left[f(P^{2})\right]=\lim_{P^{2},n\rightarrow\infty,P^{2}/n=M ^{2}}\frac{\left(P^{2}\right)^{n+1}}{n!}\left(\frac{-d}{dP^{2}}\right)^{n}f(P^ {2}). \tag{16}\]
Using this definition, we derive the following formulae:
\[{\cal B}_{M^{2}}\left[\frac{1}{p^{2}-\mu^{2}}\right] = -e^{-\mu^{2}/M^{2}}, \tag{17}\] \[{\cal B}_{M^{2}}\left[\ln P^{2}\right] = -M^{2},\] (18) \[{\cal B}_{M^{2}}\left[P^{2}\ln P^{2}\right] = M^{4},\] (19) \[{\cal B}_{M^{2}}\left[(P^{2})^{2}\ln P^{2}\right] = -2M^{6}\, \tag{20}\]
which we use in Eq. (6-8) and Eq. (13-15) to write the OPE tensor structures in terms of their phenomenological counterparts
\[\lambda_{N}^{*2}M_{N}^{*}e^{-\mu^{2}/M^{2}} = -\ \frac{\langle\bar{q}q\rangle_{\rho_{N}}}{4\pi^{2}}\ M^{4}, \tag{21}\] \[\lambda_{N}^{*2}e^{-\mu^{2}/M^{2}} = \frac{M^{6}}{32\pi^{4}}-\frac{p_{0}}{3\pi^{2}}\langle q^{\dagger} q\rangle_{\rho_{N}}M^{2},\] (22) \[\lambda_{N}^{*2}\Sigma_{V}e^{-\mu^{2}/M^{2}} = \frac{2}{3\pi^{2}}\langle q^{\dagger}q\rangle_{\rho_{N}}M^{4}. \tag{23}\]
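As a sanity check, the transforms in Eqs. (17)–(20) can be verified directly from the limit definition in Eq. (16). The following sketch (assuming sympy is available) evaluates the \(n\)-th term of the Borel sequence at \(P^{2}=nM^{2}\) for a few finite \(n\):

```python
import sympy as sp

P2, M2, mu2 = sp.symbols('P2 M2 mu2', positive=True)

def borel_term(f, n):
    """n-th term of Eq. (16): (P^2)^(n+1)/n! * (-d/dP^2)^n f, at P^2 = n*M^2."""
    expr = (P2**(n + 1) / sp.factorial(n)) * (-1)**n * sp.diff(f, P2, n)
    return sp.simplify(expr.subs(P2, n * M2))

# B[ln P^2] = -M^2 (Eq. (18)): exact at every finite n
print(borel_term(sp.log(P2), 6))              # -> -M2

# B[P^2 ln P^2] -> M^4 (Eq. (19)): the n-th term is n/(n-1) * M2^2
for n in (2, 4, 16):
    print(borel_term(P2 * sp.log(P2), n))     # -> 2*M2**2, 4*M2**2/3, 16*M2**2/15

# B[1/(p^2 - mu^2)] -> -exp(-mu2/M2) (Eq. (17)); with P^2 = -p^2 the
# n-th term is -(n*M2/(n*M2 + mu2))^(n+1), which tends to -exp(-mu2/M2)
print(borel_term(-1 / (P2 + mu2), 50))
```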
### Self-energies in Terms of Quark Condensate
Eqs. (21)–(23) contain quark condensates, which arise from the fact that the chiral symmetry of QCD is spontaneously broken in the QCD vacuum, with the condensates emerging as the order parameters of this phenomenon. Restoration of chiral symmetry may happen, for example, in the core of a neutron star, where the hadron medium has sufficiently high density. In less extreme environments, chiral symmetry can be partially restored in the presence of any hadron medium. The resulting value of the quark condensate, \(\langle\bar{q}q\rangle_{\rho_{N}}\), in the presence of a nuclear medium of density \(\rho_{N}\) is given by [48, 49],
\[\langle\bar{q}q\rangle_{\rho_{N}}=\langle\bar{q}q\rangle_{\rm vac}\left[1- \frac{\rho_{N}\sigma_{N}}{f_{\pi}^{2}m_{\pi}^{2}}\right], \tag{24}\]
to leading order. Here \(f_{\pi}\) is the pion decay constant and \(\sigma_{N}\) is the pion-nucleon sigma term [50]. It can be shown that \(f_{\pi}^{2}m_{\pi}^{2}\) is related to the symmetry breaking term of the QCD lagrangian through Gell-Mann-Oakes-Renner relation:
\[f_{\pi}^{2}m_{\pi}^{2}\approx-2m_{q}\langle\bar{q}q\rangle_{\rm vac }\, \tag{25}\]
where \(m_{q}\) is the average current mass of the up and down quarks. Eqs. (24) and (25) together give us
\[\langle\bar{q}q\rangle_{\rho_{N}}=\langle\bar{q}q\rangle_{\rm vac }+\frac{\rho_{N}\sigma_{N}}{2m_{q}}. \tag{26}\]
Furthermore, \(\langle q^{\dagger}q\rangle_{\rho_{N}}\) is related to the net nucleon density [51, 52]:
\[\langle q^{\dagger}q\rangle_{\rho_{N}}=\frac{3}{2}\rho_{N}. \tag{27}\]
Using Eq. (27) in Eq. (22), we get
\[\lambda_{N}^{*2}e^{-\mu^{2}/M^{2}}=\frac{M^{6}}{32\pi^{4}}-\frac {\left(k_{F}^{2}+M_{N}^{*2}\right)^{1/2}+\Sigma_{V}}{3\pi^{2}}\times\left( \frac{3}{2}\rho_{N}M^{2}\right). \tag{28}\]
Now we use Eq. (21) in the RHS of the above expression to get
\[-\frac{\langle\bar{q}q\rangle_{\rho_{N}}}{4\pi^{2}}\frac{M^{4}} {M_{N}^{*}}=\frac{M^{6}}{32\pi^{4}}-\frac{\left[\left(\frac{k_{F}}{M_{N}^{*}} \right)^{2}+1\right]^{1/2}+\frac{\Sigma_{V}}{M_{N}^{*}}}{3\pi^{2}}\times\left( \frac{3}{2}\rho_{N}M_{N}^{*}M^{2}\right). \tag{29}\]
We can express the baryon density(\(\rho_{N}\)) in terms of Fermi momentum (\(k_{F}\)):
\[\rho_{N}=\frac{\gamma}{(2\pi)^{3}}\int_{0}^{k_{F}}d^{3}\vec{k}= \frac{\gamma k_{F}^{3}}{6\pi^{2}}. \tag{30}\]
where \(\gamma\) is the degeneracy factor of the nucleon, equal to 4 for nucleons with unbroken isospin symmetry (2 for spin and 2 for isospin). Writing \(k_{F}/M_{N}\) as \(x\) and \(M_{N}^{*}/M_{N}\) as \(y\), we get
\[-\frac{1}{2}\left(\frac{M}{M_{N}}\right)^{2}\frac{1}{y}\left[ \frac{\langle\bar{q}q\rangle_{\rm vac}}{M_{N}^{3}}+\frac{2}{3\pi^{2}}\left( \frac{\sigma_{N}}{2m_{q}}\right)x^{3}\right]=\frac{1}{16\pi^{2}}\left(\frac{M} {M_{N}}\right)^{4}-\frac{2x^{3}y}{3\pi^{2}}\left[\frac{\Sigma_{V}}{M_{N}^{*}}+ \left(1+\frac{x^{2}}{y^{2}}\right)^{1/2}\right]. \tag{31}\]
We get the ratio \(\Sigma_{V}/M_{N}^{*}\) by dividing Eq. (23) by Eq. (21),
\[\frac{\Sigma_{V}}{M_{N}^{*}} = -\ \frac{4}{\left(\frac{\langle\bar{q}q\rangle_{\rm vac}}{\rho_{N }}+\frac{\sigma_{N}}{2m_{q}}\right)}. \tag{32}\]
We define
\[r=-\ \frac{\langle\bar{q}q\rangle_{\rm vac}}{M_{N}^{3}}. \tag{33}\]
From [46, 53], we assume \(\sigma_{N}=45\) MeV and \(m_{q}=2.5\) MeV to write
\[\frac{\Sigma_{V}}{M_{N}^{*}}=-\ \frac{4}{9-3\pi^{2}r/2x^{3}}. \tag{34}\]
Now, let's use the condition that in the limit of zero density, the effective nucleon mass equals the physical nucleon mass: \(y\to 1\) as \(x\to 0\). This gives us the Borel mass parameter:
\[\frac{M}{M_{N}}=\left[-\frac{8\pi^{2}}{M_{N}^{3}}\langle\bar{q}q \rangle_{\rm vac}\right]^{1/2}=[8\pi^{2}r]^{1/2}. \tag{35}\]
Using Eq. (33), Eq. (34) and Eq. (35) in Eq. (31) and defining
\[z=r-\frac{6}{\pi^{2}}x^{3}, \tag{36}\]
we can finally write
\[z-ry+\frac{x^{3}y}{6\pi^{4}r}\Big{[}\big{(}x^{2}+y^{2}\big{)}^{1/2}+ \frac{8x^{3}y}{3\pi^{2}z}\Big{]}=0. \tag{37}\]
For a given value of \(r\), we can plot the in-medium effective mass of a nucleon against the medium density using Eq. (37), as shown in Sec. 4.
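A minimal numerical sketch of this procedure, assuming numpy and scipy are available: for each \(x=k_{F}/M_{N}\) we root-find Eq. (37) for \(y=M_{N}^{*}/M_{N}\), using the fitted value \(r=0.029\) reported in Sec. 4. Note that \(z>0\) requires \(x<(\pi^{2}r/6)^{1/3}\approx 0.36\).

```python
import numpy as np
from scipy.optimize import brentq

r = 0.029  # fitted in Sec. 4

def eq37(y, x):
    """LHS of Eq. (37); its root in y gives y = M_N*/M_N at x = k_F/M_N."""
    z = r - 6 * x**3 / np.pi**2                       # Eq. (36)
    bracket = np.sqrt(x**2 + y**2) + 8 * x**3 * y / (3 * np.pi**2 * z)
    return z - r * y + x**3 * y * bracket / (6 * np.pi**4 * r)

xs = np.linspace(1e-3, 0.35, 200)                     # z > 0 needs x < 0.36
ys = np.array([brentq(eq37, 1e-4, 1.5, args=(x,)) for x in xs])
print(ys[0], ys[-1])  # y -> 1 at zero density and decreases with density
```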
### Saturation Curve of the Large Nuclear Medium
The nucleons interact predominantly by exchanging mesons. Within a large nuclear medium, nuclear interactions can have long wavelengths and act over large distances. In such cases, all the hadronic d.o.f. of the medium are expected to be treated dynamically. To describe such a system, we use the relativistic mean field theory model proposed by Walecka [11, 12, 54]. We have already derived the self-energies of a quasinucleon in Sec. 3.3, which we will use in the Walecka model to find the saturation curve of a large system of nucleons, from which we obtain the minimum of the in-medium energy per nucleon (_i.e._, the saturation energy) at a certain density (_i.e._, the saturation density).
Following [54], let us start with a nucleon field \(\psi\) that has two isospin components, each with two spin degrees of freedom,
\[\psi=\begin{bmatrix}p_{\uparrow\downarrow}\\ n_{\uparrow\downarrow}\end{bmatrix}, \tag{38}\]
where \(p\) and \(n\) denote the proton and neutron fields, respectively. Consider a system of volume \(\mathcal{V}\) containing \(B\) uniformly distributed nucleons. Let us further imagine a scenario in which the system is highly compressed so that the nucleon density \(\rho_{N}=B/\mathcal{V}\) is large. Each nucleon acts as the source for a neutral scalar meson field \(\phi\) with bare mass \(m_{S}\), coupled to the scalar density \(\bar{\psi}\psi\) of the nucleon, and a massive neutral vector meson field \(V_{\mu}\) with bare mass \(m_{V}\), coupled to the nucleon current \(i\bar{\psi}\gamma_{\mu}\psi\). The choice of the above-mentioned degrees of freedom seems justified in view of the large attractive scalar and repulsive vector potentials empirically found in nucleon scattering phenomena. The lagrangian density of such a system can be written using these fields:
\[\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}m_{V}^{2 }V_{\mu}V^{\mu}-\frac{1}{2}\left[\partial_{\mu}\phi\partial^{\mu}\phi+m_{S}^{ 2}\phi^{2}\right]-\bar{\psi}\left[\gamma_{\mu}\left(\partial_{\mu}-ig_{V}V_{ \mu}\right)+\left(M_{N}-g_{S}\phi\right)\right]\psi \tag{39}\]
where
\[F_{\mu\nu}=\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu}. \tag{40}\]
We work with this theory in imaginary time so that \(x_{\mu}\equiv(\vec{x},ix_{0})=(\vec{x},it)\). In the limit where the system is sufficiently compressed, the meson fields become classical and can be approximated to their expectation values:
\[\phi \rightarrow \langle\phi\rangle\ \equiv\phi_{0}, \tag{41}\] \[V_{\mu} \rightarrow \langle V_{\mu}\rangle\equiv i\delta_{\mu 4}V_{0}. \tag{42}\]
Notice here that the spatial part of the vector field vanishes due to the rotational symmetry of a sufficiently large system. An example of such a system is the interior of a neutron star. But for nuclei, even heavy ones such as \({}^{208}\)Pb, such an assumption will not hold in general. Furthermore, the translational symmetry of such a large, uniform medium tells us that the mean values of the meson fields, \(\phi_{0}\) and \(V_{0}\), will be constants. Varying the action with respect to the vector meson field, and using the symmetry consideration that only the fourth component of the nucleon current \(i\bar{\psi}\gamma_{\mu}\psi\) survives because of the \(\delta_{\mu 4}\) in the vector field, we get
\[V_{0}=\frac{g_{V}}{m_{V}^{2}}\rho_{N}. \tag{43}\]
Conservation of nucleon number gives us a constant \(\rho_{N}\), and hence the value of \(V_{0}\), in terms of conserved quantities. Collecting all the findings of this section so far, we can reduce the lagrangian in Eq. (39) to the mean field lagrangian
\[{\cal L}_{MFT}=\frac{1}{2}m_{V}^{2}V_{0}^{2}-\frac{1}{2}m_{S}^{2} \phi_{0}^{2}-\bar{\psi}\left[\gamma_{\mu}\partial^{\mu}+\gamma_{4}g_{V}V_{0}+ M_{N}^{*}\right]\psi, \tag{44}\]
where \(M_{N}^{*}\) is the effective mass of the nucleon:
\[M_{N}^{*}=M_{N}-g_{S}\phi_{0}. \tag{45}\]
The nucleon field can be expanded in quantum field operators, in the Schrödinger picture,
\[\hat{\psi}(\vec{x})=\frac{1}{\sqrt{\cal V}}\sum_{\vec{k},\lambda} \left[u(\vec{k},\lambda)A_{\vec{k},\lambda}e^{i\vec{k}.\vec{x}}+v(-\vec{k}, \lambda)B_{\vec{k},\lambda}e^{-i\vec{k}.\vec{x}}\right] \tag{46}\]
with periodic boundary conditions in volume \({\cal V}\). Here \(u,v\) denote Dirac spinors. Quantization of this theory is achieved by imposing anticommutation relations:
\[\left\{A_{\vec{k},\lambda},A^{\dagger}_{\vec{k}^{\prime},\lambda ^{\prime}}\right\} = \delta_{\vec{k}\vec{k}^{\prime}}\delta_{\lambda\lambda^{\prime}}, \tag{47}\] \[\left\{B_{\vec{k},\lambda},B^{\dagger}_{\vec{k}^{\prime},\lambda ^{\prime}}\right\} = \delta_{\vec{k}\vec{k}^{\prime}}\delta_{\lambda\lambda^{\prime}}. \tag{48}\]
The corresponding Hamiltonian density is given by
\[{\cal H}_{MFT} = \left(\frac{\partial{\cal L}_{MFT}}{\partial\dot{\psi}}\right) \dot{\psi}-{\cal L}_{MFT} \tag{49}\] \[= \frac{1}{2}m_{S}^{2}\phi_{0}^{2}-\frac{1}{2}m_{V}^{2}V_{0}^{2}+g_ {V}V_{0}\rho_{N}+\frac{1}{{\cal V}}\sum_{\vec{k},\lambda}(\vec{k}^{2}+{M_{N}^{ *}}^{2})^{1/2}\left(A^{\dagger}_{\vec{k}\lambda}A_{\vec{k}\lambda}+B^{\dagger }_{\vec{k}\lambda}B_{\vec{k}\lambda}\right). \tag{50}\]
For uniform nuclear matter in its ground state, the nucleons can be approximated as a free Fermi gas (a nucleon moves freely throughout the volume in a mean potential generated by all the other nucleons in the system) that fills all momentum states up to the Fermi level \(k_{F}\) with a degeneracy of \(\gamma\). Considering the degeneracy in the spin and isospin degrees of freedom of the nucleon field for each momentum state, \(\gamma=4\). In the above expression, \(\left(A^{\dagger}_{\vec{k}\lambda}A_{\vec{k}\lambda}+B^{\dagger}_{\vec{k}\lambda}B_{\vec{k}\lambda}\right)/{\cal V}\) is the number density, which we can replace with \(\frac{\gamma}{(2\pi)^{3}}\int_{0}^{k_{F}}d^{3}\vec{k}\) for a large enough system. We additionally use Eq. (45) to finally write the expression for the energy density of the medium
\[{\cal E}(\rho_{N},\phi_{0}) = \frac{1}{2}m_{S}^{2}\phi_{0}^{2}-\frac{1}{2}m_{V}^{2}V_{0}^{2}+g_ {V}V_{0}\rho_{N}+\frac{\gamma}{(2\pi)^{3}}\int_{0}^{k_{F}}d^{3}\vec{k}\ (\vec{k}^{2}+{M_{N}^{*}}^{2})^{1/2}. \tag{51}\]
## 4 Results
Comparing the interaction terms of the lagrangian in Eq. (39) with the pole in the two-point correlator in Eq. (13), we can write
\[\phi_{0}=\Sigma_{S}/g_{S}\,\qquad V_{0}=\Sigma_{V}/g_{V}. \tag{52}\]
Using Eqs. (30) and (52) in Eq. (51), we find the average energy per nucleon,
\[\frac{{\cal E}}{\rho_{N}} = \frac{M_{N}}{2C_{S}^{2}}\frac{(\Sigma_{S}/M_{N})^{2}}{(\gamma/6 \pi^{2})x_{\rm sat}^{3}}+\frac{1}{2}\Sigma_{V}+\frac{\gamma}{(2\pi)^{3}(\gamma/ 6\pi^{2})x_{\rm sat}^{3}M_{N}^{3}}\int_{0}^{k_{F}}d^{3}\vec{k}\ (\vec{k}^{2}+{M_{N}^{*}}^{2})^{1/2}. \tag{53}\]
where we write \(k/M_{N}\) as \(x\) and \(k_{F}/M_{N}\) as \(x_{\rm sat}\). \(C_{S}^{2}\) and \(C_{V}^{2}\) are defined as:
\[C_{S}^{2}=g_{s}^{2}\left(\frac{M_{N}^{2}}{m_{S}^{2}}\right),\qquad C_{V}^{2}=g_ {V}^{2}\left(\frac{M_{N}^{2}}{m_{V}^{2}}\right). \tag{54}\]
These are the two parameters of the relativistic mean field theory of nuclear matter, which will be determined in this section from experimentally accessible properties such as the binding energy and density of uniform nuclear matter. We further write \(M_{N}^{*}/M_{N}\) as \(y\) and use Eq. (32) with \(\gamma=4\) to express the binding energy per nucleon as,
\[\frac{{\cal E}}{\rho_{N}}-M_{N} = \frac{3\pi^{2}M_{N}(1-y_{\rm sat})^{2}}{4x_{\rm sat}^{3}C_{S}^{2} }+\frac{8M_{N}x_{\rm sat}^{3}y_{\rm sat}}{3(\pi^{2}r-6x_{\rm sat}^{3})}+\frac{ 3M_{N}}{x_{\rm sat}^{3}}\int_{0}^{x_{\rm sat}}dx\ x^{2}\ \sqrt{x^{2}+y^{2}}-M_{N}. \tag{55}\]
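A self-contained numerical sketch of this fit (assuming numpy/scipy; \(r\) and \(C_{S}^{2}\) are the fitted values reported just below, and \(M_{N}=0.939\) GeV is assumed). It repeats the Eq. (37) solver from the earlier sketch so that it runs on its own:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

r, M_N, C_S2 = 0.029, 0.939, 235.62   # fitted r, C_S^2 (Sec. 4); M_N in GeV (assumed)

def eq37(y, x):
    """LHS of Eq. (37); its root in y is M_N*/M_N at x = k_F/M_N."""
    z = r - 6 * x**3 / np.pi**2                       # Eq. (36)
    bracket = np.sqrt(x**2 + y**2) + 8 * x**3 * y / (3 * np.pi**2 * z)
    return z - r * y + x**3 * y * bracket / (6 * np.pi**4 * r)

def binding_energy(x_sat):
    """Eq. (55) in GeV; y_sat solves Eq. (37) and is held fixed in the integral."""
    y = brentq(eq37, 1e-4, 1.5, args=(x_sat,))
    t1 = 3 * np.pi**2 * M_N * (1 - y)**2 / (4 * x_sat**3 * C_S2)
    t2 = 8 * M_N * x_sat**3 * y / (3 * (np.pi**2 * r - 6 * x_sat**3))
    t3 = 3 * M_N / x_sat**3 * quad(lambda t: t**2 * np.sqrt(t**2 + y**2), 0, x_sat)[0]
    return t1 + t2 + t3 - M_N

xs = np.linspace(0.15, 0.34, 120)
be = [binding_energy(x) for x in xs]
i = int(np.argmin(be))
print(f"minimum {be[i]*1e3:.1f} MeV at k_F = {xs[i]*M_N/0.19733:.2f} fm^-1")
```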
Let us take a moment here to understand how quark-hadron duality is exploited in our calculations so far. In Sec. 2, we derived the OPE of the two-point correlator of the local nucleon current in terms of the simplest quark condensates, of dimension three. In our calculations, the density-dependent quark condensates were taken into account along with the vacuum condensate; the density-dependent pieces vanish in the absence of a surrounding nuclear medium. Assuming that the OPE has a pole at the effective nucleon mass, we could estimate the in-medium scalar and vector self-energies. We use these self-energies in the theory of Sec. 3.4, written in terms of hadron fields, to calculate the binding energy per nucleon in a sufficiently large nuclear medium (so that finite-size effects can be ignored) and extract the coupling parameters of the relativistic mean field theory lagrangian in Eq. (39). Values of these coupling parameters could also be estimated by minimizing the energy of the system (i.e., the Hamiltonian in Eq. (49); see [54]), but in this work, minimization of the energy is achieved naturally from the trade-off between the average individual nucleon energy and the interaction energy arising from the in-medium scalar and vector couplings (given in Eqs. (21), (22), and (23)), which depend on the light quark condensates to lowest order.
We evaluate Eq. (37) and Eq. (55) for different values of \(x_{\rm sat}=k_{F}/M_{N}\) to generate the saturation curve for infinite nuclear matter. We find that the saturation curve, \({\cal E}/\rho_{N}-M_{N}\), attains its minimum value of \(-15.6\ {\rm MeV}\) at the nuclear matter saturation density, \(k_{F}=1.33\ {\rm fm}^{-1}\), for:
* \(r=0.029\) giving us \(\langle\bar{q}q\rangle_{\rm vac}=-\) (0.288 \({\rm GeV})^{3}\) from Eq. (33),
* \(C_{S}^{2}=235.62\),
* \(C_{V}^{2}=142.59\).
Using the above values of \(r\), \(C_{S}^{2}\) and \(C_{V}^{2}\), we obtain the plots for the saturation curve of the infinite nuclear medium (Fig. 1), the effective mass of the nucleon (Fig. 2), and the in-medium vector self-energy (Fig. 3), as provided below. Here the value of the effective nucleon mass ratio, \(M_{N}^{*}/M_{N}\), at the nuclear matter saturation density is 0.608 (corresponding to \(M_{N}^{*}=0.570\ {\rm GeV}\)). The value of the in-medium vector self-energy at the saturation density, \(\Sigma_{V}(x_{\rm sat})\), is found to be around 0.178 GeV. We use these results in Sec. 5.1 and Sec. 5.2 to derive and discuss the critical mass of a neutron star.
The values of the coupling parameters are comparable to the values found by Walecka:
* \(C_{S}^{2}=266.9\), \(C_{V}^{2}=195.7\) in ref. [11],
* \(C_{S}^{2}=357.4\), \(C_{V}^{2}=273.8\) in ref. [12].
The value of the light quark condensate obtained in this work is comparable to the values obtained in [55, 56], among others. We should also note that Furnstahl _et al._ used a Monte Carlo approach to generate the condensate values used in the Sum Rules, obtaining \(M_{N}^{*}=0.64^{+0.13}_{-0.09}\ {\rm GeV}\) and \(\Sigma_{V}=0.29^{+0.06}_{-0.10}\ {\rm GeV}\) at the saturation density [57]. These results had large uncertainties, due to which a determination of the binding energy of the order of \(-16\) MeV was beyond the scope of that work.
Since we have considered only the operators up to mass dimension 3 to produce this curve, the behavior of the curve near the strong coupling domain (with matter density \(\gg x_{\rm sat}\)) obtained from this work may not be accurate. Nevertheless, we can incorporate operators of higher dimension in the OPE to predict the behavior in that domain.
## 5 Obtaining the Critical Mass of Neutron Star and the Mass Gap
During a supernova explosion, a star can acquire enough energy that its inner core collapses into a stable configuration consisting of closely packed nucleons. This new configuration has a very small radius and extremely high density, and is called a neutron star [58, 59, 60, 61]. The mass and size of a neutron star depend critically on the equation of state of its constituent matter, \(P\equiv P({\cal E})\), obeying the general relativistic equations of hydrostatic equilibrium (here \(P({\cal E})\) denotes the pressure profile for a given density profile within the stellar medium). In general relativity, there exists an upper limit on the mass of a star made of incompressible matter, beyond which the internal pressure needed for the hydrostatic equilibrium of the star becomes infinite. It can be shown that, for a stable, non-rotating, cold neutron star with incompressible, constant-density nuclear matter treated in the general relativistic framework, the maximum possible mass is definitely less than \(5M_{\odot}\) [62]. The interaction between nucleons plays an important role in determining the maximum mass of a neutron star. Inside a high-density nuclear medium, the degenerate nucleons have enough energy to produce pions and hyperons. In such a scenario, the interior of a neutron star becomes more compressible due to a reduction in the effective short-range repulsion between nucleons. This effect results in an increase in the maximum mass limit of a neutron star.
The critical mass of a neutron star can depend on a number of other parameters as well, such as differential rotation [63], anisotropic pressure [64], _etc._ But the greatest uncertainty in the estimate of the critical mass comes from the fact that the equation of state within a neutron star is poorly known, as the strong interactions in the constituent nuclear medium of a typical neutron star are expected to lie in the non-perturbative regime [65]. Rhoades and Ruffini [66] considered the most extreme equation of state subject to three conditions: (a) causality, (b) Le Chatelier's principle, and (c) the general relativistic equations of hydrostatic equilibrium, to conclude that the critical mass of a neutron star with density always greater than \(4.6\times 10^{14}\) g/cm\({}^{3}\) cannot be larger than \(3.2M_{\odot}\). Furthermore, in the presence of non-zero angular momentum, this critical mass will be affected by a factor smaller than 1.5. Strobel and Weigel suggested [67] that the minimum possible mass of a neutron star depends on the earliest stage of its evolution, specifically the deleptonization period. For a cold, non-rotating neutron star, they estimated the minimum mass to be \(\sim(0.88-1.28)M_{\odot}\) and the maximum mass to be \(\sim(1.70-2.66)M_{\odot}\). Strobel and Weigel further showed that these mass limits are negligibly affected by the presence of hyperons in the nuclear medium.
Figure 3: Prediction for in-medium vector self-energy (\(\Sigma_{V}\)) as a function of normalized Fermi wavenumber (\(k_{F}/M_{N}\)). At the saturation density (\(k_{F}=1.33\) fm\({}^{-1}\)), we obtain \(\Sigma_{V}=0.178\) GeV.
### Solving Tolman-Oppenheimer-Volkoff Equations
Making the assumptions of spherical symmetry, zero velocities (_i.e._, a static spacetime) and an ideal fluid model, Tolman, Oppenheimer and Volkoff (TOV) derived the equations of hydrostatic equilibrium for a neutron star in a General Relativistic (GR) framework [68]. Here we write the first two TOV equations:
\[\frac{dP}{dr} = -\frac{G\big{[}{\cal E}(r)+P(r)\big{]}\big{[}m(r)+\frac{4\pi r^{3} P(r)}{c^{2}}\big{]}}{c^{2}r^{2}\big{[}1-\frac{2Gm(r)}{c^{2}r}\big{]}}, \tag{56}\] \[\frac{dm}{dr} = 4\pi r^{2}\rho(r)=\frac{4\pi r^{2}{\cal E}(r)}{c^{2}}, \tag{57}\]
where \(P(r)\), \(\rho(r)\) and \({\cal E}(r)\) are the stellar pressure, mass density and energy density at radius \(r\), and \(m(r)\) is the total mass contained within \(r\). There is considerable uncertainty in the equation of state at the densities of the nuclear medium that exist in the neutron star interior. In such a scenario, we can constrain the equation of state with assumptions and general principles to derive the critical mass. Here we assume a uniform energy density of the neutron star interior, which is incompressible at any finite pressure. With this assumption, solving Eq. (57) becomes trivial. With the boundary conditions \(m(r=0)=0\) and \({\cal E}(r)={\cal E}_{0}\) for all \(r<R\) (\(R\) is the neutron star radius), we get
\[m(r)=\frac{4\pi{\cal E}_{0}r^{3}}{3c^{2}}. \tag{58}\]
In the next section, we will plug the value of the nuclear saturation energy density into the above equation to get \(m(r)\). Using the boundary conditions (\(r=0,\,P=P_{0}\)) and (\(r=R,\,P=0\)), we solve Eq. (56) to get:
\[\ln\left[\frac{P_{0}+{\cal E}_{0}}{3P_{0}+{\cal E}_{0}}\right]= \frac{1}{2}\ln\left[1-\left(\frac{8\pi G{\cal E}_{0}}{3c^{4}}\right)R^{2} \right]. \tag{59}\]
It is evident from the above that the critical mass of a neutron star depends strongly on the equation of state of its interior. We now provide an order-of-magnitude calculation to examine the validity of the uniform-density assumption. Using the observation that the maximum angular velocity of a neutron star is \(\omega_{max}\approx 7\times 10^{3}\ {\rm s}^{-1}\) [69], the angular momentum of a nucleon in the outermost shell is given by,
\[L=m_{N}^{*}\omega_{max}R^{2}, \tag{60}\]
where \(m_{N}^{*}\) is the effective mass of a nucleon and \(R\) is the neutron star radius. Hypothesizing that the nucleons within a neutron star occupy various quantized orbits, we use Bohr's quantization rule to get:
\[L=m_{N}^{*}\omega_{max}R^{2}\sim n\hbar. \tag{61}\]
Assuming \(m_{N}^{*}\approx 922\) MeV and \(R=10^{4}\) m, we get \(n\sim 10^{19}\). A uniformly dense interior implies that the neutron star contains around \(n^{3}\sim 10^{57}\) nucleons, giving a critical mass of \({\cal O}[M_{\odot}]\). Therefore, a uniform density of the neutron star interior seems a reasonable assumption to begin with for the bulk of the neutron star, before we explore the possibility of various equations of state [70, 71, 72].
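The arithmetic of this estimate is easy to reproduce (a quick check with standard SI constants):

```python
hbar = 1.055e-34                           # J s
m_eff = 922 * 1.783e-30                    # 922 MeV/c^2 in kg
omega_max, R = 7e3, 1e4                    # s^-1, m
n = m_eff * omega_max * R**2 / hbar        # Bohr-quantized orbit number, Eq. (61)
print(f"n ~ {n:.1e}, n^3 ~ {n**3:.1e}")    # -> ~1e19 and ~1e57
print(f"M ~ {n**3 * 1.675e-27 / 1.989e30:.1f} Msun")   # O(1) solar mass
```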
### Deconfinement, Rigid Vector Repulsion, and Black Hole Regimes
As the density of the nuclear medium increases, the interaction of the vector field between the nucleons is enhanced. It can be argued in such a scenario that the equation of state, \(P_{0}={\cal E}_{0}\), is realized asymptotically [73] as an essential feature of a relativistic field theory whose lagrangian contains a massive vector field. In this rigid vector repulsion (RVR) regime, the speed of sound in the interior of the neutron star approaches the causal limit (\(c_{s}=c\)), giving us the critical radius of the neutron star (from Eq. (59)),
\[R_{\rm RVR}=\left[\frac{9c^{4}}{32\pi G{\cal E}_{0}}\right]^{1/ 2}\approx 21.5\ {\rm km}, \tag{62}\]
corresponding to the maximum possible mass,
\[M_{NS}^{RVR}=\left[\frac{81c^{8}}{2048\pi G^{3}{\cal E}_{0}}\right]^{1/2}=5.47M_{ \odot}\, \tag{63}\]
from Eq. (58). In this regime, the large vector repulsion between nucleons allows a neutron star to be more massive before its internal pressure becomes infinite, pushing the maximum mass of a neutron star to \(5.47M_{\odot}\). This is the hard limit on the mass of a neutron star, beyond which it becomes a black hole.
However, the interior of a cold neutron star possibly goes through a phase transition well before its density hits the rigid vector repulsion regime [74]. This is attributed to the fact that chiral symmetry gets restored in the medium well before the density approaches the rigid vector repulsion regime. Quantitatively, this is understood from the behavior of the most dominant order parameter of chiral symmetry breaking, _i.e._, the light quark condensate. The value of this condensate decreases in the presence of a nuclear medium. Where the medium has sufficiently high density, the value of the condensate approaches zero, signifying chiral symmetry restoration. This is the deconfinement regime, where QCD matter transits from the hadronic to the QGP phase.
From Eq. (30), the saturation density of nuclear matter is estimated to be
\[\rho_{N}^{\rm sat}=0.159\ {\rm fm}^{-3}\, \tag{64}\]
while, From Eq. (26), we can write for restoration of chiral symmetry:
\[\langle\bar{q}q\rangle_{\rho^{\prime}_{N}} \approx \langle\bar{q}q\rangle_{\rm vac}+\frac{\rho^{\prime}_{N}\sigma_{ N}}{2m_{q}}=0, \tag{65}\] \[\rho^{\prime}_{N} = 0.345\ {\rm fm}^{-3}. \tag{66}\]
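Both densities follow from elementary arithmetic; a quick check (assuming \(\hbar c=0.19733\) GeV fm for unit conversion):

```python
import numpy as np

hbarc = 0.19733                                   # GeV fm
rho_sat = 4 * 1.33**3 / (6 * np.pi**2)            # Eq. (30), gamma = 4, in fm^-3
qq_vac = -(0.288)**3                              # GeV^3, from Sec. 4
m_q, sigma_N = 0.0025, 0.045                      # GeV, as in Sec. 3.3
rho_chiral = -2 * m_q * qq_vac / sigma_N          # Eq. (65) solved for rho'_N, GeV^3
print(f"rho_sat    = {rho_sat:.3f} fm^-3")        # -> 0.159, Eq. (64)
print(f"rho_chiral = {rho_chiral / hbarc**3:.3f} fm^-3")  # -> 0.345, Eq. (66)
```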
Eqs. (64) and (66) together imply that the interior of a neutron star undergoes a phase transition at a baryon density approximately twice the saturation density (a similar result was found in [75]). This is the relativistic regime, where the asymptotic equation of state is given by \(P_{0}={\cal E}_{0}/3\) and the speed of sound by \(c_{s}=c/\sqrt{3}\). The critical radius of the neutron star in this limit is (from Eq. (59)),
\[R_{\rm rel}=\left[\frac{5c^{4}}{24\pi G{\cal E}_{0}}\right]^{1/2}\approx 18.5 \ {\rm km}. \tag{67}\]
In this regime, the maximum mass of a neutron star is given by,
\[M_{NS}^{rel}=\left[\frac{125c^{8}}{7776\pi G^{3}{\cal E}_{0}}\right]^{1/2}=3.4 8M_{\odot}\, \tag{68}\]
from Eq. (58). Once the neutron star core enters the deconfinement phase, its interior becomes more compressible, and for a sufficiently dense QCD matter core, the speed of sound asymptotically hits the conformal limit, \(c_{s}=c/\sqrt{3}\), where the trace of its energy-momentum tensor (\(3P-{\cal E}\)) vanishes and the neutron star reaches its maximum mass, \(M_{NS}^{conf}=3.48M_{\odot}\). This sets a soft limit on the neutron star mass, because it is unlikely that a neutron star core can evade deconfinement and enter the RVR phase.
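The critical radii and masses in Eqs. (62)–(63) and (67)–(68) follow from Eqs. (58)–(59) once \({\cal E}_{0}\) is fixed. A quick numerical check, assuming \({\cal E}_{0}\approx\rho_{N}^{\rm sat}\,(M_{N}-15.6\ {\rm MeV})\approx 0.147\ {\rm GeV/fm^{3}}\) (inferred from the saturation values of Sec. 4, not stated explicitly in the text):

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30         # SI units
E0 = 0.159 * (0.939 - 0.0156) * 1.602e35          # J/m^3 (1 GeV/fm^3 = 1.602e35 J/m^3)

# Eq. (59) with P0 = E0/3 gives R^2 = 5c^4/(24 pi G E0); P0 = E0 gives 9/32.
for name, coeff in [("conformal, P0=E0/3", 5 / 24), ("causal, P0=E0", 9 / 32)]:
    R = np.sqrt(coeff * c**4 / (np.pi * G * E0))
    M = 4 * np.pi * E0 * R**3 / (3 * c**2)        # Eq. (58)
    print(f"{name}: R = {R/1e3:.1f} km, M = {M/Msun:.2f} Msun")
# prints values close to Eqs. (67)-(68) and (62)-(63): ~18.5 km / 3.48 Msun
# and ~21.5 km / 5.5 Msun, up to rounding in the assumed E0
```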
We have assumed in our calculation of the maximum neutron star mass in all three regimes that the density of the neutron star interior is equal to the saturation density. A neutron star with a highly compressible interior may have a density much higher than the saturation density. As per Eqs. (58) and (62), the maximum mass is inversely proportional to the square root of the medium density, further lowering the critical mass of a neutron star in all three regimes. Therefore, it should be rare to find an isolated neutron star with a mass greater than \(3.48M_{\odot}\) and impossible to find a neutron star with a mass beyond \(5.47M_{\odot}\). Additionally, in the absence of any conceivable mechanism for the speed of sound to reach the causal limit within the hadronic or QGP core of a star with a mass less than the critical value derived in Eq. (63), it seems unlikely
that a black hole with a mass smaller than \(5.47M_{\odot}\) can be formed through a supernova explosion. Hence, there should plausibly exist a gap, \(\Delta\in[3.48M_{\odot},5.47M_{\odot}]\), in the stellar mass spectrum, in which an object is not heavy enough to have evolved into a black hole, yet too heavy for a neutron star to exist as a stable configuration with a quark-matter core.
This hypothesis is consistent with the observations of supermassive neutron star/light black hole candidates and their masses, as listed in [76]. Advanced LIGO and Advanced Virgo recently listed 35 binary coalescence candidates in their third Gravitational Wave Transient Catalog [77], out of which only four binaries are found to involve compact objects whose estimated masses lie statistically within \([3.48M_{\odot},5.47M_{\odot}]\), while their expected values lie beyond the gap proposed in our work:
* GW200115 042309: involves an object \(5.9^{+2.0}_{-2.5}\)\(M_{\odot}\).
* GW191113 071753: involves an object \(5.9^{+4.4}_{-1.3}\)\(M_{\odot}\).
* GW200316 215756: involves an object \(7.8^{+1.9}_{-2.9}\)\(M_{\odot}\).
* GW200322 091133: involves an object \(14.0^{+16.8}_{-8.7}\)\(M_{\odot}\).
Recently, a compact binary system involving a \(22.2-24.3\)\(M_{\odot}\) black hole and a compact object with a mass of \(2.50-2.67\)\(M_{\odot}\) has been reported from the gravitational wave signal GW190814 [78], with the speculation that the latter component is either a light black hole or the heaviest neutron star observed to date. The heaviest confirmed neutron star so far is PSR J0952-0607, with a mass of \(2.35\pm 0.17M_{\odot}\) [79]. Ref. [80] presents a combined photometric-astrometric analysis of five neutron star/stellar mass black hole candidates identified in gravitational microlensing surveys [81, 82, 83], and one candidate, MOA-2011-BLG-191/OGLE-2011-BLG-0462, is shown to have a mass of \(1.6-4.4\)\(M_{\odot}\). This claim is contested, however, in [84], where the lens mass obtained is \(7.1\pm 1.3\)\(M_{\odot}\), well beyond the upper end of our proposed limit. The latter claim is further supported by [85], which finds systematic errors in the analysis of [80] and concludes that the lens mass of OGLE-2011-BLG-0462 should be \(7.88\pm 0.82\)\(M_{\odot}\). Thompson _et al._ recently reported the observation [86] of an unseen companion of the red giant 2MASS J05215658+4359220 with a mass of \(3.3^{+2.8}_{-0.7}\)\(M_{\odot}\). But van den Heuvel and Tauris argued in [87] that the unseen candidate can be a close binary of two main-sequence stars, primarily because no X-ray emission has been detected from the candidate so far. For recent reviews on the neutron star-black hole mass gap, see [88, 89, 90].
## 6 Conclusion
We started with an OPE of the two-point correlator of the local nucleon current in terms of all the operators up to mass dimension three. We exploited quark-hadron duality to compare the OPE with the hadron phenomenological spectrum, calculating the scalar and vector self-energies of a nucleon as functions of the density of the surrounding infinite nuclear medium. We provided the plot of the effective mass of the in-medium nucleon as a function of Fermi momentum. We used the self-energy terms in the relativistic mean field lagrangian proposed by Walecka to generate the nuclear saturation curve. Fitting the minimum of the saturation curve to the nuclear binding energy of \(-15.6\) MeV at the Fermi momentum \(k_{F}=1.33\) fm\({}^{-1}\), we obtained the value of the light quark condensate, \(\langle\bar{q}q\rangle=-(0.288\) GeV\()^{3}\), and the coupling parameters of the Walecka lagrangian: \(C_{S}^{2}=235.62\), \(C_{V}^{2}=142.59\). We used the Tolman-Oppenheimer-Volkoff equations to calculate the critical mass of a neutron star for a uniform nuclear saturation energy density in its interior. From the restoration of chiral symmetry, and the conformal and causal limits of the speed of sound in a high-density nuclear medium, we argued that it would be rare to find an isolated neutron star with a mass greater than \(3.48M_{\odot}\) and impossible to find a neutron star with a mass greater than \(5.47M_{\odot}\) (a neutron star turns into a black hole beyond this mass limit), implying a plausible gap in the neutron star-black hole mass spectrum expressed in terms of universal constants,
\[\Delta\in\left[\sqrt{\frac{125c^{8}}{7776\pi G^{3}{\cal E}_{0}}},\sqrt{\frac{81c^{8}}{2048\pi G^{3}{\cal E}_{0}}}\right]=[3.48M_{\odot},5.47M_{\odot}]\]
where \({\cal E}_{0}\) is the saturation energy density of infinite nuclear matter. A natural extension of this framework would include the gluon condensate and other higher-order condensates in the OPE, and consider various equations of state to analyze the properties of the neutron star interior in more detail.
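As a quick back-of-the-envelope check of the bounds above (ours, not part of the paper), one can plug the universal constants into the two square roots. A minimal Python sketch, assuming a representative saturation energy density of \(0.15\) GeV/fm\({}^{3}\) (the precise value fitted in the paper may differ slightly):

```python
import math

c = 2.998e8        # speed of light [m/s]
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
Msun = 1.989e30    # solar mass [kg]
E0 = 0.15 * 1.602e-10 / 1e-45   # assumed 0.15 GeV/fm^3, converted to J/m^3

lower = math.sqrt(125 * c**8 / (7776 * math.pi * G**3 * E0)) / Msun
upper = math.sqrt(81 * c**8 / (2048 * math.pi * G**3 * E0)) / Msun
print(f"mass gap ~ [{lower:.2f}, {upper:.2f}] Msun")  # ~ [3.4, 5.4] Msun
```

The output reproduces the quoted \([3.48M_{\odot},5.47M_{\odot}]\) to within a few percent, the residual difference coming from the assumed value of \({\cal E}_{0}\).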
## 7 Acknowledgements
Author D.D. acknowledges the facilities of Saha Institute of Nuclear Physics, Kolkata, India. Author L.S.K. acknowledges support from the P25 group at Los Alamos National Laboratory.
|
2305.03280 | Signless Laplacian spectral radius of graphs without short cycles or
long cycles | The signless Laplacian spectral radius of a graph $G$, denoted by $q(G)$, is
the largest eigenvalue of its signless Laplacian matrix. In this paper, we
investigate extremal signless Laplacian spectral radius for graphs without
short cycles or long cycles. Let $\mathcal{G}(m,g)$ be the family of graphs on
$m$ edges with girth $g$ and $\mathcal{H}(m,c)$ be the family of graphs on $m$
edges with circumference $c$. More precisely, we obtain the unique extremal
graph with maximal $q(G)$ in $\mathcal{G}(m,g)$ and $\mathcal{H}(m,c)$,
respectively. | Wenwen Chen, Bing Wang, Mingqing Zhai | 2023-05-05T04:46:08Z | http://arxiv.org/abs/2305.03280v1 | # Signless Laplacian spectral radius of graphs without short cycles or long cycles 1
Footnote 1: Supported by the National Natural Science Foundation of China (Nos. 12171066 and 11871222), Anhui Provincial Natural Science Foundation (Nos. 2108085MA13 and KJ2020B05).
**Wenwen Chen, Bing Wang, Mingqing Zhai**
School of Mathematics and Finance, Chuzhou University, Anhui, Chuzhou, 239012, China
**Abstract** The signless Laplacian spectral radius of a graph \(G\), denoted by \(q(G)\), is the largest eigenvalue of its signless Laplacian matrix. In this paper, we investigate extremal signless Laplacian spectral radius for graphs without short cycles or long cycles. Let \(\mathcal{G}(m,g)\) be the family of graphs on \(m\) edges with girth \(g\) and \(\mathcal{H}(m,c)\) be the family of graphs on \(m\) edges with circumference \(c\). More precisely, we obtain the unique extremal graph with maximal \(q(G)\) in \(\mathcal{G}(m,g)\) and \(\mathcal{H}(m,c)\), respectively.
**Keywords:** Signless Laplacian spectral radius; Extremal graph; Girth; Circumference
**AMS Classification:** 05C50; 05C35
## 1 Introduction
All graphs considered in this paper are simple, undirected and without isolated vertices. Let \(G\) be a graph with vertex set \(V(G)\) and edge set \(E(G)\). The _neighborhood_ of a vertex \(u\in V(G)\) is denoted by \(N_{G}(u)\). Let \(N_{G}[u]:=N_{G}(u)\cup\{u\}\), which is called the _closed neighborhood_ of \(u\). As usual, \(d_{G}(u)\) is the _degree_ of a vertex \(u\) and \(\Delta(G)\) is the _maximal degree_ of \(G\). The _average 2-degree_ of a vertex \(u\) is defined as \(m_{G}(u)=\frac{1}{d_{G}(u)}\sum_{v\in N_{G}(u)}d_{G}(v)\). We use \(A(G)\), \(D(G)\) and \(Q(G)=A(G)+D(G)\) to denote the _adjacency matrix_, _degree diagonal matrix_ and _signless Laplacian matrix_ of \(G\), respectively. The _spectral radius_\(\rho(G)\) and the _signless Laplacian spectral radius_\(q(G)\) are the largest moduli of eigenvalues of \(A(G)\) and \(Q(G)\), respectively. From Perron-Frobenius theorem, there exists a non-negative unit eigenvector corresponding to \(q(G)\), which is called the _Perron vector_ of \(Q(G)\). Moreover, the Perron vector of \(Q(G)\) is a positive vector for a connected graph \(G\).
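For concreteness, \(q(G)\) is straightforward to compute numerically. The sketch below is our own illustration (not part of the paper): it builds \(Q(G)=D(G)+A(G)\) from a 0/1 adjacency matrix and returns its largest eigenvalue; the example recovers \(q(C_{4})=4\), consistent with the fact \(q(C_{g})=4\) used later in the proofs.

```python
import numpy as np

def signless_laplacian_radius(adj):
    """Largest eigenvalue q(G) of Q(G) = D(G) + A(G), for a graph
    given by its 0/1 adjacency matrix."""
    A = np.asarray(adj, dtype=float)
    Q = np.diag(A.sum(axis=1)) + A
    return np.linalg.eigvalsh(Q).max()      # Q is symmetric

# Example: the 4-cycle C4, for which q(C4) = 4 (indeed q(C_g) = 4 for all g).
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(signless_laplacian_radius(C4))        # 4.0
```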
A graph \(G\) is said to be \(H\)_-free_ if \(G\) does not contain \(H\) as a subgraph. A classic problem in extremal graph theory, known as Turan's problem, asks: what is the maximum number of edges in an \(H\)-free graph of order \(n\)? Nikiforov [21] proposed a spectral version of Turan's problem as follows: what is the maximum spectral radius of an \(H\)-free graph of order \(n\)? This spectral Turan-type problem has attracted much attention in the past decades
(see three surveys [4, 16, 21] and some recent results [3, 5, 6, 23]). In contrast, the spectral Turan-type problem for graphs with given size can be traced back to Nosal's [22] result in 1970, which states that if \(G\) is \(C_{3}\)-free then \(\rho(G)\leq\sqrt{m}\). This result was extended by Nikiforov, who proved in [19] that if \(G\) is \(K_{\omega+1}\)-free then \(\rho(G)\leq\sqrt{2m(1-1/\omega)}\), and completely characterized the equality in [20]. In 2007, Bollobas and Nikiforov [2] posed a stronger conjecture: if \(G\) is \(K_{\omega+1}\)-free then \(\lambda_{1}^{2}+\lambda_{2}^{2}\leq 2m(1-1/\omega)\), where \(\lambda_{1}\) and \(\lambda_{2}\) are the first two largest eigenvalues of \(A(G)\). Lin, Ning and Wu [18] confirmed the Bollobas-Nikiforov conjecture for \(\omega=2\). Li, Sun and Yu [15] generalized this result by giving an upper bound of \(\lambda_{1}^{2k}+\lambda_{2}^{2k}\) for \(\{C_{3},C_{5},\ldots,C_{2k+1}\}\)-free graphs. Elphick, Linz and Wocjan [10] conjectured that \(\lambda_{1}^{2}+\lambda_{2}^{2}+\cdots+\lambda_{l}^{2}\leq 2m(1-1/\omega)\) for \(K_{\omega+1}\)-free graphs, where \(l=\min(n^{+},\omega)\) and \(n^{+}\) is the positive inertia index.
Recently, Gao and Hou [13] characterized the extremal graphs with maximal \(\rho(G)\) over all graphs of order \(n\) without cycles of length at least \(k\). Very recently, Li, Sun, Yu [15] and Lin, Guo [17] independently determined the extremal graphs with maximal \(\rho(G)\) over all non-bipartite graphs of order \(n\) without odd cycles of length at most \(2k-1\). In this paper, we consider a variation of above problems by replacing \(\rho(G)\) with \(q(G)\) and order with size, that is, what is the maximum \(q(G)\) over all graphs of fixed size without short cycles or long cycles?
The _girth_ and _circumference_ of a graph \(G\) are the minimum and maximum lengths of cycles in \(G\), respectively. We now introduce two families of graphs. For two positive integers \(g,c\) with \(\min\{g,c\}\geq 3\), let \(\mathcal{G}(m,g)\) be the set of graphs on \(m\) edges with girth \(g\), and \(\mathcal{H}(m,c)\) be the set of graphs on \(m\) edges with circumference \(c\). In this paper, we obtain the following two results.
**Theorem 1.1**.: _Let \(G_{m,g}\) be the graph obtained from a cycle \(C_{g}\) by linking a vertex of the cycle to \(m-g\) isolated vertices. Then \(q(G)\leq q(G_{m,g})\) for every \(G\in\mathcal{G}(m,g)\), with equality if and only if \(G\cong G_{m,g}\)._
**Theorem 1.2**.: _Let \(H_{m,c}\) be the graph obtained from a cycle \(C_{c}\) by linking a vertex of the cycle to \(c-3\) vertices of \(C_{c}\) and \(m-2c+3\) isolated vertices. If \(m\geq 3c-4\), then \(q(H)\leq q(H_{m,c})\) for every \(H\in\mathcal{H}(m,c)\), with equality if and only if \(H\cong H_{m,c}\)._
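Both extremal graphs are easy to construct, which makes small cases of Theorems 1.1 and 1.2 simple to check numerically. The following sketch is our own illustration (helper names are hypothetical); for instance, it reproduces \(q(G_{6,4})=3+\sqrt{5}\approx 5.236\), a value computed explicitly in Section 3.

```python
import numpy as np

def q_radius(A):
    A = np.asarray(A, dtype=float)
    return np.linalg.eigvalsh(np.diag(A.sum(axis=1)) + A).max()

def G_mg(m, g):
    """G_{m,g}: a g-cycle with m-g pendent edges attached at cycle vertex 0."""
    A = np.zeros((m, m))                     # g cycle + (m-g) pendent vertices
    for i in range(g):
        A[i, (i + 1) % g] = A[(i + 1) % g, i] = 1
    for j in range(g, m):
        A[0, j] = A[j, 0] = 1
    return A

def H_mc(m, c):
    """H_{m,c}: a c-cycle, with vertex 0 joined to the c-3 cycle vertices it
    is not yet adjacent to, plus m-2c+3 pendent edges at vertex 0."""
    n = m - c + 3
    A = np.zeros((n, n))
    for i in range(c):
        A[i, (i + 1) % c] = A[(i + 1) % c, i] = 1
    for j in range(2, c - 1):                # the c-3 chords from vertex 0
        A[0, j] = A[j, 0] = 1
    for j in range(c, n):                    # pendent edges at vertex 0
        A[0, j] = A[j, 0] = 1
    return A

print(q_radius(G_mg(6, 4)))                  # 3 + sqrt(5) ~ 5.236
print(q_radius(H_mc(11, 5)))                 # an example with m = 3c - 4
```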
The rest of the paper is organized as follows. In Section 2, we introduce some tools to study the signless Laplacian spectral radius, which will be used in subsequent sections. In Sections 3 and 4, we give the proofs of Theorem 1.1 and Theorem 1.2, respectively.
## 2 Preliminaries
The signless Laplacian matrix plays a very important role in spectral graph theory. In this section, several lemmas on signless Laplacian spectral radius will be introduced. For more results on signless Laplacian matrix, the readers can refer to three surveys due to Cvetkovic and Simic (see [7, 8, 9]).
The following lemmas concern an operation on edge switching.
**Lemma 2.1**.: _(Hong and Zhang [14]) Let \(G\) be a connected graph, \(X\) be a positive eigenvector of \(Q(G)\) with \(x_{i}\) corresponding to the vertex \(i\in V(G)\), and \(\{v_{1},\ldots,v_{s}\}\subseteq N_{G}(v)\setminus N_{G}(u)\) for some two vertices \(u,v\) of \(G\). Let \(G^{*}\) be the graph obtained from \(G\) by deleting the edges \(vv_{i}\) and adding the edges \(uv_{i}\) for \(1\leq i\leq s\). If \(x_{u}\geq x_{v}\), then \(q(G^{*})>q(G)\)._
The following two lemmas give upper bounds on signless Laplacian spectral radius.
**Lemma 2.2**.: _(Feng and Yu [12]) Let \(G\) be a connected graph. Then \(q(G)\leq\max\{d_{G}(u)+m_{G}(u):u\in V(G)\}\), with equality if and only if \(G\) is either a semiregular bipartite graph or a regular graph._
**Lemma 2.3**.: _(Zhai, Xue and Lou [24]) Let \(G\) be a graph with clique number \(\omega\) and size \(m\). Then \(q(G)\leq q(K_{\omega}^{m-s})\), with equality if and only if \(G\cong K_{\omega}^{m-s}\), where \(s=\binom{\omega}{2}\) and \(K_{\omega}^{m-s}\) is obtained from a complete graph \(K_{\omega}\) by linking \(m-s\) edges to a vertex of \(K_{\omega}\)._
Let \(k\geq 2\). A walk \(u_{1}u_{2}\ldots u_{k}\) in a graph \(G\) is called an _internal path_, if these \(k\) vertices are distinct (except possibly \(u_{1}=u_{k}\)), \(\min\{d_{G}(u_{1}),d_{G}(u_{k})\}\geq 3\) and \(d_{G}(u_{2})=\cdots=d_{G}(u_{k-1})=2\) (unless \(k=2\)). The following lemma concerns an operation on subdividing edges.
**Lemma 2.4**.: _(Feng, Li and Zhang [11]) Let \(G\) be a connected graph and \(uv\) be a cut edge on an internal path of \(G\). If we subdivide \(uv\), that is, add a new vertex \(w\) and substitute \(uv\) by a path \(uwv\), and denote the new graph by \(G_{uv}\), then \(q(G_{uv})<q(G)\)._
Let \(Y\) be a real vector. We denote \(Y>\mathbf{0}\), if each coordinate of \(Y\) is non-negative and at least one is positive.
**Lemma 2.5**.: _(Berman and Plemmons [1]) Let \(M\) be a non-negative irreducible square matrix with spectral radius \(\lambda(M)\). If there exists a positive vector \(Y\) such that \(\alpha Y<MY<\beta Y\), then \(\alpha<\lambda(M)<\beta\)._
With the help of Lemmas 2.4 and 2.5, we obtain the following result by replacing edge subdivision with edge contraction.
**Lemma 2.6**.: _Let \(G\) be a connected graph and \(uv\) be an edge on an internal path of \(G\) with \(N_{G}(u)\cap N_{G}(v)=\varnothing\). If we contract \(uv\), that is, delete \(uv\) and identify \(u,v\) as a new vertex \(u^{*}\), and denote the new graph by \(G^{uv}\), then \(q(G^{uv})>q(G)\)._
**Proof.** If \(d_{G}(u)=2\) or \(d_{G}(v)=2\), then \(G\) can be seen as a subdivision of \(G^{uv}\), and the result follows from Lemma 2.4. Next, assume that \(\min\{d_{G}(u),d_{G}(v)\}\geq 3\).
Let \(N_{G}(u)\setminus\{v\}=\{u_{1},\ldots,u_{s}\}\) and \(N_{G}(v)\setminus\{u\}=\{v_{1},\ldots,v_{t}\}\), where \(\min\{s,t\}\geq 2\). To apply Lemma 2.5, we need to find a positive vector \(Y\) such that \(Q(G)Y<q(G^{uv})Y\). Let \(X\) be the Perron vector of \(Q(G^{uv})\), and \(Y\) be a vector defined as
\[y_{w}=\left\{\begin{array}{ll}\frac{1}{p}\big{(}\sum_{i=1}^{t}x_{v_{i}}+(q-t -1)\sum_{i=1}^{s}x_{u_{i}}\big{)},&w=u,\\ \frac{1}{p}\big{(}\sum_{i=1}^{s}x_{u_{i}}+(q-s-1)\sum_{i=1}^{t}x_{v_{i}}\big{)},&w=v,\\ x_{w},&w\in V(G)\setminus\{u,v\},\end{array}\right.\]
where \(q=q(G^{uv})\) and \(p=(q-t-1)(q-s-1)-1\). Then we have
\[(Q(G)Y)_{u} =\sum_{i=1}^{s}y_{u_{i}}+y_{v}+(s+1)y_{u}\] \[=\sum_{i=1}^{s}x_{u_{i}}+\frac{1}{p}\Big{(}\sum_{i=1}^{s}x_{u_{i}}+(q-s-1)\sum_{i=1}^{t}x_{v_{i}}\Big{)}+\frac{s+1}{p}\Big{(}\sum_{i=1}^{t}x_{v_{i}}+(q-t-1)\sum_{i=1}^{s}x_{u_{i}}\Big{)}\] \[=qy_{u},\]
and we can similarly obtain that \((Q(G)Y)_{v}=qy_{v}\).
For each vertex \(w\in V(G)\setminus(N_{G}(u)\cup N_{G}(v))\), we have \(y_{w}=x_{w}\). Thus,
\[(Q(G)Y)_{w}=\sum_{z\in N_{G}(w)}y_{z}+d_{G}(w)y_{w}=\sum_{z\in N_{G^{uv}}(w)}x _{z}+d_{G^{uv}}(w)x_{w}=qx_{w}=qy_{w}.\]
Since \(X\) is an eigenvector of \(G^{uv}\) corresponding to \(q(G^{uv})\), we obtain
\[(q-s-t)x_{u^{*}}=\sum_{i=1}^{s}x_{u_{i}}+\sum_{i=1}^{t}x_{v_{i}}.\]
Note that \(G^{uv}\) contains \(K_{1,s+t}\) as a subgraph; hence \(q\geq s+t+1\), and \(Y\) is a positive vector. Moreover, recalling that \(\min\{s,t\}\geq 2\), it follows that
\[p =(q-s-1)(q-t-1)-1\] \[=(q-s-t)(q-t-1)+(t-1)(q-t-1)-1\] \[>(q-s-t)(q-t-1).\]
Then we have
\[y_{u}-x_{u^{*}} =\Big{(}\frac{q-t-1}{p}-\frac{1}{q-s-t}\Big{)}\sum_{i=1}^{s}x_{u_{i}}+\Big{(}\frac{1}{p}-\frac{1}{q-s-t}\Big{)}\sum_{i=1}^{t}x_{v_{i}}\] \[<\Big{(}\frac{1}{(q-s-t)(q-t-1)}-\frac{1}{q-s-t}\Big{)}\sum_{i=1}^{t}x_{v_{i}}\] \[<0.\]
Thus, for each \(u_{i}\) (\(i=1,\ldots,s\)), we have
\[(Q(G)Y)_{u_{i}}=d_{G}(u_{i})y_{u_{i}}+y_{u}+\sum_{w\in N_{G}(u_{i})\setminus\{u\}}y_{w}<d_{G^{uv}}(u_{i})x_{u_{i}}+x_{u^{*}}+\sum_{w\in N_{G^{uv}}(u_{i})\setminus\{u^{*}\}}x_{w}=qx_{u_{i}}=qy_{u_{i}}.\]
By symmetry, \(y_{v}<x_{u^{*}}\) and \((Q(G)Y)_{v_{i}}<qy_{v_{i}}\) for each \(v_{i}\) (\(i=1,\ldots,t\)).
Based on the above analysis, we obtain \(Q(G)Y<qY\). It follows from Lemma 2.5 that \(q(G)<q=q(G^{uv})\). \(\Box\)
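As a numerical sanity check of Lemma 2.6 (our sketch, not part of the paper), contract the cut edge joining two triangles — an edge on an internal path whose endpoints have disjoint neighborhoods — and compare spectral radii; the contracted graph is the bowtie, and its signless Laplacian spectral radius is indeed larger.

```python
import numpy as np

def q_radius(A):
    A = np.asarray(A, dtype=float)
    return np.linalg.eigvalsh(np.diag(A.sum(axis=1)) + A).max()

def adjacency(n, edges):
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return A

# G: triangles {0,1,2} and {3,4,5} joined by the edge 2-3; this edge lies on
# an internal path and N(2), N(3) are disjoint, so Lemma 2.6 applies.
G = adjacency(6, [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
# G^{uv}: contracting 2-3 yields the bowtie on 5 vertices (center 2).
H = adjacency(5, [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)])
print(q_radius(G), q_radius(H))   # the second value is strictly larger
```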
## 3 Proof of Theorem 1.1
For convenience, we use \(|G|\) and \(e(G)\) to denote the numbers of vertices and edges of a graph \(G\), respectively. Let \(G^{*}\) denote an extremal graph with maximal signless Laplacian spectral radius in \(\mathcal{G}(m,g)\) and \(X\) be the Perron vector of \(Q(G^{*})\) with coordinate \(x_{v}\) corresponding to \(v\in V(G^{*})\). Now we give the proof of Theorem 1.1.
**Proof of Theorem 1.1**
First, we consider the case \(g=3\). By Lemma 2.3, we see that \(q(G)\leq q(K_{\omega}^{m-\binom{\omega}{2}})\) for every graph \(G\) of size \(m\) with clique number \(\omega\). Moreover, if \(\omega\geq 3\), then \(q(K_{\omega}^{m-\binom{\omega}{2}})\) is strictly decreasing in \(\omega\) (see [24], Lemma 2.6). This implies that \(K_{3}^{m-3}\) attains uniquely the maximum signless Laplacian spectral radius among all graphs of fixed size \(m\) with clique number \(\omega\geq 3\). Note that \(K_{3}^{m-3}\cong G_{m,3}\) and every graph \(G\in\mathcal{G}(m,3)\) has clique number \(\omega\geq 3\). It follows that \(q(G)\leq q(K_{3}^{m-3})\) for every \(G\in\mathcal{G}(m,3)\), with equality if and only if \(G\cong G_{m,3}\).
In the following we assume that \(g\geq 4\). We shall show that \(G^{*}\cong G_{m,g}\). The proof is divided into five claims.
**Claim 3.1**.: \(G^{*}\) _is connected._
**Proof.** Recall that throughout the paper we investigate graphs without isolated vertices. Suppose that \(G^{*}\) is not connected and it consists of \(k\) components \(G_{1},G_{2},\ldots,G_{k}\). Then \(q(G^{*})=q(G_{i_{0}})\) for some \(i_{0}\in\{1,2,\ldots,k\}\). Now, select a vertex \(u_{i}\in V(G_{i})\) for each \(i\in\{1,2,\ldots,k\}\), and let \(G\) be the graph obtained from \(G^{*}\) by identifying \(u_{1},u_{2},\ldots,u_{k}\). Then \(G\in\mathcal{G}(m,g)\). Moreover, \(G_{i_{0}}\) is a proper subgraph of \(G\), and so \(q(G)>q(G_{i_{0}})=q(G^{*})\). This contradicts the choice of \(G^{*}\). Therefore, \(G^{*}\) is connected. \(\Box\)
**Claim 3.2**.: _Let \(u_{0}\in V(G^{*})\) with \(x_{u_{0}}=\max_{u\in V(G^{*})}x_{u}\). If \(G^{*}\ncong C_{g}\), then \(d_{G^{*}}(u_{0})\geq 3\)._
**Proof.** Suppose to the contrary that \(d_{G^{*}}(u_{0})\leq 2\). Then
\[q(G^{*})x_{u_{0}}=d_{G^{*}}(u_{0})x_{u_{0}}+\sum_{u\in N_{G^{*}}(u_{0})}x_{u} \leq 4x_{u_{0}},\]
which gives that \(q(G^{*})\leq 4\). However, \(C_{g}\) is a proper subgraph of \(G^{*}\), since \(G^{*}\ncong C_{g}\). Thus, \(q(G^{*})>q(C_{g})=4\), a contradiction. The claim follows. \(\Box\)
**Claim 3.3**.: _There exists a cycle \(C\) in \(G^{*}\) with \(u_{0}\in V(C)\)._
**Proof.** Let \(S\) be the set of vertices which are contained in cycles of \(G^{*}\). Suppose to the contrary that \(u_{0}\not\in S\). Then we can find a shortest path from \(u_{0}\) to \(S\), say \(P:=u_{0}u_{1}\ldots u_{k}\), where \(k\geq 1\) and \(u_{k}\in S\). Clearly, \(V(P)\cap S=\{u_{k}\}\), and hence every edge in \(E(P)\) is a cut edge of \(G^{*}\). Now define
\[G=G^{*}-\{u_{k}u:u\in N_{G^{*}}(u_{k})\setminus\{u_{0}\}\}+\{u_{0}u:u\in N_{G ^{*}}(u_{k})\setminus\{u_{0}\}\}.\]
One can observe that \(P\) is a pendent path starting from \(u_{0}\) in \(G\), and so \(G\in\mathcal{G}(m,g)\). Moreover, since \(x_{u_{0}}\geq x_{u_{k}}\), we have \(q(G)>q(G^{*})\) by Lemma 2.1, which contradicts the maximality of \(q(G^{*})\). Therefore, \(u_{0}\in S\), and the claim holds.
**Claim 3.4**.: _There exists a cycle \(C^{*}\) of length \(g\) in \(G^{*}\) with \(u_{0}\in V(C^{*})\)._
Proof.: Let \(C\) be a shortest cycle containing \(u_{0}\). We shall show \(|C|=g\). Suppose to the contrary that \(|C|\geq g+1\), and let \(C=u_{0}u_{1}\ldots u_{|C|-1}u_{0}\). Since the girth of \(G^{*}\) is \(g\geq 4\), we have \(N_{G^{*}}(u_{0})\cap N_{G^{*}}(u_{1})=\varnothing\). Now let \(G^{\prime}\) be a graph obtained from \(G^{*}\) by contracting \(u_{0}u_{1}\) into a new vertex \(u^{*}\) and adding a pendent edge to \(u^{*}\). Then \(e(G^{\prime})=e(G^{*})=m\), and \(q(G^{\prime})>q(G^{*})\) by Lemma 2.6. Furthermore, we will see that \(G^{\prime}\in\mathcal{G}(m,g)\).
On the one hand, since \(|C|\geq g+1\), the edge \(u_{0}u_{1}\) does not belong to any cycle of length \(g\) in \(G^{*}\). Hence, contracting \(u_{0}u_{1}\) does not destroy cycles of length \(g\). On the other hand, since \(C\) is a shortest cycle containing \(u_{0}\) in \(G^{*}\), \(P=u_{1}\ldots u_{|C|-1}u_{0}\) is a shortest \((u_{0},u_{1})\)-path in \(G^{*}-\{u_{0}u_{1}\}\). Note that \(P\) is of length \(|C|-1\geq g\). Thus, contracting \(u_{0}u_{1}\) does not create cycles of length less than \(g\). Now we obtain that \(q(G^{\prime})>q(G^{*})\) and \(G^{\prime}\in\mathcal{G}(m,g)\), which contradicts the choice of \(G^{*}\). Therefore, the claim holds.
To complete the proof of Theorem 1.1, it suffices to show the following claim.
**Claim 3.5**.: _Every edge not on \(C^{*}\) is incident to \(u_{0}\)._
Proof.: Let \(E_{1}\) be the set of edges in \(E(G^{*})\setminus E(C^{*})\) which are not incident to \(u_{0}\). If \(E_{1}=\varnothing\), then the claim follows. Now assume that \(E_{1}\neq\varnothing\), and define \(E_{2}=\{u_{0}w_{i}:i=1,\ldots,|E_{1}|\}\), where \(w_{1},\ldots,w_{|E_{1}|}\) are isolated vertices added in \(G^{*}\). Let \(G^{\prime\prime}=G^{*}-E_{1}+E_{2}\), and let \(X,Y\) be the Perron vectors of \(Q(G^{*})\) and \(Q(G^{\prime\prime})\), respectively. Then
\[X^{T}Y(q(G^{\prime\prime})-q(G^{*}))=\sum_{u_{0}w_{i}\in E_{2}}(x_{u_{0}}+x_{ w_{i}})(y_{u_{0}}+y_{w_{i}})-\sum_{uv\in E_{1}}(x_{u}+x_{v})(y_{u}+y_{v}). \tag{1}\]
We now estimate entries in \(X\) and \(Y\). Since \(w_{1},\ldots,w_{|E_{1}|}\) are isolated vertices in \(G^{*}\) and pendent vertices in \(G^{\prime\prime}\), we have
\[x_{u_{0}}+x_{w_{i}}=x_{u_{0}}\quad\text{and}\quad y_{u_{0}}+y_{w_{i}}>y_{u_{0}} \tag{2}\]
for each edge \(u_{0}w_{i}\in E_{2}\).
Next consider edges in \(E_{1}\). For each edge \(uv\in E_{1}\), it is obvious that
\[x_{u}+x_{v}\leq 2x_{u_{0}}. \tag{3}\]
Moreover, we will see that if \(G^{*}\not\cong K_{2,3}\), then
\[y_{u}+y_{v}\leq\frac{1}{2}y_{u_{0}}. \tag{4}\]
If \(u,v\notin V(C^{*})\cup N_{G^{*}}(u_{0})\), then \(u,v\) are two isolated vertices in \(G^{\prime\prime}\), and so \(y_{u}+y_{v}=0\leq\frac{1}{2}y_{u_{0}}\). If \(u\in V(C^{*})\cup N_{G^{*}}(u_{0})\) and \(v\notin V(C^{*})\cup N_{G^{*}}(u_{0})\), then \(d_{G^{\prime\prime}}(u)\leq 2\) and \(d_{G^{\prime\prime}}(v)=0\). Now choose \(u^{*}\in V(C^{*})\cup N_{G^{*}}(u_{0})\) such that \(y_{u^{*}}=\max_{w\in(V(C^{*})\setminus\{u_{0}\})\cup N_{G^{*}}(u_{0})}y_{w}\). Then \(q(G^{\prime\prime})y_{u^{*}}\leq 2y_{u^{*}}+y_{u_{0}}+y_{u^{*}}\) and \(y_{v}=0\), which also implies that \(y_{u}+y_{v}\leq y_{u^{*}}\leq\frac{1}{2}y_{u_{0}}\), as \(q(G^{\prime\prime})\geq\Delta(G^{\prime\prime})+1\geq 5\). It remains to consider the case \(u,v\in V(C^{*})\cup N_{G^{*}}(u_{0})\). Note that \(C^{*}\) is a shortest cycle in \(G^{*}\) and \(|C^{*}|\geq 4\). Thus we may assume that \(u\in V(C^{*})\) and \(v\in N_{G^{*}}(u_{0})\setminus V(C^{*})\). Moreover, we can see that the distance between \(u\) and \(u_{0}\) in \(C^{*}\) is exactly two. Now we have \(N_{G^{\prime\prime}}(v)=\{u_{0}\}\) and so \(y_{v}=\frac{y_{u_{0}}}{q(G^{\prime\prime})-1}\).
Let \(u_{1},u_{g-1}\in V(C^{*})\cap N_{G^{\prime\prime}}(u_{0})\). By symmetry, \(y_{u_{g-1}}=y_{u_{1}}\), and clearly, \(y_{u_{1}}>y_{w}\) for every \(w\in N_{G^{\prime\prime}}(u_{0})\setminus\{u_{1},u_{g-1}\}\). We will further see that \(y_{u_{1}}=y_{u^{*}}\). Otherwise, if \(y_{u^{*}}\neq y_{u_{1}}\), then \(u^{*}\) is not adjacent to \(u_{0}\). Thus, \(q(G^{\prime\prime})y_{u^{*}}\leq 2y_{u^{*}}+2y_{u^{*}}\), which gives that \(q(G^{\prime\prime})\leq 4\), a contradiction. Now choose \(u_{2}\in V(C^{*})\) with \(y_{u_{2}}=\max_{w\in V(C^{*})\setminus\{u_{0},u_{1},u_{g-1}\}}y_{w}\). Then \(q(G^{\prime\prime})y_{u_{1}}\leq 2y_{u_{1}}+y_{u_{0}}+y_{u_{2}}\) and \(q(G^{\prime\prime})y_{u_{2}}\leq 2y_{u_{2}}+\sum_{w\in N_{C^{*}}(u_{2})}y_{w}\). If \(g\geq 5\), then \(\sum_{w\in N_{C^{*}}(u_{2})}y_{w}\leq y_{u_{1}}+y_{u_{2}}\), and thus \(y_{u_{2}}\leq\frac{y_{u_{1}}}{q(G^{\prime\prime})-3}\leq\frac{1}{2}y_{u_{1}}\). Combining this with \(q(G^{\prime\prime})y_{u_{1}}\leq 2y_{u_{1}}+y_{u_{0}}+y_{u_{2}}\) gives \(y_{u_{1}}\leq\frac{y_{u_{0}}}{q(G^{\prime\prime})-\frac{5}{2}}\) and \(y_{u_{2}}\leq\frac{y_{u_{0}}}{2q(G^{\prime\prime})-5}\leq\frac{1}{5}y_{u_{0}}\). It follows that \(y_{u}+y_{v}\leq y_{u_{2}}+\frac{1}{4}y_{u_{0}}<\frac{1}{2}y_{u_{0}}\), as desired. If \(g=4\), then \(m\geq 7\) and \(q(G^{\prime\prime})\geq\Delta(G^{\prime\prime})+1\geq 6\) as \(G^{*}\not\cong K_{2,3}\). Now \(\sum_{w\in N_{C^{*}}(u_{2})}y_{w}=y_{u_{1}}+y_{u_{g-1}}=2y_{u_{1}}\), and hence \(y_{u_{2}}\leq\frac{2y_{u_{1}}}{q(G^{\prime\prime})-2}\leq\frac{1}{2}y_{u_{1}}\). Combining this with \(q(G^{\prime\prime})y_{u_{1}}\leq 2y_{u_{1}}+y_{u_{0}}+y_{u_{2}}\) gives \(y_{u_{1}}\leq\frac{y_{u_{0}}}{q(G^{\prime\prime})-\frac{5}{2}}\) and \(y_{u_{2}}\leq\frac{y_{u_{0}}}{2q(G^{\prime\prime})-5}\leq\frac{1}{7}y_{u_{0}}\). We also have \(y_{u}+y_{v}\leq y_{u_{2}}+\frac{1}{5}y_{u_{0}}<\frac{1}{2}y_{u_{0}}\).
Observe that \(|E_{1}|=|E_{2}|\). Combining with (1-4), we obtain that if \(G^{*}\not\cong K_{2,3}\), then
\[X^{T}Y(q(G^{\prime\prime})-q(G^{*}))>|E_{2}|x_{u_{0}}y_{u_{0}}-|E_{1}|x_{u_{0} }y_{u_{0}}=0.\]
Since \(X^{T}Y>0\), we have \(q(G^{\prime\prime})>q(G^{*})\), a contradiction. If \(G^{*}\cong K_{2,3}\), then \((m,g)=(6,4)\) and \(G^{\prime\prime}\cong G_{6,4}\) (see Fig. 1). Straightforward calculation shows that \(q(G_{6,4})=3+\sqrt{5}>5=q(K_{2,3})\). This completes the proof.
## 4 Proof of Theorem 1.2
Recall that \(m\geq 3c-4\) and \(\mathcal{H}(m,c)\) is the set of graphs of size \(m\) with circumference \(c\). Note that \(\mathcal{H}(m,3)\subseteq\mathcal{G}(m,3)\) and \(H_{m,3}\cong G_{m,3}\). By Theorem 1.1, the case \(c=3\) is solved. In the following we assume \(c\geq 4\). To prove Theorem 1.2, we consider a bigger graph family \(\mathcal{H}(m,\geq c)\), where \(\mathcal{H}(m,\geq c)\) is the set of graphs of size \(m\) with circumference at least \(c\). We similarly use \(G^{*}\) to denote an extremal graph with maximal signless Laplacian spectral radius in \(\mathcal{H}(m,\geq c)\) and \(X\) to denote the Perron vector of \(Q(G^{*})\) with \(x_{u_{0}}=\max_{u\in V(G^{*})}x_{u}\). For simplicity, the proof is divided into some claims.
**Claim 4.1**.: \(G^{*}\) _is connected._
**Proof.** The proof is similar to that of Claim 3.1. \(\Box\)
Now denote by \(\mathcal{C}_{max}\) the set of longest cycles in \(G^{*}\). Let \(C^{*}\) have maximal \(\sum_{u\in V(C^{*})}x_{u}\) among all cycles in \(\mathcal{C}_{max}\).
**Claim 4.2**.: _For each \(u\in V(C^{*})\) and \(v\in V(G^{*})\setminus V(C^{*})\), we have \(x_{u}\geq x_{v}\), and so \(u_{0}\in V(C^{*})\)._
**Proof.** Suppose to the contrary that there exist \(u\in V(C^{*})\) and \(v\in V(G^{*})\setminus V(C^{*})\) such that \(x_{v}>x_{u}\). Let \(u^{-}\) and \(u^{+}\) be the predecessor and the successor of \(u\) in \(C^{*}\), respectively. Since \(C^{*}\) has maximal sum of Perron entries over all longest cycles, we have \(v\notin N_{G^{*}}(u^{-})\cap N_{G^{*}}(u^{+})\). Now we define \(G=G^{*}-\{uu^{-},uu^{+}\}+\{vu^{-},vu^{+}\}\) if \(v\notin N_{G^{*}}(u^{-})\cup N_{G^{*}}(u^{+})\); \(G=G^{*}-\{uu^{-}\}+\{vu^{-}\}\) if \(v\in N_{G^{*}}(u^{+})\setminus N_{G^{*}}(u^{-})\); and \(G=G^{*}-\{uu^{+}\}+\{vu^{+}\}\) if \(v\in N_{G^{*}}(u^{-})\setminus N_{G^{*}}(u^{+})\). Clearly, \(G\in\mathcal{H}(m,\geq c)\), as \(G\) still contains a cycle of length \(|C^{*}|\). However, by Lemma 2.1 we have \(q(G)>q(G^{*})\), a contradiction. The claim holds.
A vertex \(u\) in a graph \(G\) is called a _dominating vertex_ if \(N_{G}[u]=V(G)\). For a vertex subset \(S\subseteq N_{G}[u]\), we say that \(u\) dominates \(S\).
**Claim 4.3**.: _If \(uv\in E(G^{*})\) with \(v\in V(G^{*})\setminus V(C^{*})\), then \(u\) dominates \(V(C^{*})\)._
Proof.: Otherwise, say \(u^{\prime}\notin N_{G^{*}}(u)\) for some \(u^{\prime}\in V(C^{*})\); then by Claim 4.2, \(x_{u^{\prime}}\geq x_{v}\). Now we define \(G=G^{*}-\{uv\}+\{uu^{\prime}\}\). Then \(G\in\mathcal{H}(m,\geq c)\), as \(C^{*}\subseteq G\). However, by Lemma 2.1 we have \(q(G)>q(G^{*})\), a contradiction.
A _vertex cover_ of a graph \(G\) is a vertex subset that covers all edges of \(G\).
**Claim 4.4**.: \(V(C^{*})\) _is a vertex cover of \(G^{*}\)._
Proof.: Suppose that \(V(C^{*})\) does not cover all edges, that is, there exists an edge \(vv^{\prime}\) with \(v,v^{\prime}\notin V(C^{*})\). Then by Claim 4.3, both \(v\) and \(v^{\prime}\) dominate \(V(C^{*})\). Consequently, we can easily find a cycle of length \(|C^{*}|+1\), which contradicts the definition of \(C^{*}\).
**Claim 4.5**.: _If \(V(G^{*})\setminus V(C^{*})\neq\varnothing\), then \(N_{G^{*}}(v)=\{u_{0}\}\) for each \(v\in V(G^{*})\setminus V(C^{*})\)._
Proof.: Let \(v\) be an arbitrary vertex in \(V(G^{*})\setminus V(C^{*})\). Then \(u_{0}\in N_{G^{*}}(v)\) (otherwise, say \(u\in N_{G^{*}}(v)\), then by Lemma 2.1\(q(G^{*}-\{uv\}+\{u_{0}v\})>q(G^{*})\), as \(x_{u_{0}}\geq x_{u}\)). It follows from Claim 4.3 that \(u_{0}\) dominates \(V(C^{*})\), and hence \(u_{0}\) dominates \(V(G^{*})\) by the arbitrariness of \(v\in V(G^{*})\setminus V(C^{*})\). Next we consider two cases.
**Case 1.**\(d_{G^{*}}(v)=2\).
Assume that \(N_{G^{*}}(v)=\{u_{0},u^{*}\}\). Then by Claim 4.4, \(u^{*}\in V(C^{*})\) and so \(d_{G^{*}}(u^{*})\geq 3\). Let \(G=G^{*}-\{u^{*}v\}+\{u_{0}w\}\), where \(w\) is an isolated vertex added in \(G^{*}\). Clearly, \(G\in\mathcal{H}(m,\geq c)\), as \(C^{*}\subseteq G\). Let \(Y\) be the Perron vector of \(Q(G)\). Then
\[q(G)y_{u_{0}}=(d_{G^{*}}(u_{0})+1)y_{u_{0}}+\sum_{u\in N_{G^{*}}(u_{0})\setminus \{u^{*}\}}y_{u}+y_{u^{*}}+y_{w}, \tag{5}\]
\[q(G)y_{u^{*}}=(d_{G^{*}}(u^{*})-1)y_{u^{*}}+\sum_{u\in N_{G^{*}}(u^{*})\setminus \{u_{0}\}}y_{u}+y_{u_{0}}-y_{v}. \tag{6}\]
Note that \(N_{G^{*}}[u^{*}]\subseteq N_{G^{*}}[u_{0}]\) and \(N_{G}(w)=N_{G}(v)=\{u_{0}\}\), then \(\sum_{u\in N_{G^{*}}(u^{*})}y_{u}\leq\sum_{u\in N_{G^{*}}(u_{0})}y_{u}\), \(d_{G^{*}}(u^{*})\leq d_{G^{*}}(u_{0})\) and \(y_{w}=y_{v}\). Combining with (5-6), we have
\[q(G)y_{u_{0}}-q(G)y_{u^{*}}\geq(d_{G^{*}}(u^{*})+1)y_{u_{0}}-(d_{G^{*}}(u^{*}) -1)y_{u^{*}}+y_{u^{*}}-y_{u_{0}}+2y_{v}.\]
It follows that \((q(G)-d_{G^{*}}(u^{*}))(y_{u_{0}}+y_{v})\geq(q(G)-d_{G^{*}}(u^{*})+2)(y_{u^{*} }+y_{v})\). Equivalently,
\[y_{u_{0}}+y_{w}\geq\frac{q(G)-d_{G^{*}}(u^{*})+2}{q(G)-d_{G^{*}}(u^{*})}(y_{u^ {*}}+y_{v}), \tag{7}\]
as \(y_{v}=y_{w}\). On the other hand, since \(w\) is an isolated vertex in \(G^{*}\), we have
\[x_{u_{0}}+x_{w}=x_{u_{0}}. \tag{8}\]
Moreover, \(N_{G^{*}}(v)=\{u_{0},u^{*}\}\) implies that \(q(G^{*})x_{v}=2x_{v}+x_{u_{0}}+x_{u^{*}}\leq 2x_{v}+2x_{u_{0}}\), and so
\[x_{u^{*}}+x_{v}\leq x_{u_{0}}+\frac{2}{q(G^{*})-2}x_{u_{0}}=\frac{q(G^{*})}{q( G^{*})-2}x_{u_{0}}. \tag{9}\]
Combining with (7-9), we obtain that
\[X^{T}Y(q(G)-q(G^{*})) =(x_{u_{0}}+x_{w})(y_{u_{0}}+y_{w})-(x_{u^{*}}+x_{v})(y_{u^{*}}+y_{v})\] \[\geq\Big{(}\frac{q(G)-d_{G^{*}}(u^{*})+2}{q(G)-d_{G^{*}}(u^{*})}- \frac{q(G^{*})}{q(G^{*})-2}\Big{)}x_{u_{0}}(y_{u^{*}}+y_{v})\] \[>0,\]
where the last inequality follows from \(q(G^{*})\geq q(G)\) and \(d_{G^{*}}(u^{*})\geq 3\). Therefore, \(q(G)>q(G^{*})\), a contradiction.
**Case 2.**\(d_{G^{*}}(v)\geq 3\).
We first partition \(V(G^{*})\setminus V(C^{*})\) into \(V_{1}\cup V_{2}\), where \(V_{1}\) is the set of pendent vertices. Clearly, \(V_{1}\subseteq N_{G^{*}}(u_{0})\); and Case 1 implies that \(d_{G^{*}}(v)\geq 3\) for each \(v\in V_{2}\). Now let \(K_{s}^{t}\) be the graph obtained from \(K_{s}\) by attaching \(t\) pendent edges at a vertex of \(K_{s}\). Then by Lemma 2.2, we have \(q(K_{s}^{t})\leq 2(s-1)+t\). Observe that \(G^{*}\subseteq K_{|C^{*}|+|V_{2}|}^{|V_{1}|}\). Thus,
\[q(G^{*})\leq q(K_{|C^{*}|+|V_{2}|}^{|V_{1}|})\leq 2(|C^{*}|+|V_{2}|-1)+|V_{1}|. \tag{10}\]
Now we partition \(E(G^{*})\setminus E(C^{*})\) into \(E_{1}\cup E_{2}\), where \(E_{2}\) is the set of chords of \(C^{*}\). Since by Claim 4.4\(V_{1}\cup V_{2}\) is an independent set, we have \(|E_{1}|=\sum_{v\in V_{1}\cup V_{2}}d_{G^{*}}(v)\geq|V_{1}|+3|V_{2}|\). Moreover, note that \(v\) has at least three neighbors. By Claim 4.4, \(N_{G^{*}}(v)\subseteq V(C^{*})\); and by Claim 4.3, each of these neighbors dominates \(V(C^{*})\). Thus, \(|E_{2}|\geq 3|C^{*}|-12\). Now let \(G\) be the graph obtained from \(C^{*}\) by attaching \(|E_{1}|+|E_{2}|\) pendent edges at \(u_{0}\). Then, \(G\in\mathcal{H}(m,\geq c)\), as \(C^{*}\subseteq G\). Furthermore, \(\Delta(G)=|E_{1}|+|E_{2}|+2\). It follows that
\[q(G)>q(K_{1,\Delta(G)})=\Delta(G)+1\geq 3|C^{*}|+|V_{1}|+3|V_{2}|-9. \tag{11}\]
Note that \(v\in V_{2}\) and it has at least three neighbors in \(V(C^{*})\). Then \(|V_{2}|\geq 1\), and neighbors of \(v\) are not consecutive in \(C^{*}\) (otherwise, we have a cycle of length greater than \(|C^{*}|\)). This implies that \(|C^{*}|\geq 6\). Comparing (10) with (11), we get that \(q(G)>q(K_{|C^{*}|+|V_{2}|}^{|V_{1}|})\geq q(G^{*})\), a contradiction. This completes the proof.
By Claim 4.5, \(E(G^{*})\setminus E(C^{*})=E_{1}\cup E_{2}\), where \(E_{1}\) consists of pendent edges incident to \(u_{0}\) and \(E_{2}\) consists of chords of \(C^{*}\).
**Claim 4.6**.: \(|C^{*}|=c\).
**Proof.** Recall that \(|C^{*}|\geq c\geq 4\). If \(|C^{*}|=c\), then we are done. Now suppose that \(|C^{*}|\geq c+1\). Let \(u_{1}\in V(C^{*})\) with \(x_{u_{1}}=\min_{u\in V(C^{*})}x_{u}\). We will see that \(u_{1}^{-}u_{1}^{+}\) is a chord of \(C^{*}\). Otherwise, define \(G:=G^{*}-\{u_{1}u_{1}^{+}\}+\{u_{1}^{-}u_{1}^{+}\}\), then \(G\) has a cycle of length \(|C^{*}|-1\geq c\), and so \(G\in\mathcal{H}(m,\geq c)\). Moreover, since \(x_{u_{1}^{-}}\geq x_{u_{1}}\), by Lemma 2.1 we have \(q(G)>q(G^{*})\), a contradiction.
Now \(G^{*}\) contains a \((|C^{*}|-1)\)-cycle \(C\) with \(V(C)=V(C^{*})\setminus\{u_{1}\}\). Subsequently, each neighbor of \(u_{1}\) dominates \(V(C^{*})\), since \(N_{G^{*}}(u_{1})\subseteq V(C^{*})\) and \(x_{u}\geq x_{u_{1}}\) for each \(u\in V(C^{*})\). Furthermore, \(u_{0}u_{1}\in E(G^{*})\) (otherwise, \(q(G^{*}-\{u_{1}^{+}u_{1}\}+\{u_{0}u_{1}\})>q(G^{*})\), as \(x_{u_{0}}\geq x_{u_{1}^{+}}\)). It follows that each of \(u_{1}^{-}\), \(u_{1}^{+}\) and \(u_{0}\) dominates \(V(C^{*})\).
Now, if \(d_{G^{*}}(u_{1})\geq 3\), then similarly as Case 2 of Claim 4.5, we can get a graph \(G\in\mathcal{H}(m,\geq c)\) with \(q(G)>q(K_{|C^{*}|}^{|E_{1}|})\geq q(G^{*})\), a contradiction. Therefore, \(d_{G^{*}}(u_{1})=2\), which
implies that \(u_{0}\in\{u_{1}^{-},u_{1}^{+}\}\). For convenience, we may assume that \(C^{*}=u_{0}u_{1}\ldots u_{|C^{*}|-1}u_{0}\), where both \(u_{0}\) and \(u_{2}\) dominate \(V(C^{*})\). Next we consider two cases.
**Case 1.** There exists a chord of \(C^{*}\) not incident to \(u_{0}\) and \(u_{2}\).
In this case, we can see that \(C^{*}\) has at least \(2|C^{*}|-6\) chords, and thus \(m=e(G^{*})\geq|E_{1}|+3|C^{*}|-6\). Let \(G\) be the graph obtained from a \((|C^{*}|-1)\)-cycle by attaching \(m-|C^{*}|+1\) pendent edges at a vertex. Then \(G\in\mathcal{H}(m,\geq c)\) as \(|C^{*}|\geq c+1\), and \(q(G)>q(K_{1,\Delta(G)})=\Delta(G)+1\geq|E_{1}|+2|C^{*}|-2\). On the other hand, since \(G^{*}\subseteq K_{|C^{*}|}^{|E_{1}|}\), we have \(q(G^{*})\leq q(K_{|C^{*}|}^{|E_{1}|})\leq|E_{1}|+2|C^{*}|-2\). It follows that \(q(G)>q(G^{*})\), a contradiction.
**Case 2.** All chords of \(C^{*}\) are incident to \(u_{0}\) or \(u_{2}\).
In this case, we can see that \(C^{*}\) has exactly \(2|C^{*}|-7\) chords, and so \(m=|E_{1}|+3|C^{*}|-7\). Let \(G\) be defined as in Case 1. Then \(G\in\mathcal{H}(m,\geq c)\) and \(q(G)>q(K_{1,\Delta(G)})=|E_{1}|+2|C^{*}|-3\). On the other hand, note that \(d_{G^{*}}(u_{0})=|E_{1}|+|C^{*}|-1\), \(d_{G^{*}}(u_{1})=2\), \(d_{G^{*}}(u_{2})=|C^{*}|-1\), \(d_{G^{*}}(u_{3})=d_{G^{*}}(u_{|C^{*}|})=3\), and \(d_{G^{*}}(u)=4\) for each of other \(|C^{*}|-5\) vertices in \(V(C^{*})\). By straightforward computation, we can check that \(d_{G^{*}}(u)+m_{G^{*}}(u)\leq|E_{1}|+2|C^{*}|-3\) for each \(u\in V(G^{*})\), and by Lemma 2.2\(q(G^{*})\leq|E_{1}|+2|C^{*}|-3\). Therefore, \(q(G)>q(G^{*})\), a contradiction. This completes the proof of the claim.
Now we have Claim 4.5 and Claim 4.6 in hand. To complete the proof of Theorem 1.2, it suffices to show the following claim.
**Claim 4.7**.: _If \(m\geq 3c-4\), then \(u_{0}\) dominates \(V(C^{*})\) and all chords of \(C^{*}\) are incident to \(u_{0}\)._
**Proof.** Note that \(|C^{*}|=c\) and \(G^{*}\) contains \(|E_{1}|\) pendent edges. If \(|E_{1}|=0\), then \(q(G^{*})\leq q(K_{c})=2c-2\). Let \(G\) be the graph obtained from a \(c\)-cycle by attaching \(m-c\) pendent edges at a vertex. Then \(q(G)>\Delta(G)+1=m-c+2\geq 2c-2\). Consequently, \(q(G)>q(G^{*})\), a contradiction. Therefore, \(|E_{1}|\geq 1\), and by Claim 4.3\(u_{0}\) dominates \(V(C^{*})\).
Recall that \(E_{2}\) is the set of chords of \(C^{*}\). Let \(E_{2}^{\prime}\) be the subset of \(E_{2}\) in which each chord is not incident to \(u_{0}\). In the following it suffices to show \(E_{2}^{\prime}=\varnothing\). Suppose to the contrary that \(|E_{2}^{\prime}|\geq 1\). Note that
\[|E_{2}^{\prime}|+|E_{1}|=m-2c+3\ \ \ \text{and}\ \ \ d_{G^{*}}(u_{0})=|E_{1}|+c-1. \tag{12}\]
Now we define \(G=G^{*}-E_{2}^{\prime}+\{u_{0}w_{i}:i=1,\ldots,|E_{2}^{\prime}|\}\), where \(w_{1},\ldots,w_{|E_{2}^{\prime}|}\) are isolated vertices added in \(G^{*}\). Then \(G\in\mathcal{H}(m,\geq c)\), and by (12) we obtain
\[q(G)>\Delta(G)+1=|E_{2}^{\prime}|+d_{G^{*}}(u_{0})+1=|E_{2}^{\prime}|+|E_{1}|+ c=m-c+3. \tag{13}\]
We first assume that \(|E_{2}^{\prime}|=1\), say \(E_{2}^{\prime}=\{u_{i}u_{j}\}\), and let \(u_{k}\in V(C^{*})\) with \(x_{u_{k}}=\max_{u\in V(G^{*})\setminus\{u_{0}\}}x_{u}\). Then
\[q(G^{*})x_{u_{k}}=d_{G^{*}}(u_{k})x_{u_{k}}+\sum_{u\in N_{G^{*}}(u_{k})}x_{u} \leq(2d_{G^{*}}(u_{k})-1)x_{u_{k}}+x_{u_{0}}.\]
It follows that \(x_{u_{k}}\leq\frac{x_{u_{0}}}{q(G^{*})-2d_{G^{*}}(u_{k})+1}\). If \(c=4\), then \(G^{*}\cong K_{c}^{|E_{1}|}\) and so \(d_{G^{*}}(u_{k})=3\). Thus \(x_{u_{i}}+x_{u_{j}}\leq 2x_{u_{k}}\leq\frac{2}{q(G^{*})-5}x_{u_{0}}\). If \(c\geq 5\), then \(d_{G^{*}}(u_{k})\leq 4\). Thus \(x_{u_{i}}+x_{u_{j}}\leq 2x_{u_{k}}\leq\frac{2}{q(G^{*})-7}x_{u_{0}}\).
Note that \(m\geq 3c-4\). By (13), we have \(q(G^{*})\geq q(G)>2c-1\). Hence, in both cases \(x_{u_{i}}+x_{u_{j}}<x_{u_{0}}.\) It follows that
\[q(G)-q(G^{*})\geq X^{T}(Q(G)-Q(G^{*}))X\geq(x_{u_{0}}+x_{w_{1}})^{2}-(x_{u_{i}}+ x_{u_{j}})^{2}>0,\]
a contradiction. Therefore, \(|E_{2}^{\prime}|\geq 2\), which also implies that \(c\geq 5\).
Now let \(u^{*}\in V(G^{*})\) such that \(d_{G^{*}}(u^{*})+m_{G^{*}}(u^{*})\) is maximal. If \(u^{*}\notin V(C^{*})\), then \(N_{G^{*}}(u^{*})=\{u_{0}\}\). By Lemma 2.2, \(q(G^{*})\leq d_{G^{*}}(u^{*})+m_{G^{*}}(u^{*})=1+d_{G^{*}}(u_{0})=|E_{1}|+c\). Combining with (13), we have \(q(G)>q(G^{*})\), a contradiction. Hence, \(u^{*}\in V(C^{*})\). In the following, it remains to consider two cases.
**Case 1.**\(u^{*}=u\) for some \(u\in V(C^{*})\setminus\{u_{0}\}\).
Observe that \(d_{G^{*}}(u)+m_{G^{*}}(u)\leq d_{G^{*}}(u_{0})+3(d_{G^{*}}(u)-1)+2|E_{2}^{ \prime}|\). Combining with (12),
\[d_{G^{*}}(u)+m_{G^{*}}(u)\leq 2m-3c+2-|E_{1}|+3d_{G^{*}}(u)\leq 2m-3c+1+3d_{G^{* }}(u),\]
as \(|E_{1}|\geq 1\). It follows that
\[q(G^{*})\leq d_{G^{*}}(u)+m_{G^{*}}(u)\leq d_{G^{*}}(u)+3+\frac{2m-3c+1}{d_{G ^{*}}(u)}.\]
Since \(u\in V(C^{*})\setminus\{u_{0}\}\), we have \(2\leq d_{G^{*}}(u)\leq c-1\), and hence
\[q(G^{*})\leq\max\left\{5+\frac{2m-3c+1}{2},c+2+\frac{2m-3c+1}{c-1}\right\} \leq m-c+3, \tag{14}\]
as \(c\geq 5\) and \(m\geq 3c-4\). Comparing (14) with (13), we have \(q(G)>q(G^{*})\), a contradiction.
**Case 2.**\(u^{*}=u_{0}\).
Since \(u_{0}\) is a dominating vertex, \(d_{G^{*}}(u_{0})+m_{G^{*}}(u_{0})=2m-d_{G^{*}}(u_{0}).\) It follows that
\[q(G^{*})\leq d_{G^{*}}(u_{0})+m_{G^{*}}(u_{0})=d_{G^{*}}(u_{0})-1+\frac{2m}{d _{G^{*}}(u_{0})}.\]
Note that \(|E_{1}|\geq 1\) and \(|E_{2}^{\prime}|\geq 2\). Then \(d_{G^{*}}(u_{0})=|E_{1}|+c-1\geq c\). Moreover, by (12) \(|E_{1}|=m-2c+3-|E_{2}^{\prime}|\leq m-2c+1\), and so \(d_{G^{*}}(u_{0})=|E_{1}|+c-1\leq m-c.\) It follows that
\[q(G^{*})\leq\max\left\{c-1+\frac{2m}{c},m-c-1+\frac{2m}{m-c}\right\}\leq m- c+3,\]
as \(c\geq 5\) and \(m\geq 3c-4\). Combining with (13), we also get a contradiction. This completes the proof.
## 5 Concluding remarks
To end this paper, we present a question for further research. For \(m\geq 3c-4\), Theorem 1.2 determines the maximum \(q(G)\) over all graphs in \(\mathcal{H}(m,c)\). A natural question is to consider the case \(c+1\leq m\leq 3c-5\).
**Question 5.1**.: _For \(c+1\leq m\leq 3c-5\), what is the maximum signless Laplacian spectral radius over all graphs in \(\mathcal{H}(m,c)\)?_
## Acknowledgements
The authors are grateful to the referees for their careful reading and many valuable suggestions.
|
2302.07041 | Doubly stochastic continuous time random walk | Since its introduction, some sixty years ago, the Montroll-Weiss continuous
time random walk has found numerous applications due its ease of use and
ability to describe both regular and anomalous diffusion. Yet, despite its
broad applicability and generality, the model cannot account for effects coming
from random diffusivity fluctuations which have been observed in the motion of
asset prices and molecules. To bridge this gap, we introduce a doubly
stochastic version of the model in which waiting times between jumps are
replaced with a fluctuating jump rate. We show that this newly added layer of
randomness gives rise to a rich phenomenology while keeping the model fully
tractable -- allowing us to explore general properties and illustrate them with
examples. In particular, we show that the model presented herein provides an
alternative pathway to Brownian yet non-Gaussian diffusion which has been
observed and explained via diffusing diffusivity approaches. | Maxence Arutkin, Shlomi Reuveni | 2023-02-14T13:44:15Z | http://arxiv.org/abs/2302.07041v3 | # Doubly stochastic continuous time random walk
###### Abstract
Since its introduction, some sixty years ago, the Montroll-Weiss continuous time random walk has found numerous applications due its ease of use and ability to describe both regular and anomalous diffusion. Yet, despite its broad applicability and generality, the model cannot account for effects coming from random diffusivity fluctuations which have been observed in the motion of asset prices and molecules. To bridge this gap, we introduce a doubly stochastic version of the model in which waiting times between jumps are replaced with a fluctuating jump rate. We show that this newly added layer of randomness gives rise to a rich phenomenology while keeping the model fully tractable -- allowing us to explore general properties and illustrate them with examples. In particular, we show that the model presented herein provides an alternative pathway to Brownian yet non-Gaussian diffusion which has been observed and explained via diffusing diffusivity approaches.
In heterogeneous environments, such as porous media and biological cells, the random motion of molecules and particles may deviate from normal diffusion in many different ways and forms [1; 2; 3; 4; 5]. In particular, a series of recent experiments and computer simulations exhibit a similar pattern of anomalous behaviour: while the mean squared displacement is linear at all timescales, the displacement distribution is Gaussian only for very long times or not at all (Fickian yet non-Gaussian) [6; 7; 8; 9; 10; 11; 12].
An explanation for this unexpected behaviour, which seemingly breaks the central limit theorem, was given through the concept of superstatistics [13; 14; 7]. This idea describes a complex medium as one which has a distribution of potential diffusion coefficients. The displacement probability distribution of particles is then built by averaging over many diffusion pathways, each of which draws its particular diffusion coefficient from a distribution prescribed by the complex medium. This approach can explain the observed experimental phenomenon of a system having a mean square displacement which is linear in time, alongside a non-Gaussian displacement probability distribution. Specifically, the latter is a weighted average (over the distribution of the diffusion coefficients) of Gaussian displacement probability distributions. Yet, the superstatistics approach cannot explain observed crossovers and transitions, e.g., from non-Gaussian distributions at short timescales to a Gaussian distribution at long timescales [15].
To address this issue, the concept of diffusing diffusivity was introduced by Chubinsky and Slater [16] and further developed and explored by many others [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. The basic idea is to describe the diffusion of particles with a diffusion equation in which the diffusion coefficient is itself diffusing. The Gaussian distribution at long timescales is then found to be universal for a diffusivity that is self-averaging in time. The displacement distribution at short timescales is, on the contrary, non-universal and depends on the equilibrium distribution of the diffusion coefficient. We note that similar phenomenology was observed and explained using similar models a few years earlier in the field of quantitative finance. The log-return of a stock price diffuses with a certain volatility which is analogous to the diffusion coefficient. As this volatility also diffuses in time [31; 32; 33], the log-return exhibits similar transitions between short and long-time behaviour.
Diffusion can be seen as a continuum limit of a large class of random walk processes that occur on the microscopic scale. Yet, a general description of random walks with "diffusing diffusivity" is currently missing from the physics literature. To bridge this gap, we introduce a new modelling framework that sets the foundations for the study of the diffusing diffusivity phenomenology from a random walks perspective. We start from the widely applied
Figure 1: The doubly stochastic CTRW has three layers of randomness. The fluctuating jump rate \(\lambda_{t}\) in panel (c) gives the probability \(\lambda_{t}dt\) for a jump to occur in the time interval \([t,t+dt]\). When a jump event occurs, bar-coded in panel (b), the random walker draws a random step and jumps to a new position as shown in panel (a). Here, we illustrate this general modeling framework using a jump rate \(\lambda_{t}\) that is redrawn, every \(\tau_{r}=50\) units of time, from an exponential distribution with unit mean. Jumps are symmetric, with equal probability to make a right/left step (\(\pm 1\)). The resulting process exhibits periods of low and high activity, which give rise to deviations from Gaussian diffusion as observed in heterogeneous environments.
continuous time random walk (CTRW) [34; 35; 36; 37], where the walker jumps instantaneously from one position to another following a waiting period. In its simplest form, which gives rise to normal diffusion, waiting times in the CTRW are taken from an exponential distribution that is characterized by a constant jump rate \(\lambda\). To introduce the equivalent of a diffusing diffusivity, we instead consider a diffusing jump rate \(\lambda_{t}\)[38; 39], such that the probability to make a jump during the time interval \([t,t+dt]\) is given by \(\lambda_{t}dt\). This results in a doubly stochastic version of the continuous time random walk. Next, we show that this newly added layer of randomness gives rise to a rich phenomenology which captures, inter alia, both regular and anomalous diffusion.
_Doubly stochastic continuous time random walk (DSCTRW)._--The DSCTRW is schematised in Fig. 1: the random walker takes steps at random times which are determined by a Poisson process whose jump rate \(\lambda_{t}\) is itself a random function of time. This doubly stochastic Poisson process is also known as a Cox process [38]. When a jump event occurs, the random walker takes a step \(X\) from a distribution whose characteristic function is \(\hat{\phi}\left(k\right)=\langle e^{ikX}\rangle\). The random walker's displacement is determined by \(P(x,t)\), the probability to find it at position \(x\) at time \(t\). To compute this probability, let us first assume a given path for the jump rate process \(\lambda_{t}\). The resulting jump process is then a time-inhomogeneous Poisson process, and therefore the number of steps made until time \(t\) will be given by the Poisson distribution with mean \(\Lambda_{t}=\int_{0}^{t}\lambda_{t^{\prime}}\mathrm{d}t^{\prime}\)[40; 41]. Letting \(\chi_{n}(t)\) denote the probability that the random walker made exactly \(n\) steps until time \(t\), we thus have \(\chi_{n}(t)=\frac{\Lambda_{t}^{n}}{n!}e^{-\Lambda_{t}}\).
The Fourier transform of the probability distribution of the displacement, conditioned on a given path of the diffusing jump rate, reads
\[\hat{P}(k,t|\lambda_{t})=\sum_{n=0}^{+\infty}\chi_{n}(t)\hat{\phi}(k)^{n}=e^{ -(1-\hat{\phi}(k))\Lambda_{t}}. \tag{1}\]
Note, that this distribution only depends on the diffusing jump rate \(\lambda_{t}\) through its time integral \(\Lambda_{t}\). Thus, the random walk dynamics does not depend on every detail of the diffusing jump rate but rather on its integrated properties. Averaging Eq. (1) with respect to the distribution of \(\Lambda_{t}\), we obtain
\[\hat{P}(k,t)=\tilde{\Lambda}_{t}(1-\hat{\phi}(k)), \tag{2}\]
where \(\tilde{\Lambda}_{t}(1-\hat{\phi}(k))\) is the Laplace transform of the integrated diffusing jump rate \(\Lambda_{t}\) evaluated at \(1-\hat{\phi}(k)\). Equation (2) is the DSCTRW analog of the Montroll-Weiss formula. We emphasize its generality, and note that it holds for both stochastic and deterministic paths of \(\lambda_{t}\).
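Equation (2) is easy to verify by simulation. As a minimal sketch (ours, not from the paper), consider the simplest rate process: a single rate drawn from a unit-mean exponential and then held fixed, so that \(\Lambda_{t}=\lambda t\) and \(\tilde{\Lambda}_{t}(s)=1/(1+ts)\); with \(\pm 1\) steps, \(\hat{\phi}(k)=\cos(k)\), and Eq. (2) predicts \(\hat{P}(k,t)=1/\big{(}1+t(1-\cos k)\big{)}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of Eq. (2): one rate lambda ~ Exp(1) per trajectory, held fixed,
# so Lambda_t = lambda * t and its Laplace transform at s is 1/(1 + t*s).
# Steps are +/-1, hence phi_hat(k) = cos(k).
t, k, n_traj = 5.0, 0.7, 200_000
lam = rng.exponential(1.0, n_traj)          # one rate per trajectory
n = rng.poisson(lam * t)                    # number of jumps made by time t
x = 2 * rng.binomial(n, 0.5) - n            # sum of n symmetric +/-1 steps

empirical = np.exp(1j * k * x).mean().real
predicted = 1.0 / (1.0 + t * (1.0 - np.cos(k)))
print(empirical, predicted)                 # the two agree to Monte Carlo error
```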
_Long-time behaviour._-- Starting from Eq. (2), we can understand both the short and long time behavior of the DSCTRW. Specifically, under mild assumptions, the long-time behavior is universally Gaussian. To show this, we need the following assumptions: (i) the mean and variance of the step distribution are finite; and (ii) the integrated jump rate has the following long-time asymptotics: \(\Lambda_{t}=\int_{0}^{t}\lambda_{t^{\prime}}\mathrm{d}t^{\prime}\simeq\overline{\lambda}t+\zeta_{t}\), where \(\overline{\lambda}\) is the long time average of the fluctuating jump rate and where \(\zeta_{t}\) is asymptotically Gaussian with \(\langle\zeta_{t}\rangle=0\) and \(\langle\zeta_{t}^{2}\rangle=O(t)\). In particular, note that this condition holds for the typical case, \(\langle\zeta_{t}^{2}\rangle\sim t\), that arises (due to the central limit theorem) for a fluctuating jump rate that has a finite correlation time and a steady-state distribution with a finite mean and variance. Under these assumptions, we find that the Fourier transform of the scaled displacement converges to [42]
\[\Big{\langle}e^{ik\frac{x_{t}-\overline{\lambda}t\langle X\rangle}{\sigma(t)}}\Big{\rangle}\to e^{-\frac{k^{2}}{2}}, \tag{3}\]
where \(\sigma(t)\) is the standard deviation of \(x_{t}\); the right-hand side is the Fourier transform of the standard normal.
The result in Eq. (3) is universal given the aforementioned assumptions. Yet, we note that these can be broken in several different ways. For example, diverging moments of the step distribution prevent moment expansion of \(\hat{\phi}(k)\) and yield, in the spirit of the generalized central limit theorem [43; 37], \(\alpha\)-stable Levy asymptotics for the displacement. Stable distributions also arise when the jump rate has a steady-state with an infinite mean or variance. Such situations give rise to stable distributions for \(\Lambda_{t}\), and the position distribution can then be computed via Eq. (2). Finally, as yet another example, one can think of cases where \(\langle\lambda_{t}\rangle\propto\lambda t^{\alpha-1}\). These give \(\langle\Lambda_{t}\rangle\propto\lambda t^{\alpha}\), which in turn yield sub-diffusion for \(\alpha<1\) and super-diffusion for \(\alpha>1\).
_Short-time behaviour._--Contrary to long times, the short time displacement distribution is not universal and its shape depends on the steady-state distribution of the diffusing jump rate. Assuming the latter exists, we consider a case where the diffusing jump rate has been evolving for a very long time prior to the start of the experiment, such that it has converged to its steady-state. In the short time limit \(t\ll\tau_{r}\), with \(\tau_{r}\) being the typical relaxation time of the diffusing jump rate process, the integrated rate behaves like \(\Lambda_{t}=\int_{0}^{t}\lambda_{t^{\prime}}\mathrm{d}t^{\prime}\simeq t \lambda_{e}\) with \(\lambda_{e}\) drawn from the steady-state distribution whose Laplace transform we denote by \(\tilde{\lambda}_{e}(s)=\langle e^{-s\lambda_{e}}\rangle\).
Under these assumptions, the Laplace transform of the integrated rate can be expressed using the Laplace transform of the jump rate, \(\tilde{\Lambda}_{t}(s)=\tilde{\lambda}_{e}(ts)\) and together with Eq. (2) we have
\[\hat{P}(k,t)=\tilde{\lambda}_{e}\left(t(1-\hat{\phi}(k))\right). \tag{4}\]
Thus, if we study the tail of this distribution in the case of a symmetric walk, we obtain \(\hat{P}(k,t)\simeq\tilde{\lambda}_{e}(tk^{2}\langle X^{2}\rangle/2)\), which is a symmetrized version of the rate probability distribution and has no reason to be universal. For example, if \(\lambda_{e}\) is exponentially distributed with mean \(\overline{\lambda}\) - which is the case for a jump rate process that is diffusing on \(\mathbb{R}_{+}\) and with a negative drift pointing towards the origin - we obtain \(\hat{P}(k,t)\simeq\frac{1}{1+\overline{\lambda}tk^{2}\langle X^{2}\rangle/2}\), which is the Fourier transform of the Laplace distribution, also known as the bi-exponential. The probability distribution is then given by
\[P(x,t)=\frac{1}{2\sigma_{t}}e^{-\frac{|x|}{\sigma_{t}}}, \tag{5}\]
with \(\sigma_{t}=\sqrt{\overline{\lambda}t\langle X^{2}\rangle/2}\). While the form in Eq. (5) is not universal, it is noteworthy since drift-diffusion often provides an
excellent approximation to more complicated processes that may govern the fluctuating jump rate. This could, perhaps, explain the prevalence of exponential tails observed experimentally at short-times. For an alternative explanation we refer to Barkai & Burov [44].
_Moments.--_Moments in the DSCTRW can be computed by taking derivatives of Eq. (2): \(\langle x_{t}^{n}\rangle=\lim_{k\to 0}(-i)^{n}\frac{\mathrm{d}^{n}\hat{P}(k,t)}{\mathrm{d}k^{n}}\). Specifically, we find that while the displacement probability distribution displays a non-Gaussian to Gaussian transition, the MSD of an unbiased walk is linear at all times, \(\langle x_{t}^{2}\rangle=\langle X^{2}\rangle\langle\Lambda_{t}\rangle=\langle X^{2}\rangle\overline{\lambda}t\)[42], provided \(\lambda_{t}\) starts equilibrated and has a finite mean. More generally, the time dependence of the MSD is completely determined by the first moment of the integrated diffusing jump rate process and the second moment of the jump. The fourth moment of a symmetric DSCTRW is given by \(\langle x_{t}^{4}\rangle=\langle X^{4}\rangle\langle\Lambda_{t}\rangle+3\langle X^{2}\rangle^{2}\langle\Lambda_{t}^{2}\rangle\)[42].
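These formulas can be checked against the same toy setup as above (our sketch): for a unit-mean exponential rate drawn once and held fixed, \(\langle\Lambda_{t}\rangle=t\) and \(\langle\Lambda_{t}^{2}\rangle=2t^{2}\), so with \(\pm 1\) steps \(\langle x_{t}^{2}\rangle=t\) while \(\langle x_{t}^{4}\rangle=t+6t^{2}\).

```python
import numpy as np

rng = np.random.default_rng(1)

# lambda ~ Exp(1) held fixed, +/-1 steps: <Lambda_t> = t, <Lambda_t^2> = 2t^2,
# and <X^2> = <X^4> = 1, so <x_t^2> = t and <x_t^4> = t + 6 t^2.
t, n_traj = 5.0, 500_000
lam = rng.exponential(1.0, n_traj)
n = rng.poisson(lam * t)
x = (2 * rng.binomial(n, 0.5) - n).astype(float)
print((x**2).mean(), t)                     # the MSD stays linear in t
print((x**4).mean(), t + 6 * t**2)          # excess comes from <Lambda_t^2>
```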
_An exactly solvable DSCTRW.--_To further illustrate the DSCTRW we now analyze a minimal model that captures its essential features. Namely, we focus on the model illustrated in Fig. 1, where the diffusing jump rate is drawn every \(\tau_{r}\) from a given distribution, which we hereby assume is exponential with unit mean. Note that the resulting jump rate is piecewise constant, and that it changes abruptly when a new time window starts. This can be seen as a simplified picture of more complicated diffusing jump rate processes whose auto-correlation time \(\tau_{r}\) is finite.
To get the propagator using Eq. (2), we compute the Laplace transform of the integrated jump rate, which in our model is a sum of independent random variables, \(\Lambda_{t}=\sum_{i=1}^{n_{\tau_{r}}}\lambda_{i}\tau_{r}+\lambda_{n_{\tau_{r}}+1}\delta_{t}\), with \(n_{\tau_{r}}=\left\lfloor\frac{t}{\tau_{r}}\right\rfloor\) and \(\delta_{t}=t-n_{\tau_{r}}\tau_{r}\). Therefore the Laplace transform in Eq. (2) is a product of Laplace transforms of exponentially distributed random variables, and we obtain
\[\hat{P}(k,t)=\tilde{\lambda}_{e}\left((1-\hat{\phi}(k))\tau_{r}\right)^{n_{ \tau_{r}}}\tilde{\lambda}_{e}\left((1-\hat{\phi}(k))\delta_{t}\right), \tag{6}\]
with \(\tilde{\lambda}_{e}(s)=1/(1+s)\). Taking the walk to be simple symmetric, we have \(\hat{\phi}(k)=\cos(k)\), and a Laplace to Gaussian transition is observed when going from the short-time limit \(t\ll\tau_{r}\) to the long-time limit \(t\gg\tau_{r}\) (Fig. 2).
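Equation (6) can also be inverted numerically to reproduce the Laplace-to-Gaussian transition of Fig. 2. A minimal sketch (ours), using \(\tilde{\lambda}_{e}(s)=1/(1+s)\) and \(\hat{\phi}(k)=\cos(k)\):

```python
import numpy as np

def P_hat(k, t, tau_r):
    """Eq. (6) with Exp(1) rates and simple symmetric (+/-1) steps."""
    n = int(t // tau_r)                     # completed rate windows
    delta = t - n * tau_r                   # age of the current window
    s = 1.0 - np.cos(k)
    return (1.0 / (1.0 + s * tau_r))**n / (1.0 + s * delta)

def P(x, t, tau_r, n_k=20001):
    """P(x,t) = (1/2pi) * integral over k in (-pi,pi] of P_hat * exp(-ikx);
    a uniform-grid average approximates the integral on the lattice."""
    k = np.linspace(-np.pi, np.pi, n_k)
    return (P_hat(k, t, tau_r) * np.cos(k * x)).mean()

tau_r = 10.0
for t in (0.1 * tau_r, 100 * tau_r):        # Laplace-like vs Gaussian regime
    print(t, [round(P(x, t, tau_r), 5) for x in (0, 5, 10)])
```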
_First passage statistics.--_The random time at which a stochastic process reaches a certain threshold, e.g. the encounter time of two molecules or the time a stock hits a certain price, can trigger a series of events. Thus, understanding the properties of first passage times is key to the explanation of many phenomena in statistical physics, chemistry, and finance [45; 46; 47]. To show how first-passage problems are solved in the context of the DSCTRW, we continue with the model illustrated in Fig. 1 and derive its first exit time from an interval. This exit time exhibits interesting phenomenology due to the competition between different timescales.
To obtain the exit time, we apply two simple steps: we first calculate the propagator conditioned on the number of steps taken, and then translate steps to time by summing over the corresponding probabilities \(\chi_{n}(t)\). The propagator of the simple symmetric random walk inside an interval \([0,L]\) with absorbing boundaries is given by [45]
\[P(x,n|x_{0})=\frac{2}{L}\sum_{p=0}^{L}\cos^{n}(\nu_{p})\sin(\nu_{p}x_{0})\sin(\nu_{p}x), \tag{7}\]
where \(x_{0}\) is the initial position and \(\nu_{p}=\frac{p\pi}{L}\). In the following we set \(u_{p}(x_{0},x)=\sin(\nu_{p}x_{0})\sin(\nu_{p}x)\). Translating steps to time, averaging over \(\Lambda_{t}\), and summing over all lattice sites inside the interval, we obtain the survival probability [42]
\[S(t|x_{0})=\frac{2}{L}\sum_{x=1}^{L-1}\sum_{p=0}^{L}u_{p}(x_{0},x)\tilde{\Lambda}_{t}\big{(}1-\cos(\nu_{p})\big{)}, \tag{8}\]
where \(\tilde{\Lambda}_{t}\), which previously appeared in Eq. (2), should now be evaluated at \(1-\cos(\nu_{p})\) instead of \(1-\hat{\phi}(k)\). Taking the negative time derivative of \(S(t|x_{0})\), we obtain the probability density of the exit time
\[f(t|x_{0})=\frac{2}{L}\sum_{x=1}^{L-1}\sum_{p=0}^{L}\frac{u_{p}(x_{0},x)\big{(}1-\cos(\nu_{p})\big{)}\tilde{\Lambda}_{t}\big{(}1-\cos(\nu_{p})\big{)}}{1+\big{(}1-\cos(\nu_{p})\big{)}\delta_{t}}. \tag{9}\]
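A short sketch (ours) evaluating Eq. (9) for the Exp(1) piecewise-constant rate model, which can be used to redraw the curves of Fig. 3:

```python
import numpy as np

def exit_density(t, x0, L, tau_r):
    """Eq. (9): density of the first exit time from [0, L] for a simple
    symmetric walk started at site x0, with Exp(1) piecewise-constant rate."""
    n = int(t // tau_r)                     # completed rate windows
    delta = t - n * tau_r                   # age of the current window
    total = 0.0
    for p in range(1, L):                   # interior eigenmodes
        nu = p * np.pi / L
        s = 1.0 - np.cos(nu)                # 1 - phi_hat evaluated at nu_p
        Lam_tilde = (1.0 / (1.0 + s * tau_r))**n / (1.0 + s * delta)
        for x in range(1, L):               # interior lattice sites
            total += np.sin(nu * x0) * np.sin(nu * x) * s * Lam_tilde / (1.0 + s * delta)
    return 2.0 * total / L

print(exit_density(5.0, 5, 10, 10.0))       # cf. Fig. 3, tau_r = 10
```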
In Fig. 3, we plot the result of Eq. (9) for three different values of the jump rate relaxation time \(\tau_{r}\). We observe unexpected "jumps" of the first exit densities localized at integer multiples of \(\tau_{r}\). This feature is a manifestation of the sharp transitions experienced by the diffusing jump rate in our model (Fig. 1c). Indeed, particles that initially drew a smaller than average jump rate have a higher probability to survive simply by virtue of moving less. As a result, the system becomes enriched with such particles, until a new jump
Figure 2: Displacement probability distributions for the DSCTRW illustrated in Fig. 1. Circles come from simulations and bold lines give analytical results obtained by inverting the Fourier transform in Eq. (6). Here, the mean jump rate was set to unity and its relaxation time to \(\tau_{r}=10\). Distributions are plotted for three different times: \(t=0.1\tau_{r}\) (orange) that exhibits a Laplace distribution, \(t=5\tau_{r}\) (cyan) for which we see the beginning of a transition towards the Gaussian distribution, and \(t=100\tau_{r}\) (gray) where the displacement distribution has converged to the Gaussian.
rate is redrawn at \(t=\tau_{r}\). The spike in the first-exit density then comes from slowly moving particles that turned fast and perished, and this process repeats itself every \(\tau_{r}\). Analyzing the density in Eq. (9) right before and just after integer multiples of \(\tau_{r}\), we analytically show that it experiences jumps whose magnitude increases with \(\tau_{r}\)[42]. This explains why, for very small \(\tau_{r}\), the jumps are almost indistinguishable, whereas for large \(\tau_{r}\) they are more pronounced.
The jump rate relaxation time has a profound impact on the mean first exit time (MFET) from the interval. The latter can be obtained by averaging over the first exit time distribution in Eq. (9). Results are given in Fig. 4, where we distinguish between two limiting behaviours. When relaxation times are fast, jump rate fluctuations play a lesser role as they are averaged over. In this limit, the jump rate can be seen as if it were fixed and equal to the mean jump rate \(\overline{\lambda}\). The MFET can then be approximated as \(\simeq(L-x_{0})x_{0}/\overline{\lambda}\), which is the MFET of a simple CTRW with a fixed jump rate \(\overline{\lambda}\). The situation is different for slow relaxation times \(\tau_{r}\gg\overline{\lambda}^{-1}(L-x_{0})x_{0}\). In this case, a typical particle is "stuck" with its initially drawn jump rate until it leaves the interval. The MFET can then be approximated by averaging \((L-x_{0})x_{0}/\lambda\) over the initial distribution of the jump rate \(\lambda\) (here assumed equivalent to the steady-state distribution). Doing so for an exponential distribution of jump rates leads to a logarithmic divergence, which we regularize by noting that the relaxation rate, \(\tau_{r}^{-1}\), serves as a lower cutoff for the integration. Indeed, very slow particles that do not leave the interval by the relaxation time will consequently draw a new jump rate. We thus find that in this limit the MFET scales as \(\sim\ln(\tau_{r})\).
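To make the phenomenology above concrete, the model can also be studied by direct simulation. The following minimal Monte Carlo sketch (our own illustration, not part of the original analysis) assumes the setup of Fig. 1: nearest-neighbour steps, exponential waiting times, and a jump rate that is redrawn from an exponential distribution with unit mean at every integer multiple of \(\tau_{r}\); the memorylessness of the exponential waiting time justifies discarding the pending wait at each rate renewal.

```python
import random

def exit_time(L=10, x0=5, tau_r=10.0, mean_rate=1.0):
    """One DSCTRW trajectory on {0,...,L}; returns the first exit time
    from the interval, with the jump rate redrawn at multiples of tau_r."""
    t, x = 0.0, x0
    lam = random.expovariate(1.0 / mean_rate)   # initial jump rate (mean = mean_rate)
    renewal = tau_r
    while 0 < x < L:
        wait = random.expovariate(lam)          # waiting time at the current rate
        if t + wait >= renewal:
            # the rate renewal fires before the next jump; by memorylessness
            # the pending wait can simply be discarded and redrawn
            t, renewal = renewal, renewal + tau_r
            lam = random.expovariate(1.0 / mean_rate)
        else:
            t += wait
            x += random.choice((-1, 1))         # unbiased nearest-neighbour step
    return t

samples = [exit_time(tau_r=10.0) for _ in range(20_000)]
print("estimated MFET:", sum(samples) / len(samples))
```

Averaging such samples for a sweep of \(\tau_{r}\) reproduces the two limits discussed above: the simple-CTRW value for fast relaxation and the \(\sim\ln(\tau_{r})\) growth for slow relaxation.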
We note that the first-passage results obtained above can be generalized to many other geometries and boundary conditions. This can be done whenever the propagator in real, or Fourier, space admits a standard eigenmode expansion of the form which appears in Eq. (7). Crucially, when terms in the series depend on the step number only through a factor of the form \(\cos^{n}(\nu_{p})\), one can easily translate steps to time by summing over the corresponding probabilities \(\chi_{n}(t)\) and taking expectations. For additional examples that can be solved in a similar fashion see [48], which gives exact results for propagators of random walks in confined geometries.
_Conclusions.--_In this letter, we introduced a doubly stochastic version of the renowned CTRW. The model allows for a general description of a random walk that is driven by a time dependent jump rate which is itself a stochastic process. Despite this added layer of complexity, the model remains fully tractable and we obtained a general formula for the displacement probability distribution. A rich phenomenology emerged from the analysis of the latter, showing that the doubly stochastic continuous time random walk can be used to describe not only super-, sub-, and normal diffusion, but also the Brownian yet non-Gaussian diffusion that has recently been observed in various systems. The tractability of the model further lends itself to the computation of first-passage times which--similar to the displacement distribution--display striking transitions as a function of the jump rate relaxation time. The random walk approach developed herein complements the diffusing diffusivity approach that was developed for Brownian motion, and further extends it by allowing for unlimited freedom in the interplay between the distribution of jumps and the properties of the fluctuating jump rate. This interplay will be further explored elsewhere.
_Acknowledgments.--_ M.A. acknowledges discussions with Samuel Gelman and Ulysse Mizrahi. S.R. acknowledges support from the Israel Science Foundation (grant No. 394/19). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant agreement No. 947731).
Figure 3: First exit time distributions from the interval \([0,10]\), starting at \(x_{0}=5\), for the DSCTRW illustrated in Fig. 1. Results are plotted for three different jump rate relaxation times: \(\tau_{r}=10^{-1}\) (cyan), \(\tau_{r}=10^{1}\) (orange), and \(\tau_{r}=10^{2}\) (gray). Bold lines are plotted using Eq. (9), and circles come from numerical simulations.
Figure 4: Mean first exit time (MFET) from the interval \([0,10]\), starting at \(x_{0}=5\), for the DSCTRW illustrated in Fig. 1. For fast jump rate relaxation (small \(\tau_{r}\)), we recover the MFET of a simple CTRW (dashed line) with a fixed jump rate corresponding to the mean jump rate, \(\overline{\lambda}=1\), of the DSCTRW. In the other extreme, i.e., for slow relaxation times (large \(\tau_{r}\)), the MFET scales as \(\sim\ln(\tau_{r})\). The bold line is plotted by averaging over Eq. (9) and circles denote MFETs that correspond to the distributions in Fig. 3.
|
2308.12371 | Open-set Face Recognition with Neural Ensemble, Maximal Entropy Loss and
Feature Augmentation | Open-set face recognition refers to a scenario in which biometric systems
have incomplete knowledge of all existing subjects. Therefore, they are
expected to prevent face samples of unregistered subjects from being identified
as previously enrolled identities. This watchlist context adds an arduous
requirement that calls for the dismissal of irrelevant faces by focusing mainly
on subjects of interest. As a response, this work introduces a novel method
that associates an ensemble of compact neural networks with a margin-based cost
function that explores additional samples. Supplementary negative samples can
be obtained from external databases or synthetically built at the
representation level in training time with a new mix-up feature augmentation
approach. Deep neural networks pre-trained on large face datasets serve as the
preliminary feature extraction module. We carry out experiments on well-known
LFW and IJB-C datasets where results show that the approach is able to boost
closed and open-set identification rates. | Rafael Henrique Vareto, Manuel Günther, William Robson Schwartz | 2023-08-23T18:22:03Z | http://arxiv.org/abs/2308.12371v1 | # Open-set Face Recognition with Neural Ensemble, Maximal Entropy Loss and Feature Augmentation
###### Abstract
Open-set face recognition is a scenario in which biometric systems have incomplete knowledge of all existing subjects. This arduous requirement calls for dismissing irrelevant faces and focusing on subjects of interest only. For this reason, this work introduces a novel method that associates an ensemble of compact neural networks with data augmentation at the feature level and an entropy-based cost function. Deep neural networks pre-trained on large face datasets serve as the preliminary feature extraction module. The neural adapter ensemble consists of binary models trained on original feature representations along with negative synthetic mix-up embeddings, which are adequately handled by the designed open-set loss since they do not belong to any known identity. We carry out experiments on the well-known lfw and ijb-c datasets, where results show that the approach is capable of boosting closed and open-set identification accuracy.
## I Introduction
Not only are face recognition systems sufficiently advanced nowadays to be used in social networks or photo library tagging, but they are also a leading mechanism to support governments [1], law enforcement agencies, and private companies [2]. Despite the significant recent progress, face recognition remains limited when facing poor image quality, a frequent condition in surveillance and cctv environments. In addition, few researchers have devoted their efforts to solving problems that either require strong generalization or bounded open-space risk, a scenario in which input samples are far from any known class and likely to represent unknown distributions.
Open-set face recognition characterizes the scenario where anonymous individuals, unseen during training and enrollment stages, only come into sight during evaluation time [3, 4]. As an illustration, one can think of immigration control at airports taking advantage of automated gates and smart face recognition. The system is expected to dismiss all law-abiding passengers and alert the security personnel whenever criminal offenders turn up. However, recent newspaper articles have shown that people being misidentified is not a hypothetical exercise but has actually occurred several times across the United States [5, 6]. To make matters worse, false alarms should be avoided by any means since a system identification error may bias the security approach and mistakenly hold up innocent people in custody [7].
Some recent face recognition systems have tackled low-quality images, and major improvements have been made with the introduction of specialized loss functions [8, 9, 10, 11]. Deep neural networks (dnns) are typically trained on large datasets of public figures before being applied to particular face populations [2]. For this reason, the identification task becomes intrinsically domain-adaptive as none of the individuals selected to train the network composes the gallery set, a collection that only contains subjects of interest, also referred to as a watchlist. Therefore, such systems often fail to distinguish whether an input face sample is enrolled in the gallery of known individuals since they cannot foresee the unknown.
Several works explore transfer learning techniques or consist of traditional machine learning algorithms fitted on deep feature representations [4, 12, 13]. Hashing functions have also been used to solve open-set face recognition tasks [14, 15, 16]. Günther _et al._[7] take an existing pretrained deep backbone and replace its output classification layer with a Neural Adapter Network (nan). Other approaches rely on clustering techniques that act as a filtering barrier to unknown samples [17, 18]. Despite all contributions, the aforementioned methods present unbounded open-space risk [3] and are not very well suited for rejecting unknown individuals as generally required in the watchlist context.
Many investigators have examined the advantages of ensembles: Ma _et al._[19] developed an adaptive-boosting classification framework but did not conduct experiments following open-set protocols. Choi _et al._[20] combined a collection of deep neural networks with Gabor representations whereas Vareto _et al._[18] employed a clustering technique to filter out dissimilar candidates before training a compact ensemble of binary models. The former contains a collection of deep neural networks and, in consequence, presents a high computational complexity in both training and evaluation stages. Similarly, the latter consists of an online training module that ends up making its use in real-time tasks unfeasible.
Dhamija _et al._[21] noticed that unknown features are generally mapped near known classes and, with that in mind, proposed a novel loss function that maximizes the entropy of non-gallery samples. Data augmentation is widely adopted to prevent overfitting and strengthen the domain generalization capacity of dnns[22]. To modify data directly in the
feature space, Verma _et al._[23] came up with an interpolating strategy to generate new feature representations, whereas Li _et al._[24] proposed a stochastic feature augmentation procedure to perturb embeddings with Gaussian noise. None of the previously mentioned works has been evaluated on face benchmarks containing numerous identities and a limited number of samples per class and, as a consequence, they are not an accurate portrayal of realistic biometric tasks.
In this work, we propose a Neural Adapter Ensemble (nae) of binary learners to handle unbalanced datasets. Ensembles are generally employed to reduce variance, minimize modeling bias, and thereby decrease overfitting [25]. During inference, nae aggregates the scores of each inner model and builds a final ranking of candidates. Moreover, we introduce a margin-based cost function called Maximal-Entropy Loss (mel) that not only produces more rigorous decision boundaries for known classes, but also increases the entropy for negative training samples. Since mel relies upon representative negative samples, we develop an Optimized Mix-Up (omu) feature augmentation method that synthesizes negative embeddings from feature representations of different subjects enrolled in the gallery set. The data augmentation contributes to the awareness of unknown identities since including artificial embeddings that are appropriately exploited by the cost function can improve the network's generalization performance [26].
We conduct experiments considering three pretrained face recognition networks: arcface[10], vggface2[27] and afffe[28]. Results are obtained on two widely-explored benchmarks, namely Labeled Faces in the Wild (lfw)[29] and the iarpa Janus Benchmark C (ijb-c)[30]. Seeing that lfw was initially designed for the face verification task, we adhere to the open-set protocol designed by Günther _et al._[4]. In contrast, ijb-c specifies an open-set identification guideline named test-4 that determines how face recognition algorithms must be evaluated. We optimize the hyperparameters on the lfw dataset using the afffe backbone as the feature representation module. Then, the same parameters are employed in a subsequent comparison on ijb-c with arcface and vggface2 as feature descriptors to check the method's robustness to different domains.
**The major contributions of our work are:**_(I)_ We present a compact neural ensemble that replaces the computationally-expensive retraining of dnns for faster ensemble learning. _(II)_ We examine how the entropy-based loss and the feature augmentation mechanism enable the ensemble to better distinguish known from unknown samples. _(III)_ We carry out a parameter selection evaluation to show that the very same setting can be employed in more difficult domains, considering different feature extractors and datasets.
## II Proposed Approach
A robust open-set face recognition system is expected to determine the identity of those subjects who have been previously enrolled in the gallery and reject the ones of no interest. However, when deploying biometric applications in the real world, experts must be aware that still and motion probe images very likely present low-quality captures along with illumination variance and occlusion, to name a few challenges [2]. These corrupted image samples obtained at the testing stage tend to misguide biometric systems and, therefore, end up compromising the identification process.
We design three mechanisms to address the aforementioned concerns: the Neural Adapter Ensemble (nae) encompasses binary neural networks and aims to establish a clear boundary between subjects of interest and unknown faces. The Optimized Mix-Up (omu) augmentation synthesizes negative samples at the feature-space level by interpolating representations of different gallery-enrolled individuals. The Maximal-Entropy Loss (mel) comprises an entropy- and margin-based cost function that exploits negative samples derived from the original gallery set or extrinsic datasets. nae is capable of boosting the predictive performance of each standalone classifier by training multiple base learners and combining their predictions [31]. omu-made instances can enhance the generalization capability of each binary model that composes the ensemble. Additionally, mel supports the network in handling unknown samples by maximizing the entropy of negative samples or penalizing the target class of known samples with a specified margin.
### _Feature Extraction_
Given an image sample \(x\), the feature extraction module can be defined as \(\mathbf{z}=F_{\Theta}(x)\), a fragment of the pre-trained dnn's forward pass \(\hat{y}=C_{\psi}\circ F_{\Theta}(x)=C_{\psi}\circ F_{\Theta^{L}}\circ\cdots \circ F_{\Theta^{1}}(x)\) with classifier \(C_{\psi}\) and \(L\) embedding layers in \(F_{\Theta}(x)\). This process propagates image \(x\) forward up to the point prior to the last fully-connected layer with softmax activation and outputs the equivalent embedding \(z\) at that location. It is important to make sure that the face image \(x\) is aligned according to the requirements of the chosen dnn. Therefore, we rely on pre-determined alignment and feature extraction pipelines [32].
Fig. 1: Neural Ensemble and its Base Learners. Each standalone learner inputs features extracted with any deep network to learn parameters that minimize an open-set loss function and maps samples from \(|G|\) individuals from the gallery set into two classes. For each learner, the ensemble keeps a record of the identities that are randomly associated with class \(\odot\) (_output_ _0_) and class \(\oplus\) (_output_ _1_) to later identify these people and reject unknowns.
### _Maximal-Entropy Loss_
The Maximal-Entropy Loss (mel) is a cost function that addresses training samples in two different manners: (i) mel boosts both intra-class compactness and inter-class separability among known subjects by penalizing the target classes, and (ii) it maximizes the entropy of negative samples by proportionately scattering their output scores among all classes. mel encloses a soft-margin module (\(\mathcal{M}\)) with a margin \(m\geq 0\) that makes the classification more rigorous. Then, given a deep feature representation \(z\) extracted from a face sample \(x\), \(s_{c}(z)\) represents the network activation (logit) for class \(c\):
\[\mathcal{M}_{c}^{m}(z)=\frac{e^{s_{c}(z)-m}}{e^{s_{c}(z)-m}+\sum\limits_{c^{ \prime}\neq c}e^{s_{c^{\prime}}(z)}} \tag{1}\]
The formulation of mel (\(\mathcal{J}^{m}\)) only adds a handicapping penalty \(m\) to known classes, as indicated in the first term. To handle negative samples, mel absorbs the Entropic Open-Set (eos) loss [21], where \(y_{i}\) stores \(z_{i}\)'s corresponding target class \(c\in C\) and \(\bar{z}\in N\) represents a negative sample:
\[\begin{split}\mathcal{J}^{m}&=-\mathbb{E}_{(z_{i},y_{i})\in G}\log\mathcal{M}_{y_{i}}^{m}(z_{i})\\ &\quad-\mathbb{E}_{\bar{z}\in N}\frac{1}{|C|}\sum\limits_{c\in C }\log\mathcal{M}_{c}^{m=0}(\bar{z})\end{split} \tag{2}\]
mel maximizes the uncertainty of negative instances by inducing output activations to lie uniformly distributed over all known classes \(c\in C\). The insight behind equalizing logit values for unknown samples is that nothing is known about their corresponding class and, therefore, they must hold a similar likelihood of being assigned to any class [21]. For this reason, \(\mathcal{J}^{m}\) is expected to propagate the entropy learned with negative data \(\bar{z}\in N\) to unknown probe samples during inference.
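A minimal PyTorch sketch of Eq. (2) is given below; the convention of flagging negative samples with the label \(-1\) is ours, and the margin default anticipates the value selected in Section III.

```python
import torch
import torch.nn.functional as F

def maximal_entropy_loss(logits, targets, margin=0.3):
    """Sketch of MEL, Eq. (2): margin-penalized cross-entropy on known
    samples plus an entropy-maximizing term on negatives (label -1)."""
    known = targets >= 0
    loss = logits.new_zeros(())
    if known.any():
        z, y = logits[known], targets[known]
        # subtracting m from the target logit realizes M_y^m of Eq. (1)
        z = z - margin * F.one_hot(y, z.size(1))
        loss = loss + F.cross_entropy(z, y)
    if (~known).any():
        # negatives: average -log softmax over all classes (m = 0),
        # which pushes their scores toward the uniform distribution
        loss = loss - F.log_softmax(logits[~known], dim=1).mean()
    return loss
```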
### _Optimized Mix-Up Feature Augmentation_
We introduce an augmentation strategy called Optimized Mix-Up (omu) to build artificial negative samples. Unlike traditional data augmentation transforms, the designed data synthesis takes place directly in the latent space \(z\) and aims to generate tightened decision boundaries around known classes. omu interpolates two latent embedding representations \(z_{i}\) and \(z_{j}\) into a new representation \(\bar{z}\), which is assigned to the negative set \(N\). Such an embedding is generated in consonance with a mingling coefficient \(\lambda\) that determines the weight of each original embedding:
\[\begin{split}\bar{z}=\lambda\cdot z_{i}\ +\ (1-\lambda)\cdot z_{j}\\ \text{s.t.}\ z_{j}=\operatorname*{arg\,max}_{(z_{i^{\prime}},g_{ i^{\prime}})\in G}\ \cos(z_{i},z_{i^{\prime}})\ \wedge\ g_{i}\neq g_{i^{\prime}}\end{split} \tag{3}\]
In summary, given feature vectors \(z_{i}\) and \(z_{j}\), respectively associated with identities \(g_{i}\neq g_{j}\), a synthetic negative feature \(\bar{z}\) is manufactured in between the closest pairs of known individuals. Unlike existing works [23, 33] where different feature embeddings are randomly selected, omu seeks the closest cosine-similar representations that, at the same time, belong to different subjects registered in gallery \(G\).
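A possible batched realization of Eq. (3) is sketched below; it assumes at least two distinct identities per batch, and the default blending coefficient follows the value reported later for the nan experiments.

```python
import torch
import torch.nn.functional as F

def omu_augment(embeddings, identities, lam=0.75):
    """Sketch of OMU, Eq. (3): mix each embedding with its most
    cosine-similar neighbour belonging to a *different* identity."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t()                                 # pairwise cosine similarities
    same = identities.unsqueeze(0) == identities.unsqueeze(1)
    sim.masked_fill_(same, float("-inf"))           # exclude same-identity pairs
    j = sim.argmax(dim=1)                           # the arg max of Eq. (3)
    return lam * embeddings + (1.0 - lam) * embeddings[j]
```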
### _Neural Ensemble Models_
The ensemble is composed of multiple binary classifiers \(E_{n}\in E\). Each classifier is trained on a different random bisecting split of gallery identities, where the task is to discern these two random groups. For training our base model \(E_{n}\), we distribute the identities registered in gallery \(G\) into two equally-sized disjoint splits. The random segregation guarantees that half of the individuals are assigned to \(P_{n}^{\odot}\) (partition zero) and the other fraction is allocated in \(P_{n}^{\oplus}\) (partition one) so that both splits altogether, defined as \(P_{n}=\{P_{n}^{\odot},P_{n}^{\oplus}\}\), encompass all the subjects of interest available in the gallery. Even though all base learners share equivalent architecture and hyperparameters, each one of them is trained with an independent and identically distributed arrangement of known identities as class zero or class one.
Associating subject \(g\in G\) with one of the two subsets consists of sampling from a Bernoulli distribution with probability \(p=0.5\). This partitioning \(P_{n}\) operates as the function \(B_{n}:G\mapsto\{\odot,\oplus\}\) that attributes subjects \(g\) with new labels (\(\odot\) or \(\oplus\)). Then, \(G\subset\mathbb{Z}_{+}\) contains the original gallery identities and \(B_{n}\) holds the respective binary co-domain for partition \(P_{n}\). The probability that any two subjects of interest share the very same sequence of binary attributions decreases as the neural ensemble size expands: since the assignments are independent across base learners, it equals \(2^{-|E|}\) for a fixed pair of subjects.
The neural ensemble \(E\) comprises the main block of the approach since it is the stage in which the feature augmentation scheme and the open-set loss act together to build a set of discriminative base models. Each base model \(E_{n}\in E\) consists of a multi-layer _perceptron_ network with fully-connected layers. In fact, \(E_{n}\) incorporates an input layer \(L^{i}\), followed by a single hidden layer \(L^{h}\) and an output layer \(L^{o}\). The input layer takes deep feature representations \(z\) extracted with the selected pretrained deep neural network and, consequently, its input size varies according to the dnn's feature layer dimension. As indicated in Figure 1, the hidden layer \(L^{h}\) employs a rectified linear unit (relu) activation. Layer \(L^{o}\) contains two neurons and outputs the corresponding activations \((a^{\odot},a^{\oplus})\) of the two classes. Each base learner is trained using (2), where we use the categorical loss for two classes.
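A base learner can thus be sketched as the following small module; the hidden width of 160 anticipates the parameter selection of Section III.

```python
import torch.nn as nn

def make_base_learner(feat_dim, hidden=160):
    """One ensemble member: deep feature -> ReLU hidden layer -> 2 logits."""
    return nn.Sequential(
        nn.Linear(feat_dim, hidden),   # input size matches the dnn feature layer
        nn.ReLU(),
        nn.Linear(hidden, 2),          # activations (a_0, a_1) for the two partitions
    )
```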
### _Inference with Rank of Candidates_
When given a test sample \(x_{p}\), we first extract the feature embedding \(z_{p}=F_{\Theta}(x_{p})\) and forward it through our ensemble of binary classifiers. By construction, the activations of the base classifier \(E_{n}\) will be close to zero when an unknown sample is presented, and large for the corresponding partition when facing a known sample [21]. Hence, for each gallery subject \(g\), we can simply add the activations of the partition the class \(g\) was initially assigned to:
\[\operatorname{sim}(z_{p},g)=\sum_{n}a_{n}^{B_{n}(g)}(z_{p}) \tag{4}\]
with \(B_{n}(g)\in\{\odot,\oplus\}\). The final similarity scores can be used to identify the probe by selecting the gallery subject with the highest score, or rejecting it as unknown when the maximal score is below a certain threshold \(\theta\).
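The sketch below illustrates Eq. (4); here `partitions[n]` is a dictionary realizing \(B_{n}\), e.g. drawn once per learner as `{g: random.randint(0, 1) for g in gallery_ids}`.

```python
import torch

def rank_candidates(ensemble, partitions, z_p, theta):
    """Sketch of Eq. (4): for every gallery subject g, sum over all base
    learners the activation of the partition g was assigned to; reject
    the probe as unknown if the best score stays below theta."""
    scores = {}
    with torch.no_grad():
        for E_n, B_n in zip(ensemble, partitions):
            a = E_n(z_p.unsqueeze(0)).squeeze(0)    # activations (a0, a1)
            for g, side in B_n.items():
                scores[g] = scores.get(g, 0.0) + a[side].item()
    best = max(scores, key=scores.get)
    return best if scores[best] >= theta else None  # None = unknown probe
```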
## III Experiments
We developed the approach using the PyTorch framework [34] along with other Python libraries such as Bob [32, 35] for feature extraction. The neural ensemble operates on representations obtained with senet50-vggface2 [27], resnet50-afffe [28] and resnet100-arcface [10] architectures. Training has been carried out on a dedicated server running Debian Linux on an amd epyc 7542 32-core 128-thread cpu, 512-gb ram, and multiple GeForce rtx 2080Ti gpus.
_Evaluation Metric._ The open-set Receiver Operating Characteristics (o-roc) is the canonical evaluation metric for open-set biometric systems [36]. The o-roc plots the True Positive Identification Rate (tpir) against the False Positive Identification Rate (fpir) by varying the threshold \(\theta\). The tpir specifies the probability that subjects from the gallery are correctly identified, whereas the fpir corresponds to the number of unknown subjects mistakenly identified as someone enrolled in the gallery. An optimal open-set face identification system has a tpir of \(1\) at an fpir not far from \(0\), while the closed-set Rank-1 recognition rate can be obtained as tpir @fpir \(=1\).
_Datasets and Evaluation Protocols._ We adopt the lfw[29] and ijb-c[30] benchmarks. We incorporate the open-set lfw partitioning [4] for parameter selection. ijb-c provides a widely adopted open-set protocol test-4 that consists of two disjoint gallery sets and a probe collection holding identities from both galleries. We use ijb-c's _gallery A_ for training so that probe samples corresponding to identities available in _gallery B_ become unknown. ijb-c contains mostly high-quality enrollment data but low-quality probe samples, such that the application of simple open-set techniques usually does not transfer from gallery to probes [37].
_Evaluated Approaches._ We conduct a series of trials to verify the enhancement provided by nae, mel, and omu. We compare the ensemble with nan[7], a compact network used for multi-class classification. We incorporate distinct cost functions in the evaluation stage: the angular-based CosFace loss (cfl)[9], the entropy-maximizing Entropic Open-set loss (eos)[21], and the categorical Cross-Entropy Loss (cel).
### _Parameter Selection_
For an unbiased assessment, we select the parameters obtained on lfw to be subsequently used on the ijb-c dataset. Such practice reveals whether the method is sufficiently robust to generalize across multiple domains and settings. Table I presents results achieved with afffe feature vectors, where we report tpir values for various fpir. The parametric assessment seeks to find optimal parameters \(|E|\), \(m\) and \(\lambda\), respectively related to the neural ensemble size, loss function penalty, and feature augmentation coefficient.
Empirically, we initially set \(|E|=0.1\cdot|G|\), \(m=0.1\) and \(\lambda=0.55\), where \(|G|\) indicates the gallery size with \(610\) subjects. When one of the aforementioned parameters is being modified, the two remaining stand fixed. After fixing \(|E|=0.5\cdot|G|\) as a good compromise between speed and accuracy, we modify \(m\) and achieve better model discriminability power for \(m=0.3\). Results show that setting the augmentation coefficient \(\lambda=0.85\) (i.e. unknown samples are relatively similar to known samples) increases the ensemble's ability to distinguish subjects of interest from unknowns. Finally, we assess the optimal number of neurons for hidden layer \(L^{h}\) and, after ranging the number of hidden nodes from \(32\) to \(256\) in steps of \(16\), \(L^{h}\) is empirically set to \(160\).
### _Neural Network Evaluation_
The Neural Adapter Network (nan)[7] consists of a multi-layer perceptron with two fully-connected hidden layers. The first hidden layer encloses 512 neurons with ReLU activation whereas the second one holds 128 neurons. nan is trained in a multi-class fashion for 200 epochs where the output layer size corresponds to the number of subjects enrolled in the gallery set. We train the adapter network with four distinct cost functions adhering to the best hyper-parameter specified for margin \(m=0.3\) as shown in Table I. The supplementary data is synthesized utilizing the designed omu feature augmentation method with blending coefficient \(\lambda=0.75\).
Figure 2(a) demonstrates that the addition of synthetic samples sharing the underlying data distribution of the gallery set can significantly improve nan's accuracy. Not only are the results enhanced in the closed-set evaluation (Rank-1), but we can also observe the superior performance being propagated to the o-roc metric as the false-positive identification rate decreases. In other words, mel attains a better identification score at fpir\(=10^{-3}\) than cel or cfl do at fpir\(=10^{-2}\). The chart also demonstrates the advantage of using mel when contrasted with eos, as the proposed loss function achieves an analogous closed-set recognition rate but outperforms eos in the open-set evaluation.
cfl is not one of the most recent margin-based cost functions; still, two recent investigations [38, 39] demonstrated that it exceeds the results obtained with more sophisticated algorithms on ijb-c, such as ArcFace [10], CurricularFace [40] and MagFace [11]. As a result, we believe that CosFace corresponds to a good representation of most angular-margin variants of the traditional Cross-Entropy Loss. cfl demands a special parameter setting as it does not operate with probability scores. During the evaluation of cfl, we had to double the number of epochs as well as the number of neurons in the second hidden layer in order to obtain satisfactory results.
_Feature Augmentation Analysis._ With omu's optimal blending parameter \(\lambda=0.75\) at hand, we conduct a small set of experiments on nan with arcface in order to check how much improvement can be obtained with the novel augmentation scheme in multi-class tasks. Table II compares the proposed augmentation strategy with the Manifold Mix-Up (mmu) [23], Stochastic Feature Augmentation (sfa)[24], and the addition of original lfw samples as the negative set. Results show that cel cannot keep up with cost functions that explore negative samples. We observe that either sfa or mmu achieves higher tpir values under higher false-positive identifications, that is, when more unknown samples are mistakenly identified as a gallery-enrolled subject. omu, however, is capable of achieving better detection and identification rates when the fpir drops.
### _Neural Ensemble Evaluation_
Figure 2 provides a comprehensive comparison between nan and nae as both approaches are trained with analogous feature representations (arcface and vggface2) and cost functions (cel, eos and mel). Results show the dominant generalization power of ensembles when compared to multi-class models. We also adopt cos, an abbreviation for the cosine-similarity computation between probe samples and the gallery of templates, as our second baseline. cos is a common similarity metric for watchlist tasks and is widely employed in the most modern face recognition applications.
As shown in Figure 1, the ensemble consists of multiple binary models containing a single hidden layer with 160 neurons and ReLU activation. nae exploits synthetic negative samples derived from the gallery set when combined with either eos or mel. This strategy guarantees a closer data distribution between original and artificially-made training data since omu performs a combination of the two closest gallery samples carrying different target classes. Figure 2(b) shows the experimental evaluation of cos and nae on ijb-c. The top four curves comprise experiments with arcface and the bottom four refer to vggface2 feature representations.
The association of nae and mel achieves superior open-set performance when feature representations are extracted with arcface and false-positive identifications range between \(10^{-1}\) and \(10^{-4}\). Moreover, mel outperforms eos across many fpir ranges, which shows the importance of learning a more compact feature space through margin \(m\). Experiments with vggface2 show that the eos and mel cost functions attain competitive results as they outperform methods without negative samples. Apparently, mel cannot improve the performance over eos, as we presume that the original vggface2 representations do not include sufficient resources to handle ijb-c's low-quality probe samples and assist mel in the training stage.
Results demonstrate that the proposed approach presents an outstanding performance regardless of the adopted network backbone. It reveals that supplementary negative data derived from the gallery set itself equips the ensemble with relevant information and boosts the algorithm's overall performance. In addition, the proposed Maximal-Entropy Loss seems capable of driving each standalone base model toward greater discriminability among known identities as well as escalating the entropy for unknown samples. The ensemble acts as an alternative mechanism that avoids the recurrent retraining of very deep neural networks every time new individuals are enrolled in the gallery set. As a consequence, it can be attached to the penultimate layer of any pretrained dnn, which eases the maintenance of real-world biometric applications.
Fig. 2: IJB-C Evaluation. We compare our proposed approach with several state-of-the-art approaches for open-set face recognition, using four different loss functions (cel, cfl, eos, mel) and two deep feature representations arcface (arc) and vggface2 (vgg). (a) indicates the improvement obtained with the Neural Adapter Network (nan) [7] trained with mel over long-established loss functions, whereas (b) shows the advance brought by Neural Ensemble (nae) over multi-class nan and the well-known cosine-similarity metric (cos) in the open-set identification task.
## IV Conclusion
We introduced three different approaches: a neural ensemble (nae), a cost function (mel), and a feature augmentation algorithm (omu). Results show that the three methods combined provide better open-set accuracy under the presence of extensive false-positive identifications of unknown samples. In opposition to most works in the literature, nae, mel and omu did not have their parameters optimized in such a way they would return favorable results on ijb-c. Instead, the hyper-parameters were selected during the evaluation of a surrogate dataset: lfw. We believe that the proposed method is likely to achieve higher performance on ijb-c dataset if we take its test set into consideration when tuning the hyper-parameters.
This work also provided an interesting insight: "transforming a gallery/training set with linear-like perturbations may provide better generalization capability than external data". In fact, one of the experiments showed that synthesizing new samples derived from the gallery set preserves the underlying statistics of the training set and, therefore, ends up contributing more to a model's generalization power than extrinsic datasets. Consequently, gallery sets with numerous identities but few available samples may no longer be a severe limitation when representative synthetic data can be created to assist loss functions in learning better weights.
|
2310.08163 | Combining Decentralized IDentifiers with Proof of Membership to Enable
Trust in IoT Networks | The Self-Sovereign Identity (SSI) is a decentralized paradigm enabling full
control over the data used to build and prove the identity. In Internet of
Things networks with security requirements, the Self-Sovereign Identity can
play a key role and bring benefits with respect to centralized identity
solutions. The challenge is to make the SSI compatible with resource-constraint
IoT networks. In line with this objective, the paper proposes and discusses an
alternative (mutual) authentication process for IoT nodes under the same
administration domain. The main idea is to combine the Decentralized IDentifier
(DID)-based verification of private key ownership with the verification of a
proof that the DID belongs to an evolving trusted set. The solution is built
around the proof of membership notion. The paper analyzes two membership
solutions, a novel solution designed by the Authors based on Merkle trees and a
second one based on the adaptation of Boneh, Boyen and Shacham (BBS) group
signature scheme. The paper concludes with a performance estimation and a
comparative analysis. | Alessandro Pino, Davide Margaria, Andrea Vesco | 2023-10-12T09:33:50Z | http://arxiv.org/abs/2310.08163v3 | # Combining Decentralized Identifiers with Proof of Membership to Enable Trust in IoT Networks
###### Abstract
The Self-Sovereign Identity (SSI) is a decentralized paradigm enabling full control over the data used to build and prove the identity. In Internet of Things networks with security requirements, the Self-Sovereign Identity can play a key role and bring benefits with respect to centralized identity solutions. The challenge is to make the SSI compatible with resource-constrained IoT networks. In line with this objective, the paper proposes and discusses an alternative (mutual) authentication process for IoT nodes under the same administration domain. The main idea is to combine the Decentralized IDentifier (DID)-based verification of private key ownership with the verification of a proof that the DID belongs to an evolving trusted set. The solution is built around the proof of membership notion. The paper analyzes two membership solutions, a novel solution designed by the Authors based on Merkle trees and a second one based on the adaptation of the Boneh, Boyen and Shacham (BBS) group signature scheme. The paper concludes with a performance estimation and a comparative analysis.
Self-Sovereign Identity, Decentralized IDentifiers, Proof of Membership, Group Signatures, Merkle Trees, Trust, Internet of Things.
## I Introduction
The Self-Sovereign Identity (SSI) [1] is a decentralized digital identity paradigm that gives a peer full control over the data it uses to build and to prove its identity. The overall SSI stack, depicted in Fig. 1, enables a new model for trusted digital interactions.
The Layer 1 is implemented by means of any Distributed Ledger Technology (DLT) acting as the Root-of-Trust (RoT) for identity data. In fact, DLTs are distributed and immutable means of storage by design [2]. A Decentralized IDentifier (DID) [3] is the new type of globally unique identifier designed to verify a peer. The DID is a Uniform Resource Identifier (URI) of the following form:
_did:method-name:method-specific-id_
where _method-name_ is the name of the DID Method used to interact with the DLT and _method-specific-id_ is the pointer to the DID Document stored on the DLT, denoted as _index_ in this paper for simplicity.
Thus, DIDs associate a peer with a DID Document [3] to enable trustable interactions with it. The DID Method [3, 4] is the software implementation used by a peer to interact with the DLT. In accordance with W3C recommendation [3], a DID Method must provide the primitives to:
* _create_ a DID, that is, generate an identity key pair (\(sk_{id},pk_{id}\)) for authentication purposes and the corresponding DID Document containing the public key \(pk_{id}\) of the pair, and store the DID Document on the distributed ledger at the _index_ pointed to by the DID,
* _resolve_ a DID, that is, retrieve the DID Document from the _index_ on the ledger pointed to by the DID,
* _update_ a DID, that is, generate a new key pair (\(sk^{\prime}_{id},pk^{\prime}_{id}\)) and store a new DID Document at the same _index_ or at a new _index_ if the subject requires changing the DID, and
* _revoke_ a DID, that is, provide an immutable evidence on the ledger that a DID has been revoked by the owner.
The DID Method implementation is ledger-specific and it makes the upper layers independent of the DLT of choice.
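In code, the four primitives above amount to a small ledger-agnostic interface; the following Python sketch is an illustration of the W3C contract, not a normative API.

```python
from abc import ABC, abstractmethod

class DIDMethod(ABC):
    """Ledger-specific driver exposing the four W3C DID primitives."""

    @abstractmethod
    def create(self):
        """Generate (sk_id, pk_id), build the DID Document around pk_id,
        store it on the ledger, and return the DID together with sk_id."""

    @abstractmethod
    def resolve(self, did):
        """Fetch the DID Document from the index pointed to by `did`."""

    @abstractmethod
    def update(self, did, sk_id):
        """Store a new DID Document (new key pair) at the same or a new index."""

    @abstractmethod
    def revoke(self, did, sk_id):
        """Leave immutable on-ledger evidence that `did` has been revoked."""
```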
The Layer 2 makes use of DIDs and DID Documents to establish a secure channel between two peers. In principle, both peers prove the ownership of their private key \(sk_{id}\) bound to the public key \(pk_{id}\) in their DID Document that is stored on the distributed ledger. While the Layer 2 leverages DID technology (i.e. the security foundation of the SSI stack) to begin the authentication procedure, the Layer 3 finalizes it and deals with authorization to services and resources with Verifiable Credentials (VCs) [5].

Fig. 1: The Self-Sovereign Identity stack.
A VC is an unforgeable, secure, and machine-verifiable digital credential that contains further characteristics of the digital identity of a peer beyond its key pair (\(sk_{id},pk_{id}\)), the DID and the related DID Document.
_The combination of the key pair (\(sk_{id},pk_{id}\)), the DID, the corresponding DID Document and at least one VC forms the digital identity in the SSI framework._ This composition of the digital identity reflects the decentralized nature of SSI. There is no authority that provides all the components of the identity to a peer, and no authority is able to revoke completely the identity of a peer. Moreover, a peer can enrich its identity with multiple VCs issued by different Issuers.
The Layer 3 works in accordance with the Triangle-of-Trust depicted in Fig. 1. Three different roles coexist:
* **Holder** is the peer that possesses one or more VCs and that generates a Verifiable Presentation (VP) to request a service or a resource from a Verifier;
* **Issuer** is the peer that asserts claims about a subject, creates a VC from these claims, and issues the VC to the Holders.
* **Verifier** is the peer that receives a VP from the Holder and verifies the two signatures made by the Issuer on the VC and by the Holder on the VP before granting the access to a service or a resource based on the claims.
The VC contains the metadata to describe properties of the credential (e.g. context, ID, type, Issuer of the VC, issuance and expiration dates) and most importantly, the DID and the claims about the identity of the peer in the credentialSubject field.
The Issuer signs the VC to make it an unforgeable and verifiable digital document. The Holder requests access to services and resources from the Verifier by presenting a VP. A VP is built as an envelope of the VC: the VC is issued by an Issuer, and the signature on the VP is made by the Holder with its \(sk_{id}\). Issuers are also responsible for VC revocation, both for cryptographic integrity and for status change purposes [5].
On top of these three layers, it is possible to build any ecosystem of trustable interactions among peers. The authentication process at the core of Trust between two SSI-aware peers is depicted in Fig. 2.
In principle, the peers use an (ephemeral) Diffie-Hellman key exchange to build up a confidential channel. Then, the peers exchange their respective DIDs and prove the possession of the \(sk_{id}\) associated with the \(pk_{id}\) stored in their corresponding DID Documents. This verification, in case of success, establishes cryptographic trust between the two peers while preventing passive Man-in-the-Middle (MitM) attacks. However, in a permissionless distributed ledger, anyone is entitled to create their own DID; therefore the procedure is still vulnerable to active MitM attacks. In fact, the (mutual) authentication takes place only after the peers exchange and verify the respective VCs. At that point, the peers establish a secure communication channel.
There are Internet of Things (IoT) use cases in which networks of nodes support or make themselves digital infrastructures with security requirements, such as (mutual) authentication, confidentiality, and integrity. The SSI framework can play a key role and bring benefits with respect to centralized identity solutions [6]. The challenge is to make these solutions compatible with resource constraints [7].
With the aim of pursuing this objective, the paper proposes and discusses an alternative (mutual) authentication process for IoT nodes under the same administration domain. Finalizing the authentication by means of VC verification at Layer 3 is the most demanding operation of the authentication procedure depicted in Fig. 2 due to the complex data model of VCs [5]. The proposed alternative is to complement the DID-based verification of the \(sk_{id}\) ownership with the verification of a proof that a DID belongs to an evolving trusted set of DIDs (i.e. the DID has not been created by an adversarial node). In other words, the idea is to complement the DID-based verification of the \(sk_{id}\) ownership with the verification of a _proof of membership_ and avoid the use of VCs. For the sake of clarity, the membership concept refers to DIDs and not to nodes.
From an implementation point of view, a node combines the DID with the proof of membership and forwards them to the counterpart node that proceeds with the verification of the proof of membership and of the DID ownership (i.e. \(sk_{id}\) ownership) to complete the authentication procedure.
This paper proposes and analyzes two membership solutions for the purpose of implementing the new authentication procedure: a novel solution based on Merkle trees [8] designed by the Authors of this paper and a second solution built as an adaptation of a well-known and largely used group signature scheme proposed by Boneh, Boyen and Shacham and conventionally referred to as BBS [9].
The paper presents and critically reviews the two proposed solutions in four typical operational phases of an IoT network, namely:
1. **Provisioning**: corresponding to the initial setup of the IoT nodes in the network;
2. **Operation**: corresponding to the operation of the IoT nodes when deployed on the field;
3. **Secret rotation**: corresponding to the update of node identity keys (\(sk_{id},pk_{id}\)) and other relevant secret keys;
4. **Network update**: corresponding to the action of either adding or removing an IoT node to/from the network.

Fig. 2: Mutual authentication between two peers in the SSI framework.
## II Membership through Merkle trees
A Merkle tree, also known as a Hash tree, is a data structure that is used to efficiently verify the integrity of large amounts of data. It is named after its inventor, Ralph C. Merkle, who first introduced the concept in a patent filed in 1979 [8].
The Merkle tree is here used to solve the membership problem presented in Section I. In principle, the DIDs selected by a node must be part of a Merkle tree whose root is considered trusted.
### _Provisioning_
The provisioning phase consists of the typical configuration procedure, in a secure environment, of each node before deployment on the field.
Upon configuration, each node selects a first set of DIDs that it will use during the operation phase (i.e. defines the indexes \(idx_{i}\) of these DIDs). Then, the node autonomously generates its own Merkle tree, as depicted in Fig. 3.
Basically, the node uses the selected indexes \(idx_{i}\) as inputs for a Hash function \(H(\cdot)\): the outputs are the leaves of the Merkle tree (e.g. \(leaf_{1}=H(idx_{1})\)). Recalling that every element that is not a leaf is the digest of its child elements (e.g. \(parent\) = \(H(child_{1}|child_{2})\)), the construction consists in repeatedly hashing pairs of values until a single value remains; this value is called the \(ROOT\).
In a possible design, all the leaves of the Merkle tree can be calculated starting from a single master secret \(S\). An HMAC-based Extract-and-Expand Key Derivation Function [10] (HKDF) can generate a number of seeds \(s_{i}\), to be used as inputs for deriving the indexes of the DIDs. That way, the node is required to securely store only \(S\) and can regenerate the DIDs on the fly when needed during the operation phase, thus avoiding the secure storage of the entire set of DIDs.
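A stdlib-only sketch of this seed derivation is shown below; the HKDF follows RFC 5869, while the `info` labels and the hash-based index derivation are our own illustrative assumptions, since the paper does not fix an encoding.

```python
import hashlib, hmac

def hkdf_sha256(master_secret, info, length=32, salt=b""):
    """Minimal RFC 5869 HKDF-SHA256 (extract-and-expand)."""
    prk = hmac.new(salt or b"\x00" * 32, master_secret, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

S = b"node master secret (placeholder)"
seeds = [hkdf_sha256(S, info=b"did-seed-%d" % i) for i in range(4)]   # s_1..s_4
indexes = [hashlib.sha256(s).hexdigest() for s in seeds]              # idx_1..idx_4
```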
After the construction of the Merkle tree, each node interacts with a Trusted Party (TP). The identity key pair of the TP is \((sk_{TP},pk_{TP})\). The TP provides the public key \(pk_{TP}\) to the node and the node shares the \(ROOT\) of its Merkle tree with the TP.
Once the TP has collected all the \(ROOT\) values from the \(N\) nodes, it builds and publishes on the distributed ledger, at a given well-known predefined index \(idx_{list}\), the list of trusted roots in the form:
\[\{ROOT_{1},\ldots,ROOT_{N},ROOT_{TP};TS;Signature_{sk_{TP}}\}\]
where \(TS\) is a timestamp that provides the date and time of list generation, and \(Signature_{sk_{TP}}\) is the signature of the TP made with its private key \(sk_{TP}\).
All nodes have simple access to this list of trusted roots by querying the DLT. The nodes verify \(Signature_{sk_{TP}}\) with the \(pk_{TP}\) before consuming the list during the operation phase.
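As an illustration of how the TP could produce and the nodes could check this signed list, the sketch below assumes an Ed25519 signature (via the `cryptography` library) and a JSON encoding, neither of which is mandated by the scheme.

```python
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sk_tp = Ed25519PrivateKey.generate()                 # (sk_TP, pk_TP)
pk_tp = sk_tp.public_key()

roots = [hashlib.sha256(b"node-%d" % i).digest() for i in range(3)]  # placeholder ROOTs
payload = json.dumps({"roots": [r.hex() for r in roots],
                      "ts": int(time.time())}).encode()
entry = payload + sk_tp.sign(payload)                # {ROOT_1..ROOT_N; TS; Signature}

# every node verifies the TP signature before consuming the list
pk_tp.verify(entry[-64:], entry[:-64])               # raises InvalidSignature if tampered
```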
### _Operation_
The nodes enter into digital interaction with the other nodes after (mutual) authentication. Upon the selection of a DID from the tree, a node generates the proof of membership that it will use during the authentication procedure with another node, as discussed in Section I. The proof of membership coincides with the values of the siblings along the path from the corresponding leaf to the root. For example, if the node \(n_{1}\) selects \(DID_{1}\), the proof is:
\[(Sib_{1},Sib_{2})\doteq(H(idx_{2}),H(H(idx_{3})|H(idx_{4}))).\]
When an interaction between node \(n_{1}\) and a node \(n_{2}\) takes place, \(n_{2}\) first sends a nonce to \(n_{1}\), meant to avoid replay attacks. Then \(n_{1}\) sends \(DID_{1}\), the proof of membership and a signature made with its \(sk_{id}\) on the message \(H((Sib_{1},Sib_{2})|nonce)\) (i.e. \(Sig_{sk_{id}}(H((Sib_{1},Sib_{2})|nonce))\)).
At that point, \(n_{2}\) verifies the signature with the \(pk_{id}\) retrieved from the DID Document pointed by \(DID_{1}\), then it recalculates the root of the Merkle tree of node \(n_{1}\) as:
\[ROOT_{1}=H(H(H(idx_{1})|Sib_{1})|Sib_{2}).\]
The authentication succeeds if \(ROOT_{1}\) is in the list of trusted roots, otherwise \(n_{2}\) closes the communication. In case of mutual authentication, the same procedure takes place in both directions.
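The complete tree construction, proof generation and verification fit in a few lines; the sketch below mirrors the four-leaf example of Fig. 3 (SHA-256 is our assumption, as the paper does not fix the hash function).

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def build_tree(indexes):
    """Merkle tree over the DID indexes; returns all levels, root last."""
    level = [H(idx.encode()) for idx in indexes]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def membership_proof(levels, pos):
    """Sibling path of the leaf at `pos` (the proof of membership)."""
    proof = []
    for level in levels[:-1]:
        sib = pos ^ 1
        proof.append((level[sib], sib < pos))   # (sibling hash, sibling-is-left?)
        pos //= 2
    return proof

def verify(idx, proof, trusted_roots):
    node = H(idx.encode())
    for sib, sib_is_left in proof:
        node = H(sib + node) if sib_is_left else H(node + sib)
    return node in trusted_roots

levels = build_tree(["idx1", "idx2", "idx3", "idx4"])
proof = membership_proof(levels, 0)             # (Sib_1, Sib_2) for DID_1
assert verify("idx1", proof, {levels[-1][0]})   # root must be in the trusted list
```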
### _Secret rotation_
Secret rotation is the process of replacing cryptographic secrets with new ones periodically or in response to a critical event. Key rotation is an example of this practice.
In case the TP needs to update its keys \((sk_{TP},pk_{TP})\), the TP is required to provide to all nodes in the network its new public key \(pk^{\prime}_{TP}\) and, then, to sign and publish again the list of trusted roots on the distributed ledger at \(idx_{list}\).
Differently, when a node needs to update its identity keys (\(sk_{id},pk_{id}\)), it selects a new DID in its Merkle tree, revokes the previous one, and generates and stores the new DID Document on the distributed ledger at the index pointed to by the new DID. The proof of membership it will use during the authentication with another node changes accordingly, but since the \(ROOT\) value does not change, the node does not need to interact with the TP. In other words, it updates the DID in full compliance with the SSI paradigm. Note that identity key rotation does not necessarily imply the selection of a new DID, because the W3C DID recommendation [3] allows a node to update its DID Document without changing the DID.

Fig. 3: Example of Merkle tree with four Decentralized IDentifiers.
A special case occurs when a node needs to update its identity keys (\(sk_{id},pk_{id}\)) and it is using the last DID in the Merkle tree (e.g. \(idx_{4}\) in Fig. 3). The node autonomously generates a new Merkle tree and then shares the new \(ROOT^{\prime}\) value with the TP upon (mutual) authentication. At that point, the node updates the DID, selecting the DID from the new tree, and the TP updates and publishes the new list of trusted roots. This key rotation process can continue indefinitely during the normal operation phase in full compliance with the SSI paradigm.
### _Network update_
A new node can be deployed on the network without disrupting the operation of the other nodes. When the provisioning of the new node is concluded, the TP updates the list of trusted roots and publishes it on the distributed ledger to inform the other nodes. The same happens when a node is removed from the network for any reason; the TP updates the list of trusted roots (i.e. removes the corresponding \(ROOT\) value) and publishes it on the distributed ledger.
### _Critical analysis_
The following aspects are worth to be remarked:
* the solution is compliant with the SSI principles, since it does not affect the autonomy of a node to control its identity data (\(sk_{id}\), DID, DID document);
* the decentralized nature is respected; after the provisioning phase, a node follows the SSI paradigm without requiring major interactions with the TP;
* however, the role of the TP makes it a point of attack; the TP private key \(sk_{TP}\) must be properly protected, because an adversary capable of gaining access to \(sk_{TP}\) can manipulate the list of trusted roots;
* the solution scales with the number of nodes and it seems to be appropriate also for networks with high update frequency. Adding or removing a node only affects the size of the list of trusted roots (i.e. proportional to the number of active nodes) but it does not imply interactions with the other nodes in the network;
* a trade-off exists between the size of the Merkle tree, the size of the proof of membership (i.e. the number of Siblings) and the number of interactions with the TP. The larger the tree size, the larger the proof, but the lower the number of interactions with the TP to send the root of a new Merkle tree;
* the security of the membership solution relies on the preimage resistance property of the hash function (i.e. it is hard to invert) used to build the tree and on the HKDF used as a source of randomness to build the seeds; in this sense, it is reasonable to consider it quantum safe [11];
* both the Merkle tree and HKDF are mature technologies. However, their combined use in the proposed solution may need further security validation.
## III Membership through BBS Group Signature
The BBS group signature scheme [9] has been developed to allow a member of a group to anonymously sign a message on the group's behalf with its group private key \(gsk_{i}\); the signature of any member can be verified with the unique group public key \(gpk\). Moreover, the scheme allows a TP to revoke the private key of any member, triggering the update of the private keys of all the other members and of the group public key.
The BBS scheme is here adapted to solve the membership problem presented in Section I. In principle, a node proves that its DID belongs to a trusted set by means of a BBS signature. The paper here adopts the notation from [9] and, when appropriate, directly refers to specific BBS algorithms, namely \(KeyGen\), \(Sign\), \(Verify\), \(Update\), \(Open\), \(Join\), and \(Revoke\).
### _Provisioning_
The provisioning phase consists of the common configuration procedure in a secure environment of each node before deployment on the field.
A TP supervises this phase and begins the provisioning by performing the \(KeyGen\) algorithm. This step consists in the generation of the TP key pair (\(sk_{TP},pk_{TP}\)), the group public key \(gpk\), the TP group private key \(tpsk\), and the private keys \(gsk_{i}\) for all nodes in the network.
The TP provides each node with its group private key \(gsk_{i}\), the group public key \(gpk\), the \(index_{RL}\) on the distributed ledger from which to retrieve a Revocation List, and the public key \(pk_{TP}\) to verify the TP signature on such a list, as will be explained in detail in Section III-B.
According to the original \(Join\) algorithm in [9], the TP generates and gives a group private key \(gsk_{i}\) to each node; this protocol implies that the TP knows all the group private keys, making it a single point of attack. An alternative \(Join\) algorithm, proposed in [12], introduces the property of _Strong Exculpability_ (SE) and, for clarity, will be denoted as \(Join_{SE}\) in the following discussion. The SE concept is an evolution of the exculpability concept that was first introduced by [13]. In accordance with its definition in [12, 14], SE ensures that no member of the group and not even the entity that issues the private keys can forge a signature on behalf of another group member.
The authors of BBS in [9] suggested acquiring the SE property by generating each \(gsk_{i}\) via a procedure in which the TP only learns a share of \(gsk_{i}\). However, to the best of our knowledge, beside this suggestion, no practical implementation of the \(Join_{SE}\) algorithm for BBS has been published. Therefore, we here propose for the first time an implementation tailored to the BBS group signature.
Firstly, the \(KeyGen\) algorithm must be modified to add one more base to the original \(gpk=(g_{1},g_{2},h,u,v,w)\) where \(g_{1},\,h,\,u,\,v\in\mathbb{G}_{1}\) and \(g_{2},\,w\in\mathbb{G}_{2}\) with \(\mathbb{G}_{1},\,\mathbb{G}_{2}\) multiplicative cyclic groups of prime order. Accordingly, the TP selects at random \(h_{1}\xleftarrow{R}\mathbb{G}_{1}\) and adds it to the new \(gpk=(g_{1},g_{2},h,h_{1},u,v,w)\).
Then, the \(Join_{SE}\) algorithm can be constructed as follows:
1. the node \(n_{i}\) selects at random \(y_{i}\xleftarrow{R}\mathbb{Z}_{p}^{*}\) and sends \(Y=h_{1}^{-y_{i}}\) to the TP;
2. given the TP's random secret value \(\gamma\in\mathbb{Z}_{p}^{*}\), the TP selects \(x_{i}\xleftarrow{R}\mathbb{Z}_{p}^{*}\), computes \(A_{i}=(g_{1}Y)^{\frac{1}{\gamma+x_{i}}}\) and \(H_{i}=h_{1}^{\frac{1}{\gamma+x_{i}}}\), and sends \(H_{i}\) to \(n_{i}\);
3. \(n_{i}\) sends \(B_{i}=H_{i}^{-y_{i}}\) back to the TP;
4. the TP computes \(A_{i}^{\prime}=B_{i}(g_{1}^{\frac{1}{\gamma+x_{i}}})\) and checks whether \(A_{i}^{\prime}=A_{i}\) to convince itself that \(n_{i}\) knows \(y_{i}\);
5. if and only if previous step 4) succeeds, the TP sends \((A_{i},x_{i})\) to \(n_{i}\);
6. finally, \(n_{i}\) builds its entire private key \(gsk_{i}=(A_{i},x_{i},y_{i})\).
It is worth noting that only \(n_{i}\) knows \(y_{i}\). The Discrete Logarithm Problem (DLP) protects \(y_{i}\) from being discovered by the TP, which, in fact, only knows \(Y=h_{1}^{-y_{i}}\) and \((A_{i},x_{i})\). In our solution, \(n_{i}\) proves knowledge of the private key share \(y_{i}\) in Zero-Knowledge.
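For illustration, the algebra of the \(Join_{SE}\) exchange can be sketched over a toy Schnorr group (a prime-order subgroup of \(\mathbb{Z}_{p}^{*}\)) standing in for the pairing groups \(\mathbb{G}_{1},\mathbb{G}_{2}\) of BBS; the parameters are deliberately tiny, and the sketch demonstrates only why the check in step 4) succeeds, not a secure implementation.

```python
# Toy sketch of the Join_SE exchange over a Schnorr group (a prime-order
# subgroup of Z_p^*), standing in for the pairing groups G1, G2 of BBS.
# Parameters are deliberately tiny; this only demonstrates the blinding
# algebra of steps 1)-4), not a secure implementation.
import secrets

p, q = 23, 11            # subgroup of order q = 11 inside Z_23^*
g1, h1 = 2, 3            # two elements of order q (toy stand-ins for g_1, h_1)

def rand_exp():
    return 1 + secrets.randbelow(q - 1)      # uniform in Z_q^*

gamma = rand_exp()                           # TP's issuing secret

# 1) node picks y_i and sends the blinded Y = h1^{-y_i}
y = rand_exp()
Y = pow(h1, (-y) % q, p)

# 2) TP picks x_i (retrying if gamma + x_i = 0 mod q), computes
#    A_i = (g1 * Y)^{1/(gamma+x_i)} and H_i = h1^{1/(gamma+x_i)}
x = rand_exp()
while (gamma + x) % q == 0:
    x = rand_exp()
e = pow((gamma + x) % q, -1, q)              # exponent inverse mod q
A = pow(g1 * Y % p, e, p)
H = pow(h1, e, p)

# 3) node answers with B_i = H_i^{-y_i}
B = pow(H, (-y) % q, p)

# 4) TP checks A'_i = B_i * g1^{1/(gamma+x_i)} == A_i, i.e. that the node
#    used the same y_i in steps 1) and 3)
assert B * pow(g1, e, p) % p == A

# 5)-6) TP releases (A_i, x_i); the node assembles gsk_i = (A_i, x_i, y_i)
gsk_i = (A, x, y)
print("Join_SE check passed, gsk_i =", gsk_i)
```

The check succeeds because \(A_{i}=(g_{1}Y)^{1/(\gamma+x_{i})}=g_{1}^{1/(\gamma+x_{i})}\cdot H_{i}^{-y_{i}}\), so only a party knowing \(y_{i}\) can produce the matching \(B_{i}\).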
The \(Sign\), \(Verify\), \(Update\), \(Open\), \(Join\), and \(Revoke\) algorithms in [9] must be adapted accordingly to the new definition of \(gsk_{i}=(A_{i},x_{i},y_{i})\). The adaptation consists of adding the Zero-Knowledge proof of knowledge of the entire group private key \(gsk_{i}\), following the same approach used in constructing \(Join_{SE}\). These adaptations are here omitted for conciseness.
### _Operation_
A node generates its DID and interacts with the other nodes of the network after proper (mutual) authentication. A node \(n_{1}\) computes the proof of membership (i.e. the BBS signature) by running the \(Sign\) algorithm with its \(gsk_{1}\) on a digest computed as \(H(DID_{n_{1}}|nonce)\), where the \(nonce\) is generated by the counterpart node \(n_{2}\) to avoid replay attacks. The node \(n_{2}\) can verify the proof of membership with the \(Verify\) algorithm using the group public key \(gpk\) and can then verify \(n_{1}\)'s ownership of \(sk_{id}\). In case of mutual authentication, the same procedure takes place in the two directions.
In any case, each node must keep its own group private key and the group public key up to date. After the revocation of a key \(gsk_{r}\), all nodes must update their own private key \(gsk_{i}\) and \(gpk\) with the \(Update\) algorithm (see Sect. 7 in [9]). The \(Update\) algorithm requires some knowledge of the revoked private keys. For this reason, the TP publishes a list of such knowledge, i.e. a Revocation List (RL), on the distributed ledger at a well-known \(index_{RL}\). All the nodes in the network can easily access this list by querying the distributed ledger.
According to the new \(Join_{SE}\) algorithm, the RL contains a processed version of the shares of the revoked private keys \((gsk_{r}^{*},\ldots,gsk_{s}^{*})\) known to the TP. The RL has the form:
\[\{gsk_{r}^{*},\ldots,gsk_{s}^{*};TS;Signature_{sk_{TP}}\}\]
where \(TS\) is a timestamp that provides the date and time of the list, and \(Signature_{sk_{TP}}\) is the signature of the TP. The nodes whose group private key \(gsk_{r}\) is in the RL cannot update their own \(gsk_{r}\) by design of the \(Update\) algorithm [9]; hence they are no longer able to generate a valid proof of membership.
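The node-side revocation flow can be summarized with the following sketch, in which `read_ledger`, `verify`, and `bbs_update` are hypothetical stand-ins (supplied by the caller) for the ledger query at \(index_{RL}\), the TP signature check with \(pk_{TP}\), and the \(Update\) algorithm of [9]; the entry layout and the freshness window are illustrative assumptions, not part of the scheme.

```python
# Sketch of how a node refreshes its group keys from the published RL.
# read_ledger, verify and bbs_update are hypothetical interfaces passed
# in by the caller; the RL entry layout and freshness window below are
# illustrative assumptions.
import json
import time

MAX_RL_AGE_S = 24 * 3600        # assumed freshness window, not from the paper

def refresh_group_keys(read_ledger, verify, bbs_update,
                       index_rl, pk_tp, gsk_i, gpk):
    entry = read_ledger(index_rl)   # assumed: {"shares": [...], "ts": ..., "sig": ...}
    msg = json.dumps({"shares": entry["shares"], "ts": entry["ts"]},
                     sort_keys=True).encode()
    if not verify(pk_tp, msg, entry["sig"]):
        raise ValueError("RL signature check failed")
    if time.time() - entry["ts"] > MAX_RL_AGE_S:
        raise ValueError("RL is stale")
    # Update fails by design when gsk_i itself appears in the RL, so a
    # revoked node can no longer produce a valid membership proof.
    return bbs_update(gsk_i, gpk, entry["shares"])   # -> (gsk_i', gpk')
```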
### _Secret rotation_
A node is able to update its identity keys \((sk_{id},pk_{id})\) for key rotation purposes, and the respective DID and DID Document, in full compliance with the SSI paradigm, without the need to update its \(gsk_{i}\). The node must only generate the new proof when starting the authentication procedure with another node of the network.

In contrast, if the TP needs to update its keys \((sk_{TP},pk_{TP})\), it has to share the new \(pk_{TP}^{\prime}\) with all the nodes, then re-sign the RL with \(sk_{TP}^{\prime}\) and publish it again.

Moreover, if the TP needs to update its group secret key \(tpsk\) for rotation purposes, the TP has to start a new provisioning phase to provide all the nodes with new group keys (i.e. \(gsk_{i}^{\prime}\), \(gpk^{\prime}\)).
### _Network update_
A new node can be deployed on the network without disrupting the operation of the other nodes. The TP concludes the \(Join_{SE}\) procedure with the new node and, then, shares the group public key \(gpk\), \(index_{RL}\), and \(pk_{TP}\) with it.
Conversely, when a node is removed from the network for any reason, the TP performs the \(Revoke\) algorithm and publishes the new RL. This revocation action causes all the other nodes to update their group keys.
It is worth noting that the BBS group signature scheme provides a specific algorithm, named \(Open\), that can be used to trace a signature to a signer (i.e. retrieve a share of \(gsk_{r}\) of the signer from a signature). This tool can be useful for detecting a misbehaving node (e.g. a compromised node) and revoking its group private key \(gsk_{r}\).
### _Critical Analysis_
The following aspects are worth remarking:
* the solution is compliant with the SSI principles, since it does not affect the autonomy of a node to control its identity data (\(sk_{id}\), DID, DID document);
* apart from the provisioning phase where the TP provides the group keys to every node, the solution respects the decentralized nature of SSI;
* the revocation of a group private key implies some operations to be executed by all the other nodes (i.e. they check the latest RL to update their group private keys \(gsk_{i}\) and public key \(gpk\) with the \(Update\) algorithm);
* the TP can be identified as a single point of attack. Both private keys \(sk_{TP}\) and \(tpsk\) must be properly protected. An adversary gaining access to those secrets can add a malicious node to the network and revoke the capability of a legitimate node to make valid proofs of membership. The \(Join_{SE}\) protocol offers protection against adversaries seeking to generate a valid proof of membership on behalf of another node, since the TP does not know the full \(gsk_{i}\);
* the solution scales as the number of nodes increases. However, each revocation triggers the update of the group keys in all the other nodes. Notably, the size of the RL could grow with the number of revocations;
* the solution ensures total flexibility for the node to deal with its DIDs. In fact, once a node is provisioned with a valid \(gsk_{i}\), it can freely create and update its DIDs, proving that they are in a trusted set by means of the BBS signature. Notably, the signature has a constant size and there is no trade-off between the dimension of the proof and other parameters of the solution;
* the security of the BBS group signature scheme relies on the Linear assumption and on the Strong Diffie-Hellman assumption. As a consequence, it can be considered vulnerable to attack by quantum computers [15];
* finally, this solution is based on an already well-established and mature construction (i.e. the BBS scheme), that can be used with minor modifications.
## IV Performance estimation
The feasibility of the two solutions is addressed by estimating and comparing their computational load and expected performance on a target IoT node (i.e. Raspberry Pi\({}^{\textregistered}\) 4 Model B, 4 GB RAM, 1.5 GHz processor [16]).
This work adopts the same methodology applied in [17] to estimate and to compare the execution time of the cryptographic operations in the four different operational phases.
First, we measured on the selected IoT node the execution time of the specific cryptographic algorithms heavily used as elementary building blocks in the two solutions under evaluation (i.e. hash computation, scalar multiplication, exponentiation, and pairing). Table I shows the results of the measurements assuming a 128-bit security level.

The initial benchmark shows that the sha256 hash computation takes 4 µs and is the least expensive cryptographic operation, whereas the pairing computation takes 50.4 ms and is the most expensive one, as expected. As an additional remark, the results in Table I are consistent with the values reported in [17], taking into account the different processor clock speeds of the target nodes (i.e. 1.5 GHz versus 1.2 GHz).
These results are the basis for estimating and comparing the execution time of the two proposed solutions, as reported in the following subsections.
### _Results for Merkle tree-based solution_
The Merkle tree-based solution implies a computational load proportional to the size of the tree that, according to the structure in Fig. 3, depends on the number of leaves on the tree (i.e. DIDs).
Let us denote the number of leaves with \(k\). This value is the key parameter to estimate the execution time in the four operational phases. In fact, it corresponds to the number of seeds \(s_{i}\) to be generated by the HKDF and to the number of inputs to the hash algorithm to derive the indexes of the DIDs (i.e. leaves of the Merkle tree). In addition, the number of leaves \(k\) has an impact on the computational load to generate the Merkle tree, to create a proof of membership using the proper siblings and to verify a proof of membership given the siblings.
Table II reports the number of required computations and the estimated execution times for the selected IoT node, assuming a Merkle tree with 32 leaves (i.e. \(k=32\)). These results neglect the operations executed by the TP and other operations with a limited impact on the computational load of the node (e.g. random number generations). The rows of Table II represent the operational phases; it must be noted that the _Operation_ phase is split between the generation of a proof of membership (i.e. _Proof_) and its verification (i.e. _Verify_), since they can be executed by two distinct nodes (i.e. \(n_{1}\) and \(n_{2}\), as explained in Section II-B).
The generation of \(k\) seeds \(s_{i}\) with HKDF requires, according to [10], the following computations:
* \(2\mathbf{h}\) for the initial HKDF-Extract function, and
* \(2\mathbf{h}k\) to generate \(k\) seeds with HKDF-Expand function.
Moreover, the generation of a Merkle tree from \(k\) seeds requires \(2k-1\) hash computations and, thus, a time equal to \(\mathbf{h}(2k-1)\).
From these remarks, it is possible to state that the _Provisioning_ phase consists of generating \(k\) seeds with HKDF plus constructing the Merkle tree and, thus, requires \(2\mathbf{h}+2\mathbf{h}k+\mathbf{h}(2k-1)=\mathbf{h}(4k+1)\).
On the other hand, the verification of a proof of membership implies \(\mathbf{h}(\log_{2}(k)+1)\) to compare the siblings against the \(ROOT\) value.
Assuming that each node stores only a single master secret \(S\) and regenerates the seeds \(s_{i}\) and the Merkle tree on the fly when needed, thus avoiding the secure storage of the entire set of DIDs, the number of computations, and hence the execution times, can be derived in the same way.
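The hash counts above can be checked with a short sketch; the HKDF stand-in below is a simplified two-hashes-per-output construction chosen only to match the \(2\mathbf{h}+2\mathbf{h}k\) count (it is not the real HKDF of [10]), \(k\) is assumed to be a power of two, and the 4 µs per hash is taken from Table I.

```python
# Sketch of the Merkle tree cost model of Table II with an instrumented
# SHA-256, verifying the closed-form counts h(4k+1) for provisioning and
# h(log2(k)+1) for verification derived above.
import hashlib
import math

H_US = 4                                     # SHA-256 cost from Table I, in microseconds
hash_calls = 0

def h(data: bytes) -> bytes:
    global hash_calls
    hash_calls += 1
    return hashlib.sha256(data).digest()

def hkdf_like_seeds(master: bytes, k: int) -> list:
    # Simplified stand-in for HKDF: 2 hashes for Extract and 2 hashes per
    # Expand output, matching the 2h + 2hk count used above (not real HKDF).
    prk = h(h(master))
    return [h(h(prk + i.to_bytes(4, "big"))) for i in range(k)]

def merkle_tree(seeds):
    level = [h(s) for s in seeds]            # k leaf hashes (the DID digests)
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels                            # levels[-1][0] is ROOT; 2k-1 hashes total

def merkle_proof(levels, index):
    siblings = []
    for level in levels[:-1]:                # free if the tree is kept in memory
        siblings.append(level[index ^ 1])
        index //= 2
    return siblings

def merkle_verify(seed, index, siblings, root):
    node = h(seed)                           # recompute the leaf: 1 hash
    for sib in siblings:                     # log2(k) hashes along the path
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

k = 32                                       # number of leaves, assumed a power of two
hash_calls = 0
seeds = hkdf_like_seeds(b"master-secret-S", k)
levels = merkle_tree(seeds)
assert hash_calls == 4 * k + 1               # provisioning: h(4k+1) = 129 hashes
print(f"provisioning ~ {hash_calls * H_US} us")      # ~516 us at 4 us per hash

hash_calls = 0
root, idx = levels[-1][0], 7
ok = merkle_verify(seeds[idx], idx, merkle_proof(levels, idx), root)
assert ok and hash_calls == int(math.log2(k)) + 1    # verification: 6 hashes
print(f"verification ~ {hash_calls * H_US} us")      # ~24 us
```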
### _Results for BBS-based solution_
The computational load for the BBS-based solution has been estimated by considering the analytical results in [17]. Table III shows the number of required computations and the estimated execution times for the selected IoT node.

This work considers all the optimizations suggested in [17], especially the general low-level optimizations proposed in [18], the optimal Ate pairing implementation in [19], and the suggestions in Section 6 of [9] to eliminate all the pairings in the computation of a BBS signature and to perform only one pairing to verify a signature.
Moreover, the BBS scheme requires several multi-scalar multiplication and exponentiation operations, with a significant computational load. For this reason, Table III denotes a multi-scalar multiplication in \(\mathbb{G}_{1}\) with \(\ell\star\mathbf{m}\), while a multi-scalar exponentiation in \(\mathbb{G}_{T}\) is denoted as \(\ell\star\mathbf{e}\), where \(\ell\) is the total number of multiplication or exponentiation operations to be executed. For example, the second row of Table III reports a double scalar multiplication as \(2\star\mathbf{m}\), while \(3\star\mathbf{e}\) denotes a triple exponentiation.

This work also considers an optimization of these multi-scalar multiplication and exponentiation operations using a generalization of Shamir's trick [20] that, according to [17], allows accelerating these computations by a factor equal to \(\frac{2^{\ell+1}-1}{3\times 2^{\ell-1}}\).

It must be noted that the results for the Operation phase (both Proof and Verify) include a sha256 digest computation that is executed before running the BBS \(Sign\) and \(Verify\) algorithms, respectively, according to Section III-B.
## V Comparative analysis
The two proposed approaches show some similarities. Both solutions take advantage of mature building blocks (i.e. Merkle tree, HKDF, and BBS algorithms) and both comply with SSI principles (i.e. they do not interfere with the decision of a node to create, update or revoke a DID; they merely add the mechanism to prove that the DID belongs to an evolving trusted set). Moreover, both solutions respect the decentralized nature of SSI, because, apart from the initial provisioning phase, they do not strictly require other direct interactions between the TP and the nodes.
The TP is a single point of attack in both solutions, but with a difference. In the Merkle tree-based solution, an adversary capable of gaining access to \(sk_{TP}\) can arbitrarily compromise the list of trusted roots; in the BBS-based solution the adversary must also gain access to \(tpsk\) to be able to add a malicious node to the network, or to revoke the capability of a legitimate node to make valid proofs of membership. In any case, a compromised TP does not have direct access to the critical secrets of the nodes, especially their identity private keys \(sk_{id}\).
The main difference between the two solutions resides in the provisioning phase. In the Merkle tree-based solution a node builds on its own the knowledge needed to generate the proof of membership (i.e. the Merkle tree). In the BBS-based solution the TP generates and provides that knowledge to the node (i.e. \(gsk_{i}\)). The \(tpsk\) is the secret underpinning the group scheme. When the TP needs to update \(tpsk\), for rotation purposes, the TP must start a new provisioning phase with every single node. In the former solution the nodes cyclically refresh their secrets (i.e. the Merkle trees) and share the \(ROOT\) values with the TP during the operation phase without interrupting their operation. Moreover, the adoption of a group signature scheme implies a less efficient revocation procedure, because it requires all nodes to update their group keys \(gsk_{i}\) and \(gpk\) every time the TP revokes a group private key. However, apart from these disadvantages, the BBS-based solution provides full flexibility in DID creation, ensures a constant size for the proof that a DID belongs to a trusted set and does not impose any design constraint on the DID update. In fact, a node can potentially generate on the fly an unlimited number of DIDs, without the need to find a trade-off between the Merkle tree size and the number of interactions with the TP.
As far as the performance of the two solutions is concerned, the Merkle tree-based solution clearly outperforms the BBS-based solution in all the considered operational phases, especially in the Operation (Verify) and Network Update phases. It relies mainly on fast hash computations and requires neither pairing computations at every verification nor specific update operations at every revocation.
For these reasons, the Merkle tree-based approach can be considered the most appropriate solution for IoT networks. On the other hand, the BBS-based solution could be of interest for possible use cases that require a constant/small size for the proof of membership in order to minimise the data exchange between nodes.
## VI Conclusions and future works
This paper has proposed an alternative (mutual) authentication process for a network of IoT nodes leveraging SSI. The main idea is to complement the DID-based verification of the identity private key ownership with the verification of a proof of membership during the (mutual) authentication process. The paper has analyzed two membership solutions, a novel solution designed by the Authors based on Merkle trees and a second solution built as an adaptation of the BBS group signature scheme. The performance evaluation has provided an estimate of the computational load on an IoT node for each method, while the comparative analysis has highlighted the advantages and drawbacks of both solutions.
Future works will focus on (_i_) the adoption of threshold signature schemes to reduce the impact of a possible attack to the TP, and (_ii_) the adoption of dynamic accumulators and their properties to build another possible alternative.
## Acknowledgment
The Authors would like to thank Alberto Carelli for his technical support to the performance estimation of the two proposed solutions.
|
2305.05829 | Constant Approximation for Network Revenue Management with
Markovian-Correlated Customer Arrivals | The Network Revenue Management (NRM) problem is a well-known challenge in
dynamic decision-making under uncertainty. In this problem, fixed resources
must be allocated to serve customers over a finite horizon, while customers
arrive according to a stochastic process. The typical NRM model assumes that
customer arrivals are independent over time. However, in this paper, we explore
a more general setting where customer arrivals over different periods can be
correlated. We propose a model that assumes the existence of a system state,
which determines customer arrivals for the current period. This system state
evolves over time according to a time-inhomogeneous Markov chain. We show our
model can be used to represent correlation in various settings.
To solve the NRM problem under our correlated model, we derive a new linear
programming (LP) approximation of the optimal policy. Our approximation
provides an upper bound on the total expected value collected by the optimal
policy. We use our LP to develop a new bid price policy, which computes bid
prices for each system state and time period in a backward induction manner.
The decision is then made by comparing the reward of the customer against the
associated bid prices. Our policy guarantees to collect at least $1/(1+L)$
fraction of the total reward collected by the optimal policy, where $L$ denotes
the maximum number of resources required by a customer.
In summary, our work presents a Markovian model for correlated customer
arrivals in the NRM problem and provides a new LP approximation for solving the
problem under this model. We derive a new bid price policy and provide a
theoretical guarantee of the performance of the policy. | Jiashuo Jiang | 2023-05-10T01:22:59Z | http://arxiv.org/abs/2305.05829v3 | # Constant Approximation for Network Revenue Management with Markovian-Correlated Customer Arrivals
###### Abstract
The Network Revenue Management (NRM) problem is a well-known challenge in dynamic decision-making under uncertainty. In this problem, fixed resources must be allocated to serve customers over a finite horizon, while customers arrive according to a stochastic process. The typical NRM model assumes that customer arrivals are independent over time. However, in this paper, we explore a more general setting where customer arrivals over different periods can be correlated. We propose a new model that assumes the existence of a system state, which determines customer arrivals for the current period. This system state evolves over time according to a time-inhomogeneous Markov chain. Our model can be used to represent correlation in various settings and synthesizes correlation models developed in previous literature.
To solve the NRM problem under our correlated model, we derive a new linear programming (LP) approximation of the optimal policy. Our approximation provides a tighter upper bound on the total expected value collected by the optimal policy than existing upper bounds. We use our LP to develop a new bid price policy, which computes bid prices for each system state and time period in a backward induction manner. The decision is then made by comparing the reward of the customer against the associated bid prices. Our policy guarantees to collect at least \(1/(1+L)\) fraction of the total reward collected by the optimal policy, where \(L\) denotes the maximum number of resources required by a customer.
In summary, our work presents a new model for correlated customer arrivals in the NRM problem and provides an LP approximation for solving the problem under this model. We derive a new bid price policy and provide a theoretical guarantee of the performance of the policy.
revenue management, Markov chain, approximation algorithm, dynamic programming
## 1 Introduction
Network revenue management (NRM) is a classical problem in stochastic dynamic decision-making with resource constraints. The NRM problem involves allocating \(m\) resources, each with an individual initial capacity, over a finite time horizon of length \(T\). At each time step \(t=1,\ldots,T\), a customer \(t\) arrives, belonging to a type \(j_{t}\in[n]\), and demands a vector \(\boldsymbol{a}_{t}\in\{0,1\}^{m}\) of resources, with a corresponding reward \(r_{t}\). Both \(\boldsymbol{a}_{t}\) and \(r_{t}\) are dependent on the customer type. After observing \(\boldsymbol{a}_{t}\) and \(r_{t}\), an irrevocable decision must be made about whether to serve customer \(t\). If served, \(\boldsymbol{a}_{t}\) units are
consumed from the resources, and a reward \(r_{t}\) is collected. Customer \(t\) can only be served if there are enough remaining resources, i.e., each remaining resource is no smaller than the corresponding component of the demand vector \(\mathbf{a}_{t}\). Note that even if there are enough remaining resources, a customer can be intentionally rejected to save resources for serving customers with higher values that may arrive in the future. The goal of the decision-maker is to maximize the total reward collected from serving customers.
One crucial aspect of the NRM model is how the size and reward \((\mathbf{a}_{t},r_{t})\) are generated, i.e., how the customer type is determined for each customer \(t\). The NRM problem is typically studied under the stochastic setting where \((\mathbf{a}_{t},r_{t})\) are drawn from a distribution. The broad literature on the NRM problem has focused on the _independent_ customer arrival models (e.g. Gallego and Van Ryzin (1994)), where \((\mathbf{a}_{t},r_{t})\) are drawn from a distribution (which can be time-inhomogeneous) independently for each period \(t\). However, as noted in recent studies (Bai et al., 2022; Aouad and Ma, 2022), a major shortcoming of the independent customer arrival model is that it cannot handle the coexistence of large demand volume and large demand variability. Indeed, for the independent customer arrival model, over the entire horizon, the variance of the type \(j\) customer arrivals cannot exceed its expectation, which can be violated in practice. Additionally, demand can be non-stationary and can evolve over time in many business settings due to factors such as macroeconomic issues, external shocks, and fashion trends (Chen et al., 2019; Keskin and Li, 2022). Therefore, to incorporate high variance demand patterns and demand evolution in the marketplace, it is necessary to consider correlation between customer arrivals.

In this paper, we study the NRM problem with correlated customer arrivals and aim to answer the following two research questions. First, how should we model correlated customer arrivals in the NRM problem? Existing literature models the high variance demand pattern and the demand evolution pattern separately. We propose a unified model that incorporates both patterns in a single framework. Second, as the optimal policy is computationally intractable due to the curse of dimensionality, how can we design a near-optimal policy with strong theoretical guarantees? A key step in designing a near-optimal policy is to find a sound approximation of the optimal policy. We propose an approximation under the unified correlated customer arrival model and derive our policy. We measure the performance of our policy by the approximation ratio, which is defined as the ratio of the expected total reward collected by our policy to that of the optimal policy. The formal definition of the approximation ratio is provided in Section 2 after introducing the notations and problem formulation.
### Main Results and Contributions
Our main results can be summarized into three parts. First, we propose a new model of correlated customer arrivals that unifies previous models. Our model assumes the existence of a system state
that transits according to a time-inhomogeneous Markov chain. This allows us to capture previous correlation models while providing greater flexibility in modeling customer arrival patterns. Second, we present a new approximation of the optimal policy that serves as an upper bound on the expected total reward collected by the optimal policy. We demonstrate that our upper bound is tighter than previous upper bounds developed in the literature for correlated arrivals. Third, we derive a near-optimal policy with an approximation ratio of \(1/(1+L)\), where \(L\) denotes the maximum number of resources that a customer would require. Our policy specifies bid prices for each system state and time period, and we serve the customer only if the reward exceeds the associated bid prices. In this way, our policy can be viewed as a generalization of the classical bid price control policy for the NRM problem (Talluri and Van Ryzin, 1998) to the correlated arrival setting. We now provide further illustrations over our main results and make comparisons with previous results in details.
#### 1.1.1 A unified model of correlated customer arrivals
To formalize the concept of having a system state and time-inhomogeneous transition probabilities, we denote the state of the system at each period \(t\) as \(\mathbf{s}_{t}\), which synthesizes the current system information. For example, in settings with finite types of customers, we can let \(\mathbf{s}_{t}\) represent the type \(j_{t}\) of customer \(t\), where each type refers to a certain subset of resources required by the customer and a corresponding reward. The meaning of the state can also extend beyond customer type. In inventory management literature (e.g., Song and Zipkin (1993), Sethi and Cheng (1997), Chen and Song (2001)), the system state refers to the quantity of demand, while in revenue management literature, the system state can represent other statistical characterizations of customers (e.g., Aviv and Pazgal (2005)). The system transits to a new state according to a given probability, denoted by \(p_{t}(\mathbf{s}_{t}=\mathbf{s},\mathbf{s}_{t+1}=\mathbf{s}^{\prime})\) for any possible states \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\) in the state space. The transition of the system state corresponds to the transition on a time-inhomogeneous Markov chain. Therefore, we name our arrival model the Markovian-correlated arrival model. In contrast to the Markovian-modulated arrival model studied in previous literature, we allow the transition probabilities \(p_{t}(\mathbf{s}_{t}=\mathbf{s},\mathbf{s}_{t+1}=\mathbf{s}^{\prime})\) to vary across periods. This generalization enables us to capture high variance demand patterns, as illustrated below.
The high-variance demand pattern in Bai et al. (2022) models possible high variance over customer arrivals. There are \(n\) customer types and each type \(j\) customer has a size \(\mathbf{a}_{j}\) and a reward \(r_{j}\). A random variable \(D\) captures the number of customer arrivals, and a survival rate \(\rho_{t}=P(D\geq t+1|D\geq t)\) is defined. At each period \(t\), given \(D\geq t\), one customer arrives, and customer \(t\) is of type \(j\in[n]\) with probability \(\lambda_{j,t}\). Our Markovian-correlated model can capture the high variance model by letting the state space be \(\{0,1,\ldots,n\}\), where state \(j\in[n]\) denotes that a customer of
type \(j\) has arrived, and state \(0\) denotes no customer arrival. We define the transition probabilities in our model as follows:
\[p_{t}(j,j^{\prime})=\rho_{t-1}\cdot\lambda_{j^{\prime},t},\ p_{t}(j,0)=1-\rho_{t- 1},\ \text{and}\ p_{t}(0,0)=1,p_{t}(0,j)=0\ \forall j,j^{\prime}\in[n],\forall t\in[T],\]
to capture the high variance arrival model.
As illustrated above, our Markovian-correlated model unifies existing correlation models, and it is our modeling contribution to capturing customer arrival correlation in a time-inhomogeneous Markovian manner. In Section 2.1, we provide detailed comparisons with other correlated arrival models proposed in previous literature. Overall, our model provides a more flexible and accurate way to capture customer arrival patterns in many business and economic settings.
#### 1.1.2 A new LP upper bound of the optimal policy.
To derive the optimal policy, we typically solve the dynamic programming (DP) problem. However, due to the curse of dimensionality, solving the DP becomes computationally intractable for large-scale systems. Therefore, it is common to derive a sound approximation of the optimal policy that is computationally tractable. This approximation not only enables practical policy implementation but also serves as an upper bound for the expected total reward collected by the optimal policy, which we use to prove the performance guarantee of our policy. For independent customer arrival models, the most prevalent approximation is the fluid approximation (e.g., Gallego and Van Ryzin (1994)). The fluid approximation is derived by computing the expected consumption of resource capacities in the constraints and the total expected revenue in the objective function through customer arrival distributions. For independent customer arrivals, the fluid approximation is asymptotically tight as the system size scales up (Gallego and Van Ryzin 1994). Constant approximation ratios have also been obtained for the fluid approximation under various settings (e.g., Ma et al. (2020), Baek and Ma (2022)). However, when customer arrivals are correlated, the performance of the optimal policy can be arbitrarily worse than that of the fluid approximation, as shown in Bai et al. (2022) and Aouad and Ma (2022). Therefore, it is essential to propose new approximations for the optimal policy with correlated customer arrivals.
Although the DP is computationally intractable, we exploit the DP formulation to derive our approximation. Motivated by the broad literature on approximate DP (e.g., Powell (2007)), we use a linear combination over the remaining resources to approximate the value-to-go function. Specifically, we denote \(V_{t}^{*}(\mathbf{c},\mathbf{s})\) as the value-to-go function of the DP at period \(t\), given the remaining capacities \(\mathbf{c}=(c_{1},\ldots,c_{m})\) and the current state \(\mathbf{s}\). We approximate \(V_{t}^{*}(\mathbf{c},\mathbf{s})\) by the following linear formulation:
\[V_{t}^{*}(\mathbf{c},\mathbf{s})\approx\theta^{t}(\mathbf{s})+\sum_{i\in[m]}c_{i}\cdot \beta_{i}^{t}(\mathbf{s}),\]
where \(\theta^{t}(\mathbf{s})\) and \(\{\beta_{i}^{t}(\mathbf{s}),\forall i\in[m]\}\) are weights determined for each period \(t\) and each state \(\mathbf{s}\). We develop a linear programming (LP) model for computing the weights \(\{\theta^{t}(\mathbf{s}),\beta_{i}^{t}(\mathbf{s}),\forall i\in[m],\forall t,\forall\mathbf{ s}\}\) and show that its optimal objective value is an upper bound on the expected total reward collected by the optimal policy (the DP value). We also demonstrate that our upper bound is tighter than the previous upper bounds established in the literature for correlated customer arrivals, as discussed in Section 3.1. Overall, our proposed LP well approximates the optimal policy and provides an efficient way to derive the policy for settings with correlated customer arrivals.
#### 1.1.3 A bid price control policy and performance guarantee.
Motivated by backward induction in the DP formulation, we develop a variant of the bid price control policy. Specifically, for each customer type \(j\), given the period \(t\) and the system state \(\mathbf{s}\), we assign a bid price \(\nu_{j}^{t}(\mathbf{s})\). We then use a linear combination of the bid prices to represent the marginal benefits of having an extra \(\mathbf{a}_{j_{t}}\) units of resources, and a customer is served only if its reward exceeds the marginal benefits. The bid prices are computed in a backward induction manner that mimics the backward induction in the approximate DP formulation. We also show that the constructed bid prices can be converted into a feasible solution for our LP upper bound of the optimal policy. By following these steps, we show that our policy enjoys an approximation ratio bound of \(1/(1+L)\), where \(L\) denotes the maximum number of resources that a customer will consume. To the best of our knowledge, we are the first to obtain a constant approximation for the NRM problem with correlated customer arrivals. Our approximation ratio bound guarantees that our policy performs reasonably well compared to the optimal policy, irrespective of the number of time periods, resources, and customer types. Finally, we extend all our results to the more general setting with customer choices, where the decision maker offers an assortment at each period, and the customer chooses one product from the assortment. As detailed in Section 6, our extension to the more general setting with customer choices enhances the practicality and applicability of our results.
### Other Related Literature
We now review other related literature. We first discuss previous methods in the literature on correlated customer arrivals and we then compare against the constrained Markov decision process literature. We finally discuss existing literature on developing near-optimal policies for NRM problems.
The Markovian-modulated demand process is a prevalent way to model correlated customer arrivals, where the system state transits according to a Markov chain over time. This process has been widely studied in the literature of inventory management and supply chain management, and interested readers can refer to Simchi-Levi et al. (2004) for an overview. The Markovian-modulated demand process has also been extensively studied in the pricing literature. For example,
Rustichini and Wolinsky (1995) assumes the system state evolves according to a two-state Markov chain, while Aviv and Pazgal (2005) considers a partially observed Markovian-modulated setting where the system state is unobserved, and only the demand can be observed. The pricing problem for Markovian-modulated settings has also been considered in den Boer et al. (2018) under the context of inventory control. Keskin and Li (2022) assumes the transition probabilities to be unknown and considers pricing with learning. Moreover, in Jia et al. (2023), the Markovian-modulated setting is considered for an online resource allocation problem with a single resource. Compared to the papers above, our Markovian-correlated arrival model allows time-inhomogeneous transition probabilities, which admits a wider range of applications.
Notably, by assuming a Markovian-correlated customer arrival model, our problem falls into the broad literature of constrained Markov decision processes (CMDP), where there are resource constraints over the entire horizon, and the action taken at each period consumes certain resources. Various methods have been proposed to solve CMDP near-optimally, including a linear programming-based approach and a Lagrangian approach (see Altman (1999) and the references therein). The reinforcement learning counterpart of CMDP has also been studied in the literature, where the transition probabilities are assumed to be unknown. The problem is studied in various settings, including Wei et al. (2018), Qiu et al. (2020) for adversarial objectives, Zheng and Ratliff (2020) for bandit feedback, and Efroni et al. (2020), Germano et al. (2023) for more general settings. However, the methods developed in these papers are for the infinite horizon (discounted) average reward setting. For finite horizon settings, only sublinear regret bounds are proved. Compared to the literature above, we are the first to achieve a constant approximation ratio bound for a CMDP with an NRM formulation, where our bound holds irrespective of the number of time periods. Our results extend the existing literature and provide a new framework for solving CMDP problems with NRM formulations in finite horizon settings.
The optimal policy for the NRM problem can be characterized by DP. However, due to the curse of dimensionality, the DP is computationally intractable. Therefore, one mainstream of literature on the NRM problem is to develop approaches that approximate the DP solution. In Adelman (2007), the author proposes using a linear combination over the remaining capacities to approximate the value-to-go function, and the coefficients in the linear combination can be computed through solving a LP, which gives a tighter upper bound than the traditional fluid approximation. Subsequently, the approximate DP approach is further investigated in the literature (e.g., Zhang and Adelman (2009), Farias and Van Roy (2007), Zhang (2011), Meissner and Strauss (2012), Tong and Topaloglu (2014), Kunnumkal and Talluri (2016), Zhang et al. (2022)) under various settings for the NRM problem. Notably, Ma et al. (2020) develops a non-linear approximation to the DP value and obtains a constant approximation ratio. Baek and Ma (2022) and
Simchi-Levi et al. (2022) further investigate the reusable resource settings. Compared to the previous literature on approximate DP approach for NRM problem, we consider correlated customer arrivals. The Lagrangian relaxation approach has also been investigated in the literature for approximating the DP value. For example, Topaloglu (2009) applies the Lagrangian relaxation approach to the NRM problem, and Brown and Smith (2014) applies it to general DP. However, these previous methods do not enjoy a constant approximation ratio bound.
In addition to the approximation ratio bound, the revenue loss bound has also been widely studied for the NRM problem, which measures the additive difference between the expected total reward collected by the proposed policy and that of the optimal policy. One popular way to derive a policy with a strong revenue loss bound is to consider the fluid approximation and use its optimal solution to derive the policies. For example, Talluri and Van Ryzin (1998) proposes a static bid-price policy based on the dual variable of the fluid approximation and proves that the revenue loss is \(O(\sqrt{T})\). Subsequently, Reiman and Wang (2008) shows that by re-solving the fluid approximation once, one can obtain an \(o(\sqrt{T})\) upper bound on the revenue loss. Then, Jasin and Kumar (2012) shows that under a non-degeneracy condition for the fluid approximation, a policy that re-solves the fluid approximation at each time period will lead to an \(O(1)\) revenue loss, which is independent of the horizon \(T\). A later paper Jasin and Kumar (2013) further discusses the relationship between the performances of the control policies and the number of times of re-solving the fluid approximation. Recently, by considering a tighter prophet upper bound, Bumpensanti and Wang (2020) propose an infrequent re-solving policy and show that their policy achieves an \(O(1)\) upper bound on the revenue loss even without the "non-degeneracy" assumption. With a different approach, Vera and Banerjee (2021) proves the same \(O(1)\) upper bound for the NRM problem. Their approach is further generalized in Vera et al. (2021), Freund and Banerjee (2019), Freund and Zhao (2022) for other online decision-making problems. When there can be an infinite number of customer types, a logarithmic revenue loss is achieved in Balseiro et al. (2021), Besbes et al. (2022), Bray (2022), and Jiang et al. (2022) under various conditions. For price-based NRM problems, an \(O(1)\) revenue loss is proved in Wang and Wang (2022) for the resolving heuristics. Notably, Bai et al. (2022) derives a policy that is asymptotically optimal under the high variance demand pattern, which translates into a sublinear regret. Overall, the literature on the revenue loss bound for the NRM problem is extensive and covers various settings and approaches. These results provide valuable insights into the performance of different policies and help practitioners design more effective and efficient revenue management strategies. Our work complements the literature on NRM problem by providing a constant approximation ratio with correlated customer arrivals.
## 2 Problem Formulation
We consider a network revenue management problem, where there are \(m\) resources and each resource \(i\in[m]\) has an initial capacity \(C_{i}\in\mathbb{R}_{\geq 0}\). There are \(n\) products and we have a binary variable \(a_{i,j}\in\{0,1\}\) denoting whether product \(j\) would require one unit of resource \(i\) to produce, for each \(j\in[n]\) and \(i\in[m]\). There are \(T\) discrete time periods and at each period \(t\in[T]\), one customer arrives, denoted as customer \(t\). Customer \(t\) would require one product and we call customer \(t\) a _type-\(j\)_ customer if customer \(t\) requires product \(j\). Each customer is associated with a size and a reward. For a customer of type \(j\), the size is denoted by \(\boldsymbol{a}_{j}=(a_{j,1},\ldots,a_{j,m})\in\{0,1\}^{m}\) and the reward is denoted by \(r_{j}\). We assume that the type of customer \(t\) is realized as \(j\) with a given probability, for each \(j\in[n]\). However, this probability would depend on the type realizations of previous customers, which reflects the correlation of customer arrivals. To be more concrete, we use \(\boldsymbol{s}_{t}\) to denote the _state_ of period \(t\) and we denote by \(\mathcal{S}\) the state space. The type of customer \(t\) is determined by the system state. Given \(\boldsymbol{s}_{t}\), the type of customer \(t\) is determined as \(j(\boldsymbol{s}_{t})\).
After customer \(t\) arrives and the state \(\boldsymbol{s}_{t}\) is revealed, the decision maker has to decide immediately and irrevocably whether or not to serve customer \(t\). Note that customer \(t\) can only be served if for every resource \(i\) its remaining capacity is at least \(a_{j(\boldsymbol{s}_{t}),i}\). By serving customer \(t\), each resource \(i\) will be consumed by \(a_{j(\boldsymbol{s}_{t}),i}\) units and a reward \(r_{j(\boldsymbol{s}_{t})}\) will be collected by the decision maker. Then, in the next period \(t+1\), the state transitions to \(\boldsymbol{s}_{t+1}\) with probability \(p_{t}(\boldsymbol{s}_{t},\boldsymbol{s}_{t+1})\). The goal of the decision maker is to maximize the total reward collected during the entire horizon subject to the resource capacity constraints.
Any online policy \(\pi\) for the decision maker is specified by a set of decision variables \(\{x_{t}^{\pi}\}_{\forall t\in[T]}\), where \(x_{t}^{\pi}\) is a binary variable and denotes whether customer \(t\) is served or not, for all \(t\in[T]\). Note that \(x_{t}^{\pi}\) can be stochastic if \(\pi\) is a randomized policy. Any policy \(\pi\) is feasible if for all \(t\in[T]\), \(x_{t}^{\pi}\) depends only on the problem instance \(\{p_{t}(\boldsymbol{s},\boldsymbol{s}^{\prime}),\forall t\in[T],\forall \boldsymbol{s},\boldsymbol{s}^{\prime}\in\mathcal{S}\}\) and the realizations up to now \(\{\boldsymbol{s}_{1},\ldots,\boldsymbol{s}_{t}\}\), and the following capacity constraint is satisfied:
\[\sum_{t=1}^{T}a_{j(\boldsymbol{s}_{t}),i}\cdot x_{t}^{\pi}\leq C_{i},\ \ \forall i\in[m]. \tag{1}\]
The total collected value of policy \(\pi\) is given by \(V^{\pi}(I)=\sum_{t=1}^{T}r_{j(\boldsymbol{s}_{t})}\cdot x_{t}^{\pi}\), where \(I=\{\boldsymbol{s}_{t},\forall t\in[T]\}\) denotes the sample path.
Our goal is to develop a feasible polynomial-time online policy \(\pi\) for the decision maker to maximize \(\mathbb{E}_{I\sim\mathcal{F}}[V^{\pi}(I)]\), where we use \(\mathcal{F}=\{p_{t}(\boldsymbol{s},\boldsymbol{s}^{\prime}),\forall t\in[T], \forall\boldsymbol{s},\boldsymbol{s}^{\prime}\in\mathcal{S}\}\) to denote the problem instance for notation simplicity. The benchmark is the _optimal online policy_, which we denote by \(\pi^{*}\) and can be obtained as the solution to the following problem:
\[\pi^{*}=\operatorname*{argmax}_{\pi}\mathbb{E}_{I\sim\mathcal{F}}[V^{\pi}(I)] \tag{2}\]
For any feasible online policy \(\pi\), we use the _approximation ratio_ to measure its performance, which is defined as follows:
\[\gamma(\pi):=\inf_{\mathcal{F}}\frac{\mathbb{E}_{I\sim\mathcal{F}}[V^{\pi}(I)]}{ \mathbb{E}_{I\sim\mathcal{F}}[V^{\pi^{*}}(I)]}. \tag{3}\]
Note that in the definition (3), the performance of the online policy \(\pi\) is minimized over the problem instance given by the probabilities \(\{p_{t}(\mathbf{s},\mathbf{s}^{\prime}),\forall t\in[T],\forall\mathbf{s},\mathbf{s}^{\prime} \in\mathcal{S}\}\), instead of the support for customers' size and reward \(\{(r_{j},\mathbf{a}_{j}),\forall j\in[n]\}\). Indeed, as we will show later (see discussions in Section 5.3), the approximation ratio of any feasible online policy would inevitably depend on the support \(\{(r_{j},\mathbf{a}_{j}),\forall j\in[n]\}\). As a result, the approximation ratio of our policy also depends on \(\{(r_{j},\mathbf{a}_{j}),\forall j\in[n]\}\).
### Customer Arrival Correlation Model
We now provide more discussions and illustrations over our approach to model the correlation of the customer arrival. We also compare with existing approaches in the literature that model correlated customer arrivals.
The transition of the state \(\mathbf{s}_{t}\) corresponds to a Markov chain. In this sense, our arrival model is analogous to the so-called _Markovian-modulated_ demand process, which has been extensively studied in the literature of inventory management (e.g., Song and Zipkin (1993), Sethi and Cheng (1997), Chen and Song (2001)). The use of Markovian-modulated demand processes has been reported in Simchi-Levi et al. (2004) for a wider range of applications in supply chain management. There has also been a use of Markovian-modulated demand processes in the revenue management and pricing literature, see, for example, Rustichini and Wolinsky (1995), Aviv and Pazgal (2005), and Keskin and Li (2022). Therefore, it is common in practice to assume that customer demand would arrive according to a Markov process, and the traditional Poisson arrival process for the NRM problem (Gallego and Van Ryzin 1994) can be viewed as a special case of the Markov arrival process. However, the Markovian-modulated demand process studied in the previous papers all assumes that the transition between states is homogeneous over the entire horizon. In contrast, in our customer arrival model, we allow the transition probabilities to be _non-homogeneous_ across time, which allows us to capture more customer arrival models, especially correlated ones, that have been studied in the literature as special cases. We illustrate in the following paragraphs.
**Independent arrival model.** It is clear that if the state transition probabilities in our model are independent of the current state, i.e., \(p_{t}(\mathbf{s},\mathbf{s}^{\prime})\) are common for each \(\mathbf{s}^{\prime}\in\mathcal{S}\), then our arrival model recovers the independent customer arrival model. Moreover, the non-homogeneity of \(\{p_{t}(\mathbf{s},\mathbf{s}^{\prime}),\forall\mathbf{s},\mathbf{s}^{\prime}\in\mathcal{S}\}\) over \(t\) allows us to capture the non-stationarity of customer arrivals, as studied in Ma et al. (2020) and Jiang et al. (2020). We now focus on the correlated arrival models studied in the literature.
**High-variance correlated model in Bai et al. (2022).** In Bai et al. (2022), the following model is adopted to capture the possible high variance in the arrival of each type of customer. A random variable \(D\) is used to capture the number of customer arrivals, and a survival rate \(\rho_{t}=P(D\geq t+1|D\geq t)\) is defined. Then, at each period \(t\), conditional on \(D\geq t\), one customer arrives, and customer \(t\) is of type \(j\in[n]\) with probability \(\lambda_{j,t}\). The high-variance model can be captured by letting \(\mathcal{S}=\{0,1,\ldots,n\}\), where state \(j\in[n]\) denotes the customer is of type \(j\), and state \(0\) denotes no customer arrival. Then, the transition probabilities in our model can be defined as follows:

\[p_{t}(j,j^{\prime})=\rho_{t-1}\cdot\lambda_{j^{\prime},t},\ p_{t}(j,0)=1-\rho_ {t-1},\ \text{and}\ p_{t}(0,0)=1,\ p_{t}(0,j)=0,\ \forall j,j^{\prime}\in[n],\forall t\in[T],\]
to capture the high variance arrival model.
**INDEP and CORREL correlated models in Aouad and Ma (2022).** The INDEP model decides the total number of arrivals of type \(j\) customers according to a distribution for each \(j\in[n]\), denoted as \(D_{j}\). The CORREL model samples the total number of arrivals \(D\) from a distribution and assigns each arrival to a type \(j\) with probability \(p_{j}\), for each \(j\in[n]\). Then, all customers in INDEP and CORREL arrive in a uniformly random order. We can define the state at each period to be the number of customers of each type that have arrived up to that point and write out the transition probabilities accordingly. In this way, INDEP and CORREL can be expressed by our model with an exponentially large number of states.
**Correlation arrival model in Truong and Wang (2019).** The arrival model in Truong and Wang (2019) is similar to ours, where they assume there is an exogenous state information \(S_{t}\) at each period \(t\) that determines the type (or type distribution) of the customer. We can think of the information \(S_{t}\) in Truong and Wang (2019) as the state \(\boldsymbol{s}_{t}\) in our model. The requirement of knowing the joint distribution of \(S_{t},\forall t\in[T]\) in Truong and Wang (2019) is also analogous to the assumption of knowing the transition probabilities \(\{p_{t}(\boldsymbol{s},\boldsymbol{s}^{\prime}),\forall\boldsymbol{s}, \boldsymbol{s}^{\prime}\in\mathcal{S}\}\) in our model.
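To make the state-transition view concrete, the following is a small simulation sketch of the Markovian-correlated arrival model, instantiated with the high-variance construction above; the survival rates \(\rho_{t}\), type probabilities \(\lambda_{j,t}\), and the (simplified) time indexing are illustrative assumptions.

```python
# Simulation sketch of the Markovian-correlated arrival model,
# instantiated with the high-variance construction above: state 0 is
# absorbing ("no further arrivals") and states 1..n are customer types.
import numpy as np

rng = np.random.default_rng(0)
T, n = 5, 2
rho = np.full(T, 0.8)                       # survival rates rho_t (illustrative)
lam = np.tile([0.3, 0.7], (T, 1))           # type probabilities lambda_{j,t}

def P(t):
    """Transition matrix p_t over states {0, 1, ..., n}."""
    M = np.zeros((n + 1, n + 1))
    M[0, 0] = 1.0                           # p_t(0, 0) = 1: absorbing
    M[1:, 0] = 1.0 - rho[t]                 # p_t(j, 0) = 1 - rho_t
    M[1:, 1:] = rho[t] * lam[t]             # p_t(j, j') = rho_t * lambda_{j',t}
    return M

# sample one trajectory s_1, ..., s_T of system states
s = 1 + rng.choice(n, p=lam[0])             # initial state: some type in {1,...,n}
traj = [int(s)]
for t in range(T - 1):
    s = rng.choice(n + 1, p=P(t)[s])
    traj.append(int(s))
print("states:", traj)                      # 0 marks "no arrival" periods
```

Replacing `P(t)` with arbitrary time-indexed stochastic matrices recovers the general model, including the independent and Markovian-modulated special cases discussed above.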
## 3 An Approximation of the Optimal Policy
In this section, we derive an approximation of the optimal policy to serve as an upper bound on the expected total reward collected by the optimal online policy. The upper bound is derived from the dynamic programming (DP) formulation of the optimal online policy, where we regard the combination of the remaining capacities of each resource, denoted by \(\boldsymbol{c}_{t}=(c_{t,1},\ldots,c_{t,m})\), and \(\boldsymbol{s}_{t}\) as the current state in the DP formulation. However, due to the curse of dimensionality, the state space for the DP can be exponentially large. Therefore, we apply a linear approximation to the DP formulation, which not only enables us to derive a linear program (LP) that serves as an upper bound on the optimal online policy but also motivates our online policy.

Denote by \(V_{t}^{*}(\mathbf{c},\mathbf{s})\) the value-to-go function at period \(t\), given the remaining capacity \(\mathbf{c}\) and the current state \(\mathbf{s}\). The backward induction can be given as follows:
\[V_{t}^{*}(\mathbf{c},\mathbf{s}) = \max\left\{\mathbbm{1}_{\mathbf{c}\geq\mathbf{a}_{j(\mathbf{s})}}\cdot\left(r_ {j(\mathbf{s})}+\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime}) \cdot V_{t+1}^{*}(\mathbf{c}-\mathbf{a}_{j(\mathbf{s})},\mathbf{s}^{\prime})\right),\sum_{\mathbf{ s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot V_{t+1}^{*}(\mathbf{c}, \mathbf{s}^{\prime})\right\}. \tag{4}\]
Then, the expected collected reward for the optimal online policy can be given by the DP value, i.e., \(\mathbb{E}_{I\sim\mathcal{F}}[V^{\pi^{*}}(I)]=\sum_{\mathbf{s}\in\mathcal{S}}p_{1} (\mathbf{s})\cdot V_{1}^{*}(\mathbf{C},\mathbf{s})\) where we use \(p_{1}(\mathbf{s})\) to denote the probability that the initial state \(\mathbf{s}_{1}\) is realized as \(\mathbf{s}\), for all \(\mathbf{s}\in\mathcal{S}\). Note that the backward induction (4) admits the following equivalent LP formulation:
\[\min \sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot V_{1}(\mathbf{C},\mathbf{s})\] (5a) s.t. \[V_{t}(\mathbf{c},\mathbf{s}) \geq r_{j(\mathbf{s})}+\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s}, \mathbf{s}^{\prime})\cdot V_{t+1}(\mathbf{c}-\mathbf{a}_{j(\mathbf{s})},\mathbf{s}^{\prime}),\forall\mathbf{c}\geq\mathbf{a}_{j(\mathbf{s})},\forall t\in[T],\forall\mathbf{s}\in \mathcal{S} \tag{5b}\] \[V_{t}(\mathbf{c},\mathbf{s}) \geq \sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime} )\cdot V_{t+1}(\mathbf{c},\mathbf{s}^{\prime}),\forall\mathbf{c},\forall t\in[T], \forall\mathbf{s}\in\mathcal{S}\] (5c) \[V_{t}(\mathbf{c},\mathbf{s}) \geq 0,\forall\mathbf{c},\forall t\in[T],\forall\mathbf{s}\in\mathcal{S}. \tag{5d}\]

Here, we regard each \(V_{t}(\mathbf{c},\mathbf{s})\) as a decision variable, which represents the dynamic programming value-to-go \(V_{t}^{*}(\mathbf{c},\mathbf{s})\), with the convention that \(V_{T+1}(\mathbf{c},\mathbf{s})=0\) for all \(\mathbf{c}\) and \(\mathbf{s}\). It is known that in an optimal solution, the decision variable \(V_{t}(\mathbf{c},\mathbf{s})\) equals \(V_{t}^{*}(\mathbf{c},\mathbf{s})\) (e.g. Schweitzer and Seidmann (1985), De Farias and Van Roy (2003), Adelman (2007)). However, since there can be exponentially many possible values for \(\mathbf{c}\), the LP (5) has exponentially many decision variables. We apply a linear approximation to reduce the number of decision variables. To be specific, we restrict the decision variable \(V_{t}(\mathbf{c},\mathbf{s})\) to the following linear formulation:
\[V_{t}(\mathbf{c},\mathbf{s})=\theta^{t}(\mathbf{s})+\sum_{i\in[m]}c_{i}\cdot\beta_{i}^{t}( \mathbf{s}),\ \ \forall\mathbf{c} \tag{6}\]
where \(\{\theta^{t}(\mathbf{s}),\beta_{i}^{t}(\mathbf{s}),\forall i\in[m],\forall\mathbf{s}\in \mathcal{S},\forall t\in[T]\}\) is a set of _non-negative_ parameters to be determined later. Plugging the linear approximation (6) into the LP (5), we derive a simplified (and further relaxed) LP with decision variables being \(\{\theta^{t}(\mathbf{s}),\beta_{i}^{t}(\mathbf{s}),\forall i\in[m],\forall\mathbf{s}\in \mathcal{S},\forall t\in[T]\}\).
\[\min \sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot\left(\theta^{1}(\bm {s})+\sum_{i\in[m]}C_{i}\cdot\beta_{i}^{1}(\mathbf{s})\right) \tag{7a}\] \[\text{s.t.} \theta^{t}(\mathbf{s})-\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\bm {s},\mathbf{s}^{\prime})\cdot\theta^{t+1}(\mathbf{s}^{\prime})\geq\left[r_{j(\mathbf{s})} -\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\sum_{i \in[m]}a_{i,j(\mathbf{s})}\cdot\beta_{i}^{t+1}(\mathbf{s}^{\prime})\right]^{+}\] \[\qquad+\sum_{i\in[m]}C_{i}\cdot\left[\sum_{\mathbf{s}^{\prime}\in \mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\beta_{i}^{t+1}(\mathbf{s}^{\prime })-\beta_{i}^{t}(\mathbf{s})\right]^{+},\ \ \ \ \forall t\in[T],\forall\mathbf{s}\in \mathcal{S}\] (7b) \[\theta^{t}(\mathbf{s})\geq 0,\beta_{i}^{t}(\mathbf{s})\geq 0,\forall i\in[m], \ \ \ \ \forall t\in[T],\forall\mathbf{s}\in\mathcal{S}. \tag{7c}\]
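For intuition, the two \([\cdot]^{+}\) operators in constraint (7b) can be traced back to substituting (6) into (5b) and (5c); the following is a sketch of this step rather than the full relaxation argument. Substituting (6) into (5b) and rearranging gives, for every feasible \(\mathbf{c}\),

\[\theta^{t}(\mathbf{s})-\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\theta^{t+1}(\mathbf{s}^{\prime})\geq r_{j(\mathbf{s})}-\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\sum_{i\in[m]}a_{i,j(\mathbf{s})}\cdot\beta_{i}^{t+1}(\mathbf{s}^{\prime})+\sum_{i\in[m]}c_{i}\cdot\left(\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\beta_{i}^{t+1}(\mathbf{s}^{\prime})-\beta_{i}^{t}(\mathbf{s})\right).\]

Since this inequality must hold for every \(\mathbf{c}\) with \(0\leq c_{i}\leq C_{i}\), each term involving \(c_{i}\) can be bounded by evaluating \(c_{i}\) at \(C_{i}\) whenever its coefficient is positive and at \(0\) otherwise, which yields the second \([\cdot]^{+}\) term in (7b); taking the maximum of the remaining right-hand side with the analogous expression obtained from (5c), where the reward term is absent, produces the first \([\cdot]^{+}\) term.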
Note that though there are \([\cdot]^{+}\) operators, the above optimization problem is indeed equivalent to an LP. To see this point, for each \([\cdot]^{+}\) operator, we can introduce a new non-negative variable to represent its value. For example, for each \(i\in[m]\), \(t\in[T]\) and \(\mathbf{s}\in\mathcal{S}\), we can introduce a new decision variable \(\eta_{i,t}(\mathbf{s})\) with new constraints

\[\eta_{i,t}(\mathbf{s})\geq 0\text{ and }\eta_{i,t}(\mathbf{s})\geq\sum_{\mathbf{s}^{\prime}\in \mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\beta_{i}^{t+1}(\mathbf{s}^{\prime}) -\beta_{i}^{t}(\mathbf{s}).\]

It is clear to see that in an optimal solution, \(\eta_{i,t}(\mathbf{s})\) would represent the value of \(\left[\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot \beta_{i}^{t+1}(\mathbf{s}^{\prime})-\beta_{i}^{t}(\mathbf{s})\right]^{+}\) in the constraint (7b). The formulation (7) is chosen to simplify the derivation, as we will construct a feasible solution to (7) to prove our approximation ratio bound, as detailed in Section 5.2.
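To make LP (7) concrete, the following is a minimal sketch that builds it for a toy single-resource instance, assuming the PuLP package is available; the variables `eta` and `mu` implement the \([\cdot]^{+}\) linearization just described, the terminal weights \(\theta^{T+1},\beta^{T+1}\) are fixed to zero, and all instance data (states, rewards, transition probabilities) are illustrative assumptions rather than part of the paper.

```python
# Sketch of LP (7) for a toy instance with m = 1 resource and states
# {0, 1, 2}, where state 0 means "no arrival" and states 1, 2 are
# customer types (all numbers illustrative).
import pulp

T, m, n = 3, 1, 2
C = [2.0]                                  # initial capacities C_i
r = {0: 0.0, 1: 1.0, 2: 3.0}               # reward r_{j(s)} of each state s
a = {0: [0], 1: [1], 2: [1]}               # sizes a_{i,j(s)}
S = range(n + 1)
p1 = {0: 0.0, 1: 0.4, 2: 0.6}              # initial state distribution

def p(t, s, sp):
    """Toy time-homogeneous transitions p_t(s, s'); state 0 is absorbing."""
    if s == 0:
        return 1.0 if sp == 0 else 0.0
    return {0: 0.2, 1: 0.3, 2: 0.5}[sp]

prob = pulp.LpProblem("LP7", pulp.LpMinimize)
theta = pulp.LpVariable.dicts("theta", (range(1, T + 2), S), lowBound=0)
beta = pulp.LpVariable.dicts("beta", (range(1, T + 2), range(m), S), lowBound=0)
eta = pulp.LpVariable.dicts("eta", (range(1, T + 1), S), lowBound=0)           # first [.]^+
mu = pulp.LpVariable.dicts("mu", (range(1, T + 1), range(m), S), lowBound=0)   # second [.]^+

# objective (7a)
prob += pulp.lpSum(p1[s] * (theta[1][s]
                   + pulp.lpSum(C[i] * beta[1][i][s] for i in range(m)))
                   for s in S)

for t in range(1, T + 1):
    for s in S:
        # eta >= [ r - E[ sum_i a_i beta_i^{t+1} ] ]^+  (eta >= 0 via lowBound)
        prob += eta[t][s] >= r[s] - pulp.lpSum(
            p(t, s, sp) * a[s][i] * beta[t + 1][i][sp]
            for sp in S for i in range(m))
        for i in range(m):
            # mu_i >= [ E[beta_i^{t+1}] - beta_i^t ]^+
            prob += mu[t][i][s] >= pulp.lpSum(
                p(t, s, sp) * beta[t + 1][i][sp] for sp in S) - beta[t][i][s]
        # constraint (7b)
        prob += (theta[t][s]
                 - pulp.lpSum(p(t, s, sp) * theta[t + 1][sp] for sp in S)
                 >= eta[t][s] + pulp.lpSum(C[i] * mu[t][i][s] for i in range(m)))

# terminal condition: value-to-go after T is zero
for s in S:
    prob += theta[T + 1][s] == 0
    for i in range(m):
        prob += beta[T + 1][i][s] == 0

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("upper bound V_hat* =", pulp.value(prob.objective))
```

At optimality the minimization pushes `eta` and `mu` down to exactly the positive parts, so the printed value is the bound \(\hat{V}^{*}\) of Lemma 1 for this toy instance.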
In the following lemma, we show that the optimal objective value of LP (7) serves as an upper bound of the optimal online policy, with the formal proof relegated to Appendix A.
**Lemma 1**: _Denote by \(\hat{V}^{*}\) the optimal objective value of LP (7). It holds that_
\[\hat{V}^{*}\geq\mathbb{E}_{I\sim\mathcal{F}}[V^{\pi^{*}}(I)].\]
Therefore, throughout the paper, we compare against LP (7).
### Relationship with Other Upper Bounds
We now discuss the relationship between our upper bound LP (7) and other upper bounds established in the literature. The most recent result on the upper bound for the correlated customer arrival model is the _universal fluid_ upper bound established in Bai et al. (2022) for the high variance correlated model described in Section 2.1. A similar LP upper bound has also been established in Aouad and Ma (2022), named _conditioned LP_ upper bound. Both upper bounds share the same idea for not having the distribution information of the customer arrivals in the constraints, which is in contrast to the traditional fluid upper bound.
We now show that our upper bound LP (7) is in fact tighter. We illustrate this under the high-variance correlated model. In our language, as illustrated in Section 2.1, the high-variance model can be described as having state space \(\mathcal{S}=\{0,1,\ldots,n\}\), where state \(j\in[n]\) denotes that the arriving customer is of type \(j\) and state \(0\) denotes that no customer arrives. Also, for each period \(t\in[T]\), conditional on customer \(t\) arriving, customer \(t\) is of type \(j\in[n]\) with probability \(\lambda_{j,t}\). Then, the universal fluid upper bound can be given as follows.
\[V^{\mathrm{UF}}= \max\ \sum_{t\in[T]}\sum_{j\in[n]}P(\mathbf{s}_{t}>0)\cdot r_{j}\cdot x_{ j,t} \tag{8a}\] \[\text{s.t.}\ \sum_{t\in[T]}\sum_{j\in\mathcal{B}_{i}}x_{j,t}\leq C _{i},\ \ \forall i\in[m]\] (8b) \[\ x_{j,t}\leq\lambda_{j,t},\ \ \forall j\in[n],\forall t\in[T]. \tag{8c}\]
where \(\mathcal{B}_{i}\) denotes the set of customer types that require at least one unit of resource \(i\) to be served. Here, the variable \(x_{j,t}\) can be interpreted as the probability that customer \(t\) is of type \(j\) and is served, conditional on customer \(t\) arriving.
**Proposition 1**: _Under the high-variance correlated model described in Section 2.1, it holds that_
\[\text{LP }(7)\leq V^{\text{UF}}\]
_with \(V^{\text{UF}}\) given in (8) as the universal fluid upper bound of the optimal policy._
The formal proof of Proposition 1 is relegated to Appendix A. The key step of the proof is to consider the dual LP of \(V^{\text{UF}}\) in (8). Then, we use the optimal solution of the dual LP to construct a feasible solution to LP (7) with the same objective value. Note that it has been shown in Bai et al. (2022) that \(V^{\text{UF}}\) in (8) is asymptotically tight with respect to the optimal policy as the initial capacities \(\mathbf{C}\) are scaled up to infinity. Then, following Proposition 1, we know that LP (7) is also asymptotically tight as the initial capacities \(\mathbf{C}\) are scaled up to infinity.
## 4 Description of Our Policy
In this section, we derive our policy. Our policy is motivated by the DP formulation given in (4). Note that in the DP, given the state \(\mathbf{s}_{t}\), we serve customer \(t\) as long as \(\mathbf{c}_{t}\geq\mathbf{a}_{j(\mathbf{s}_{t})}\) and
\[r_{j(\mathbf{s}_{t})}\geq\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}( \mathbf{s}_{t},\mathbf{s}^{\prime})\cdot\left(V_{t+1}^{*}(\mathbf{c}_{t},\mathbf{s}^{\prime})- V_{t+1}^{*}(\mathbf{c}_{t}-\mathbf{a}_{j(\mathbf{s}_{t})},\mathbf{s}^{\prime})\right), \tag{9}\]
which follows directly from the DP formulation (4). We follow the same intuition of the decision rule in (9) to derive our policy. To be specific, denote by \(\pi\) our policy and denote by \(H_{t}^{\pi}(\mathbf{c},\mathbf{s})\) the total expected reward collected by the policy \(\pi\) from period \(t\) to period \(T\), given the remaining capacity at period \(t\) is \(\mathbf{c}\) and the state of period \(t\) is \(\mathbf{s}\). We set \(y_{t}^{\pi}(\mathbf{s}_{t})=1\) and serve customer \(t\) as long as \(\mathbf{c}_{t}\geq\mathbf{a}_{j(\mathbf{s}_{t})}\) and
\[r_{j(\mathbf{s}_{t})}\geq\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}( \mathbf{s}_{t},\mathbf{s}^{\prime})\cdot\left(H_{t+1}^{\pi}(\mathbf{c}_{t},\mathbf{s}^{\prime} )-H_{t+1}^{\pi}(\mathbf{c}_{t}-\mathbf{a}_{j(\mathbf{s}_{t})},\mathbf{s}^{\prime})\right). \tag{10}\]
Here, the term \(H_{t+1}^{\pi}(\mathbf{c}_{t},\mathbf{s}^{\prime})-H_{t+1}^{\pi}(\mathbf{c}_{t}-\mathbf{a}_{j( \mathbf{s}_{t})},\mathbf{s}^{\prime})\) denotes the marginal increase in the total expected reward collected by \(\pi\) if we do not serve customer \(t\), given that \(\mathbf{s}_{t+1}\) is realized as \(\mathbf{s}^{\prime}\). We compute the expected marginal increase from not serving customer \(t\) by taking an expectation over \(\mathbf{s}_{t+1}\). Customer \(t\) is served, in a myopic sense, if the benefit of serving customer \(t\), namely \(r_{j(\mathbf{s}_{t})}\), exceeds the expected marginal increase from not serving customer \(t\).
The only difficulty in implementing the decision rule (10) lies in computing the term \(H_{t+1}^{\pi}(\mathbf{c}_{t},\mathbf{s}^{\prime})-H_{t+1}^{\pi}(\mathbf{c}_{t}-\mathbf{a}_{j( \mathbf{s}_{t})},\mathbf{s}^{\prime})\). One could indeed use backward induction and follow the decision rule (10) to compute the value of \(H_{t+1}^{\pi}(\mathbf{c}_{t},\mathbf{s}^{\prime})\) for every possible \(\mathbf{c}_{t}\) and every \(\mathbf{s}^{\prime}\in\mathcal{S}\). However, the computational complexity would be the same as solving the DP directly, and we would encounter the curse of dimensionality, i.e., a computational complexity exponential in \(m\). To remedy this issue, we follow the idea of bid price control (Talluri and Van Ryzin 1998) and derive a set of bid prices to approximate the term \(H_{t+1}^{\pi}(\boldsymbol{c}_{t},\boldsymbol{s}^{\prime})-H_{t+1}^{\pi}( \boldsymbol{c}_{t}-\boldsymbol{a}_{j(\boldsymbol{s}_{t})},\boldsymbol{s}^{ \prime})\).
We introduce a bid price \(\nu_{j}^{t}(\boldsymbol{s}^{\prime})\) for each \(t\in[T]\), each \(j\in[n]\) and each \(\boldsymbol{s}^{\prime}\in\mathcal{S}\). Note that each type of customer can require multiple resources simultaneously to be served. For any type \(j\in[n]\), we denote by \(\mathcal{A}_{j}\) the set of resources consumed by serving a type \(j\) customer, i.e., \(\mathcal{A}_{j}=\{i\in[m]:a_{i,j}=1\}\). Analogously, for any resource \(i\in[m]\), we denote by \(\mathcal{B}_{i}\) the set of customer types that require one unit of resource \(i\) to be served, i.e., \(\mathcal{B}_{i}=\{j\in[n]:a_{i,j}=1\}\). We use the bid price \(\nu_{j}^{t}(\boldsymbol{s}^{\prime})\) to approximate the marginal gain from serving customers of type \(j\) if we have _one more unit of resource_ \(i\in\mathcal{A}_{j}\), regardless of the remaining capacities. Then, the benefit of having one more unit of resource \(i\) at period \(t\) given state \(\boldsymbol{s}^{\prime}\) can be approximated by \(\frac{1}{C_{i}}\cdot\sum_{j\in\mathcal{B}_{i}}\nu_{j}^{t}(\boldsymbol{s}^{ \prime})\), where we normalize by the total capacity \(C_{i}\) to guarantee that the approximation is valid regardless of the remaining capacities. Comparing \(\boldsymbol{c}_{t}\) and \(\boldsymbol{c}_{t}-\boldsymbol{a}_{j(\boldsymbol{s}_{t})}\), we have one more unit of resource \(i\) for each \(i\in\mathcal{A}_{j(\boldsymbol{s}_{t})}\). Therefore, the term \(H_{t+1}^{\pi}(\boldsymbol{c}_{t},\boldsymbol{s}^{\prime})-H_{t+1}^{\pi}( \boldsymbol{c}_{t}-\boldsymbol{a}_{j(\boldsymbol{s}_{t})},\boldsymbol{s}^{ \prime})\) can be approximated by
\[\sum_{i\in\mathcal{A}_{j(\boldsymbol{s}_{t})}}\frac{1}{C_{i}}\cdot\sum_{j^{ \prime}\in\mathcal{B}_{i}}\nu_{j^{\prime}}^{t+1}(\boldsymbol{s}^{\prime}).\]
The decision rule (10) can finally be represented as serving customer \(t\) as long as \(\boldsymbol{c}_{t}\geq\boldsymbol{a}_{j(\boldsymbol{s}_{t})}\) and
\[r_{j(\boldsymbol{s}_{t})}\geq\sum_{\boldsymbol{s}^{\prime}\in\mathcal{S}}p_{t} (\boldsymbol{s}_{t},\boldsymbol{s}^{\prime})\cdot\sum_{i\in\mathcal{A}_{j( \boldsymbol{s}_{t})}}\frac{1}{C_{i}}\cdot\sum_{j^{\prime}\in\mathcal{B}_{i}} \nu_{j^{\prime}}^{t+1}(\boldsymbol{s}^{\prime}). \tag{11}\]
Our bid price policy is formally described in Algorithm 1, where we approximate the expected marginal increase \(\sum_{\boldsymbol{s}^{\prime}\in\mathcal{S}}p_{t}(\boldsymbol{s}_{t}, \boldsymbol{s}^{\prime})\cdot\left(H_{t+1}^{\pi}(\boldsymbol{c}_{t}, \boldsymbol{s}^{\prime})-H_{t+1}^{\pi}(\boldsymbol{c}_{t}-\boldsymbol{a}_{j( \boldsymbol{s}_{t})},\boldsymbol{s}^{\prime})\right)\) and we serve customer \(t\) when the reward \(r_{j(\boldsymbol{s}_{t})}\) exceeds the approximation of the expected marginal increase.
### Bid Price Computing
We now describe how to compute the bid price \(\nu_{j}^{t}(\boldsymbol{s})\) for all \(j\in[n]\), \(t\in[T]\) and all \(\boldsymbol{s}\in\mathcal{S}\). The bid price \(\nu_{j}^{t}(\boldsymbol{s})\) is computed in a backward induction from \(T\) to \(1\). To be specific, we set \(\nu_{j}^{T+1}(\boldsymbol{s})=0\) for any \(j\in[n]\) and any \(\boldsymbol{s}\in\mathcal{S}\). Then, for \(t=T,T-1,\ldots,1\), we compute iteratively
\[\nu_{j}^{t}(\boldsymbol{s})=\sum_{\boldsymbol{s}^{\prime}\in\mathcal{S}}p_{t}( \boldsymbol{s},\boldsymbol{s}^{\prime})\cdot\nu_{j}^{t+1}(\boldsymbol{s}^{ \prime})+\mathbbm{1}_{\{j=j(\boldsymbol{s})\}}\cdot\left[r_{j}-\sum_{\boldsymbol {s}^{\prime}\in\mathcal{S}}p_{t}(\boldsymbol{s},\boldsymbol{s}^{\prime})\cdot \sum_{i\in\mathcal{A}_{j}}\frac{1}{C_{i}}\cdot\sum_{j^{\prime}\in\mathcal{B}_{i }}\nu_{j^{\prime}}^{t+1}(\boldsymbol{s}^{\prime})\right]^{+} \tag{12}\]
for every \(j\in[n]\) and every \(\boldsymbol{s}\in\mathcal{S}\). We now provide the intuition behind the backward induction (12). As we have illustrated previously, the term \(\sum_{\boldsymbol{s}^{\prime}\in\mathcal{S}}p_{t}(\boldsymbol{s},\boldsymbol{s} ^{\prime})\cdot\sum_{i\in\mathcal{A}_{j}}\frac{1}{C_{i}}\cdot\sum_{j^{\prime} \in\mathcal{B}_{i}}\nu_{j^{\prime}}^{t+1}(\boldsymbol{s}^{\prime})\) is an approximation of the expected marginal increase from having an extra \(\boldsymbol{a}_{j}\) units of resources.
Then, the term \(\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\sum_{i\in \mathcal{A}_{j}}\frac{1}{C_{i}}\cdot\sum_{j^{\prime}\in\mathcal{B}_{i}}\nu_{j^ {\prime}}^{t+1}(\mathbf{s}^{\prime})\) can be interpreted as the "opportunity cost" for serving customer \(t\) with type \(j\). As a result, by noting the decision rule (11), the term
\[\left[r_{j}-\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime}) \cdot\sum_{i\in\mathcal{A}_{j}}\frac{1}{C_{i}}\cdot\sum_{j^{\prime}\in\mathcal{ B}_{i}}\nu_{j^{\prime}}^{t+1}(\mathbf{s}^{\prime})\right]^{+}\]
accounts for the gain of our policy for the period \(t\), which will be used to update the bid price \(\nu_{j}^{t}(\mathbf{s})\). The bid price update (12) also corresponds to our decision rule (11). Note that we only need to update the bid price \(\nu_{j}^{t}(\mathbf{s})\) for \(j=j(\mathbf{s})\) since the type \(j(\mathbf{s})\) customer arrives at period \(t\) given the state \(\mathbf{s}\).
The backward induction (12) can also be motivated by the approximate DP approach of Ma et al. (2020), with the following differences. For each period \(t\), we define the bid price \(\nu_{j}^{t}(\mathbf{s})\) for each type \(j\in[n]\) and each state \(\mathbf{s}\in\mathcal{S}\), while their approximate DP parameters are defined only for each type \(j\in[n]\). We let our bid price depend on the state to deal with the correlation of the customer arrivals. Also, for each state \(\mathbf{s}\), we update the bid price \(\nu_{j}^{t}(\mathbf{s})\) if and only if \(j=j(\mathbf{s})\), while their approximate DP parameters are updated for each \(j\in[n]\). Finally, we need to take expectations over the state of the next period given the current state in our update (12), which is not needed in the approximate DP approach since they assume independent customer arrivals in each period.
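To make the recursion concrete, the following is a minimal plain-Python sketch of the backward induction (12); all input names (`p`, `j_of`, `A_sets`, `B_sets`, and so on) are hypothetical containers we introduce for illustration, not objects from the paper.

```python
import numpy as np

def compute_bid_prices(T, S, n_types, p, j_of, r, A_sets, B_sets, C):
    """Backward induction (12) for the bid prices nu[t][s][j].

    S           : list(range(num_states)); states are integer indices
    p[t][s][s2] : transition probability from state s at period t to state s2
    j_of[s]     : type of the customer arriving in state s
    A_sets[j]   : resources used by type j;  B_sets[i] : types using resource i
    """
    nu = np.zeros((T + 2, len(S), n_types))      # layer T+1 stays zero
    for t in range(T, 0, -1):
        for s in S:
            # expected continuation bid prices E[nu^{t+1}(s')]
            nu[t][s] = sum(p[t][s][s2] * nu[t + 1][s2] for s2 in S)
            j = j_of[s]
            # expected opportunity cost of serving the arriving type j
            opp = sum(p[t][s][s2]
                      * sum(sum(nu[t + 1][s2][jp] for jp in B_sets[i]) / C[i]
                            for i in A_sets[j])
                      for s2 in S)
            # only the arriving type j = j(s) receives the gain term
            nu[t][s][j] += max(r[j] - opp, 0.0)
    return nu
```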
```
1:Compute the bid prices \(\nu_{j}^{t}(\mathbf{s})\) for any \(j\in[n]\), any \(t\in[T]\) and any \(\mathbf{s}\in\mathcal{S}\), following the backward induction equation described in (12).
2:for t=1,...,T do
3: observe the state \(\mathbf{s}_{t}\) and the remaining capacities \(\mathbf{c}_{t}\).
4:if there exists a resource \(i\in[m]\) such that \(c_{t,i}<a_{i,j(\mathbf{s}_{t})}\)then reject customer \(t\)
5:else serve customer \(t\) if and only if \[r_{j(\mathbf{s}_{t})}\geq\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s}_{t},\mathbf{s }^{\prime})\cdot\sum_{i\in\mathcal{A}_{j(\mathbf{s}_{t})}}\frac{1}{C_{i}}\cdot\sum_ {j^{\prime}\in\mathcal{B}_{i}}\nu_{j^{\prime}}^{t+1}(\mathbf{s}^{\prime}).\] (13)
6:endif
7:endfor
```
**Algorithm 1** Bid Price Policy
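For readers who prefer code, a matching sketch of the online phase (lines 2-7 of Algorithm 1), under the same hypothetical data layout as `compute_bid_prices` above:

```python
def serve_decision(t, s, c, nu, p, j_of, r, A_sets, B_sets, C, S):
    """Online decision rule (13). c[i] is the remaining capacity of
    resource i at period t; returns True iff customer t is served."""
    j = j_of[s]
    # capacity check: a_{i,j} = 1 exactly for i in A_sets[j]
    if any(c[i] < 1 for i in A_sets[j]):
        return False          # reject: not enough remaining capacity
    threshold = sum(p[t][s][s2]
                    * sum(sum(nu[t + 1][s2][jp] for jp in B_sets[i]) / C[i]
                          for i in A_sets[j])
                    for s2 in S)
    return r[j] >= threshold  # serve iff reward clears the bid-price threshold
```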
## 5 Algorithm Analysis
In this section, we prove the approximation ratio bound of our Algorithm 1. Specifically, define
\[L=\max_{j\in[n]}\left\{\sum_{i\in[m]}a_{i,j}\right\}. \tag{14}\]
We show that Algorithm 1 collects an expected total reward at least a \(1/(1+L)\) fraction of that of the optimal policy. Our proof is based on the construction of the bid prices \(\nu_{j}^{t}(\boldsymbol{s})\) described in (12) and proceeds in two steps. In the first step, we show that the expected total reward collected by Algorithm 1 can be lower bounded by \(\sum_{\boldsymbol{s}\in\mathcal{S}}p_{1}(\boldsymbol{s})\cdot\sum_{j\in[n]} \nu_{j}^{1}(\boldsymbol{s})\). In the second step, we show that the expected total reward collected by the optimal policy is upper bounded by \((1+L)\cdot\sum_{\boldsymbol{s}\in\mathcal{S}}p_{1}(\boldsymbol{s})\cdot\sum_{ j\in[n]}\nu_{j}^{1}(\boldsymbol{s})\), which implies the \(1/(1+L)\) approximation ratio of our policy. The key point in the second step is to use the bid prices \(\nu_{j}^{t}(\boldsymbol{s})\) described in (12) to construct a feasible solution to LP (7). Then, with Lemma 1, we prove our approximation ratio bound.
### Lower Bound the Total Reward of Algorithm 1
We now show that the total expected reward collected by Algorithm 1, which we denote by \(\pi\), is lower bounded by \(\sum_{\boldsymbol{s}\in\mathcal{S}}p_{1}(\boldsymbol{s})\cdot\sum_{j\in[n]} \nu_{j}^{1}(\boldsymbol{s})\), with \(\nu_{j}^{t}(\boldsymbol{s})\) defined in (12) for any \(j\in[n]\), any \(t\in[T]\), and any \(\boldsymbol{s}\in\mathcal{S}\).
Note that for each period \(t\) and each state \(\boldsymbol{s}_{t}\), we can view the bid price
\[\sum_{\boldsymbol{s}^{\prime}\in\mathcal{S}}p_{t}(\boldsymbol{s}_{t}, \boldsymbol{s}^{\prime})\cdot\sum_{i\in\mathcal{A}_{j(\boldsymbol{s}_{t})}} \frac{1}{C_{i}}\cdot\sum_{j^{\prime}\in\mathcal{B}_{i}}\nu_{j^{\prime}}^{t+1} (\boldsymbol{s}^{\prime}) \tag{15}\]
as a threshold: we serve customer \(t\) only if its reward \(r_{j(\boldsymbol{s}_{t})}\) exceeds this threshold and there is enough remaining capacity. Denote by \(x_{t}^{\pi}(\boldsymbol{s}_{t})\in\{0,1\}\) the online decision made by our policy \(\pi\) at period \(t\) given state \(\boldsymbol{s}_{t}\). We decompose the total reward collected by policy \(\pi\) into two parts based on the bid price (15). To be specific, we have
\[\begin{split}\sum_{t\in[T]}\mathbbm{1}_{\{x_{t}^{\pi}( \boldsymbol{s}_{t})=1\}}\cdot r_{j(\boldsymbol{s}_{t})}&=\sum_{t \in[T]}\mathbbm{1}_{\{x_{t}^{\pi}(\boldsymbol{s}_{t})=1\}}\cdot\left[r_{j( \boldsymbol{s}_{t})}-\sum_{\boldsymbol{s}^{\prime}\in\mathcal{S}}p_{t}( \boldsymbol{s}_{t},\boldsymbol{s}^{\prime})\cdot\sum_{i\in\mathcal{A}_{j( \boldsymbol{s}_{t})}}\frac{1}{C_{i}}\cdot\sum_{j^{\prime}\in\mathcal{B}_{i}} \nu_{j^{\prime}}^{t+1}(\boldsymbol{s}^{\prime})\right]\\ &\quad+\sum_{t\in[T]}\mathbbm{1}_{\{x_{t}^{\pi}(\boldsymbol{s}_{t })=1\}}\cdot\left(\sum_{\boldsymbol{s}^{\prime}\in\mathcal{S}}p_{t}( \boldsymbol{s}_{t},\boldsymbol{s}^{\prime})\cdot\sum_{i\in\mathcal{A}_{j( \boldsymbol{s}_{t})}}\frac{1}{C_{i}}\cdot\sum_{j^{\prime}\in\mathcal{B}_{i}} \nu_{j^{\prime}}^{t+1}(\boldsymbol{s}^{\prime})\right).\end{split} \tag{16}\]
The first term on the right-hand side of (16) can be further simplified with the decision rule (13), and the second term on the right-hand side of (16) can be further simplified with the bid price update (12). Therefore, we can finally derive that
\[\mathbb{E}\left[\sum_{t\in[T]}\mathbbm{1}_{\{x_{t}^{\pi}(\boldsymbol{s}_{t})=1 \}}\cdot r_{j(\boldsymbol{s}_{t})}\right]\geq\sum_{\boldsymbol{s}\in\mathcal{ S}}p_{1}(\boldsymbol{s})\cdot\sum_{j\in[n]}\nu_{j}^{1}(\boldsymbol{s}).\]
The above arguments are summarized in the following lemma, with the formal proof relegated to Appendix B.
**Lemma 2**: _The expected total reward collected by our Algorithm 1 is lower bounded by_
\[\sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot\sum_{j\in[n]}\nu_{j}^{1}(\mathbf{s}).\]
The proof idea of Lemma 2 is motivated by the analysis of thresholding algorithms for the prophet inequality (e.g., Krengel and Sucheston (1978)), for the NRM problem with reusable resources (e.g., Baek and Ma (2022)), and for other stochastic online optimization problems (e.g., Dutting et al. (2020)), where we regard the bid price as a threshold for each type of customer based on the state, and we analyze the bid price part and the reward beyond the bid price part separately. An alternative way to prove Lemma 2 is based on the DP formulation, where we introduce a basis function \(\psi_{j}(\cdot)\) for each type \(j\in[n]\) and use a linear combination of \(\psi_{j}(\cdot)\) over \(j\in[n]\) to lower bound the total revenue collected by Algorithm 1. The bid price \(\nu_{j}^{t}(\mathbf{s})\) then becomes the coefficient of the basis function \(\psi_{j}(\cdot)\). Such an idea has been developed in Ma et al. (2020) for independent customer arrivals, and we develop it here for correlated customer arrivals, with further details referred to Appendix B. Note that the basis functions are only needed theoretically, to prove a lower bound on the total reward collected by Algorithm 1; the implementation of our Algorithm 1 does not require constructing any basis function. This is another difference (besides the bid price computation) between our Algorithm 1 and the approximate policy in Ma et al. (2020).
### Construct a Feasible Solution to LP (7)
We now construct a feasible solution to LP (7) based on the bid price \(\nu_{j}^{t}(\mathbf{s})\) described in (12). To be specific, we define
\[\hat{\beta}_{i}^{t}(\mathbf{s})=\frac{1}{C_{i}}\cdot\sum_{j\in\mathcal{B}_{i}}\nu_ {j}^{t}(\mathbf{s}) \tag{17}\]
for any \(i\in[m]\), any \(t\in[T]\) and any \(\mathbf{s}\in\mathcal{S}\). Also, starting from \(\hat{\theta}^{T+1}(\mathbf{s})=0\) for any \(\mathbf{s}\in\mathcal{S}\), we iteratively define
\[\hat{\theta}^{t}(\mathbf{s})=\sum_{j\in[n]}\nu_{j}^{t}(\mathbf{s}) \tag{18}\]
for any \(\mathbf{s}\in\mathcal{S}\), for \(t=T,T-1,\ldots,1\). We now provide intuition on why \(\{\hat{\beta}_{i}^{t}(\mathbf{s}),\hat{\theta}^{t}(\mathbf{s}),\forall i\in[m],\forall t \in[T],\forall\mathbf{s}\in\mathcal{S}\}\) defined in (17) and (18) is feasible to LP (7). Note that from the definition in (12), we must have \(\nu_{j}^{t}(\mathbf{s})\geq\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s} ^{\prime})\cdot\nu_{j}^{t+1}(\mathbf{s}^{\prime})\) for any \(j\in[n]\), \(t\in[T]\) and \(\mathbf{s}\in\mathcal{S}\), which implies that
\[\hat{\beta}_{i}^{t}(\mathbf{s})\geq\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\hat{\beta}_{i}^{t+1}(\mathbf{s}^{\prime}).\]
Therefore, in order for the constraint (7b) to be satisfied, it is sufficient to show that
\[\hat{\theta}^{t}(\mathbf{s})-\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^ {\prime})\cdot\hat{\theta}^{t+1}(\mathbf{s}^{\prime})\geq\left[r_{j(\mathbf{s})}-\sum_{ \mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\sum_{i\in[m]}a _{i,j(\mathbf{s})}\cdot\hat{\beta}_{i}^{t+1}(\mathbf{s}^{\prime})\right]^{+}.\]
The above inequality can be directly verified by plugging in the definitions in (17) and (18), and using the backward induction defined in (12). On the other hand, we can show that
\[\sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot\left(\hat{\theta}^{1}(\mathbf{s})+ \sum_{i\in[m]}C_{i}\cdot\hat{\beta}_{i}^{1}(\mathbf{s})\right)\leq(1+L)\cdot\sum_{ \mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot\sum_{j\in[n]}\nu_{j}^{1}(\mathbf{s})\]
with parameter \(L\) defined in (14). We summarize the above arguments in the following lemma, with the formal proof relegated to Appendix B.
**Lemma 3**: _The set of solution \(\{\hat{\beta}_{i}^{t}(\mathbf{s}),\hat{\theta}^{t}(\mathbf{s}),\forall i\in[m],\forall t \in[T],\forall\mathbf{s}\in\mathcal{S}\}\) defined in (17) and (18) is feasible to LP (7). Moreover, it holds that_
\[\sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot\left(\hat{\theta}^{1}(\mathbf{s})+ \sum_{i\in[m]}C_{i}\cdot\hat{\beta}_{i}^{1}(\mathbf{s})\right)\leq(1+L)\cdot\sum_ {\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot\sum_{j\in[n]}\nu_{j}^{1}(\mathbf{s})\]
_with \(L\) defined in (14)._
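For completeness, the verification of constraint (7b) sketched above amounts to a one-line identity: summing the update (12) over \(j\in[n]\) and using the definitions (17) and (18), together with the fact that \(a_{i,j(\mathbf{s})}=1\) exactly when \(i\in\mathcal{A}_{j(\mathbf{s})}\), gives

\[\hat{\theta}^{t}(\mathbf{s})-\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s }^{\prime})\cdot\hat{\theta}^{t+1}(\mathbf{s}^{\prime})=\left[r_{j(\mathbf{s})}-\sum_{ \mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\sum_{i\in[m]} a_{i,j(\mathbf{s})}\cdot\hat{\beta}_{i}^{t+1}(\mathbf{s}^{\prime})\right]^{+},\]

since only the \(j=j(\mathbf{s})\) term of (12) contributes beyond the expected continuation value; this is precisely the first \([\cdot]^{+}\) term in (7b).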
### Proof of the Approximation Ratio Bound
We are now ready to prove the approximation ratio bound of Algorithm 1. Our final result is formalized in the following theorem.
**Theorem 1**: _Let the parameter \(L\) be defined in (14). Denote by \(\mathsf{ALG}\) the expected total reward collected by Algorithm 1 and denote by \(\mathsf{OPT}\) the expected total reward collected by the optimal policy. Then, it holds that_
\[\mathsf{ALG}\geq\frac{1}{1+L}\cdot\mathsf{OPT}.\]
From Lemma 2, we know that
\[\mathsf{ALG}\geq\sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot\sum_{j\in[n]} \nu_{j}^{1}(\mathbf{s})\]
with \(\nu_{j}^{t}(\mathbf{s})\) defined in (12) for any \(j\in[n]\), any \(t\in[T]\), and any \(\mathbf{s}\in\mathcal{S}\). Moreover, from Lemma 1 and Lemma 3, we know that
\[\mathsf{OPT}\leq\hat{V}^{*}\leq\sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot \left(\hat{\theta}^{1}(\mathbf{s})+\sum_{i\in[m]}C_{i}\cdot\hat{\beta}_{i}^{1}(\bm {s})\right)\leq(1+L)\cdot\sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot\sum_{j \in[n]}\nu_{j}^{1}(\mathbf{s})\leq(1+L)\cdot\mathsf{ALG},\]

which completes the proof of Theorem 1.
Note that the approximation ratio bound established in Theorem 1 depends on \(L\). Indeed, with _deterministic_ customer arrivals at each period, our problem reduces to the set packing problem studied in Hazan et al. (2006). It has been shown in Theorem 1 of Hazan et al. (2006) that even if the online policy has the additional power to make revocable decisions, which further reduces the problem to an offline problem, it is NP-hard to approximate the optimal policy with an approximation ratio better than \(\Omega(\log L/L)\). Therefore, the appearance of the parameter \(L\) in our approximation ratio bound is unavoidable.
## 6 Extension to the Choice-based Model
In this section, we extend our model and the performance guarantee to incorporate the choice model behavior of the customers. In the original formulation in Section 2, each customer requests only one product, and the decision maker only needs to decide whether or not to provide the requested product to the customer. We now consider a more general setting where the decision maker offers a subset \(A_{t}\in\mathcal{F}\) of products at each period \(t\), where \(\mathcal{F}\) denotes the collection of all feasible assortments, which contains the empty set \(\emptyset\), and the customer chooses one product from \(A_{t}\) according to its underlying choice model. Note that the customer can leave without purchasing, and we introduce a null product with \(0\) reward and \(0\) consumption of the resources to incorporate the leaving behavior of the customer. We require every assortment \(A_{t}\) to contain the null product, which is indexed by product \(0\). The choice probabilities now depend on the system state \(\boldsymbol{s}_{t}\). To be more concrete, we denote by \(\phi_{j}(A_{t},\boldsymbol{s}_{t})\) the probability that customer \(t\) chooses product \(j\in[n]\) to purchase, given the offered assortment \(A_{t}\) and the system state \(\boldsymbol{s}_{t}\). Since the null product is always included in the assortment \(A_{t}\), we have \(\sum_{j\in A}\phi_{j}(A,\boldsymbol{s})=1\). We assume that the probabilities \(\{\phi_{j}(A,\boldsymbol{s}),\forall j\in[n],\forall A\in\mathcal{F},\forall \boldsymbol{s}\in\mathcal{S}\}\) are given, and the goal of the decision maker is to choose the assortment \(A_{t}\) at each period \(t\in[T]\) to maximize the total collected reward, subject to the capacity constraints of the resources. We adopt the standard substitutability assumption (e.g., Golrezaei et al. (2014)) over the assortments.
**Assumption 1**: _For any system state \(\boldsymbol{s}\in\mathcal{S}\), any assortment \(A\in\mathcal{F}\), any product \(j\in A\) and product \(j^{\prime}\notin A\), it holds that \(\phi_{j}(A,\boldsymbol{s})\geq\phi_{j}(A\cup\{j^{\prime}\},\boldsymbol{s})\). Moreover, if \(A\in\mathcal{F}\), then for any subset \(B\subset A\), it holds \(B\in\mathcal{F}\)._
Denote by \(V_{t}^{*}(\boldsymbol{c},\boldsymbol{s})\) the value to go function at period \(t\), given the remaining capacity \(\boldsymbol{c}\) and the current state \(\boldsymbol{s}\). The backward induction can be given as follows:
\[V_{t}^{*}(\boldsymbol{c},\boldsymbol{s})=\max_{A\in\mathcal{F}(\boldsymbol{c}) }\left\{\sum_{j\in A}\phi_{j}(A,\boldsymbol{s})\cdot\left(r_{j}+\sum_{ \boldsymbol{s}^{\prime}\in\mathcal{S}}p_{t}(\boldsymbol{s},\boldsymbol{s}^{ \prime})\cdot V_{t+1}^{*}(\boldsymbol{c}-\boldsymbol{a}_{j},\boldsymbol{s}^{ \prime})\right)\right\}, \tag{19}\]
where \(\mathcal{F}(\boldsymbol{c})\subset\mathcal{F}\) denotes the collection of assortments for which there is enough remaining capacity for every product contained in the assortment. Again, we adopt the linear approximation (6) to approximate the value of \(V_{t}^{*}(\mathbf{c},\mathbf{s})\), and the upper bound LP (7) can now be reformulated as follows.
\[\min \sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot\left(\theta^{1}(\mathbf{s })+\sum_{i\in[m]}C_{i}\cdot\beta_{i}^{1}(\mathbf{s})\right)\] (20a) s.t. \[\theta^{t}(\mathbf{s})-\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\bm {s},\mathbf{s}^{\prime})\cdot\theta^{t+1}(\mathbf{s}^{\prime})\geq\sum_{j\in A}\phi_{j }(A,\mathbf{s})\cdot\left[r_{j}-\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s}, \mathbf{s}^{\prime})\cdot\sum_{i\in[m]}a_{i,j}\cdot\beta_{i}^{t+1}(\mathbf{s}^{\prime })\right]^{+}\] \[\qquad\qquad+\sum_{i\in[m]}C_{i}\cdot\left[\sum_{\mathbf{s}^{\prime} \in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\beta_{i}^{t+1}(\mathbf{s}^{ \prime})-\beta_{i}^{t}(\mathbf{s})\right]^{+},\qquad\forall t\in[T],\forall\mathbf{s} \in\mathcal{S},\forall A\in\mathcal{F} \tag{20b}\] \[\theta^{t}(\mathbf{s})\geq 0,\beta_{i}^{t}(\mathbf{s})\geq 0,\forall i \in[m],\qquad\forall t\in[T],\forall\mathbf{s}\in\mathcal{S}. \tag{20c}\]
We show in the following lemma that the optimal objective value of LP (20) is an upper bound of the DP value \(\sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot V_{1}^{*}(\mathbf{C},\mathbf{s})\), with formal proof relegated to Appendix C.
**Lemma 4**: _It holds that LP (20) \(\geq\sum_{\mathbf{s}\in\mathcal{S}}p_{1}(\mathbf{s})\cdot V_{1}^{*}(\mathbf{C},\mathbf{s})\)._
We now derive our policy. We still assign a bid price \(\nu_{j}^{t}(\mathbf{s})\) for each product \(j\in[n]\), each state \(\mathbf{s}\in\mathcal{S}\) and each period \(t\in[T]\). The bid price \(\nu_{j}^{t}(\mathbf{s})\) is computed in a backward induction from \(T\) to \(1\). To be specific, we set \(\nu_{j}^{T+1}(\mathbf{s})=0\) for any \(j\in[n]\) and any \(\mathbf{s}\in\mathcal{S}\). Then, for \(t=T,T-1,\ldots,1\), we compute iteratively
\[\nu_{j}^{t}(\mathbf{s})=\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{ \prime})\cdot\nu_{j}^{t+1}(\mathbf{s}^{\prime})+\mathbbm{1}_{\{j\in\hat{A}_{t}(\bm {s})\}}\cdot\phi_{j}(\hat{A}_{t}(\mathbf{s}),\mathbf{s})\cdot\left[r_{j}-\sum_{\mathbf{s}^{ \prime}\in\mathcal{S}}p_{t}(\mathbf{s},\mathbf{s}^{\prime})\cdot\sum_{i\in\mathcal{A}_ {j}}\frac{1}{C_{i}}\cdot\sum_{j^{\prime}\in\mathcal{B}_{i}}\nu_{j^{\prime}}^{t +1}(\mathbf{s}^{\prime})\right]^{+} \tag{21}\]
for every \(j\in[n]\) and every \(\mathbf{s}\in\mathcal{S}\), where the set \(\hat{A}_{t}(\mathbf{s})\) is defined as follows
\[\hat{A}_{t}(\mathbf{s})=\operatorname*{argmax}_{A\in\mathcal{F}}\left\{\sum_{j\in A }\phi_{j}(A,\mathbf{s})\cdot\left[r_{j}-\sum_{\mathbf{s}^{\prime}\in\mathcal{S}}p_{t} (\mathbf{s},\mathbf{s}^{\prime})\cdot\sum_{i\in\mathcal{A}_{j}}\frac{1}{C_{i}}\cdot \sum_{j^{\prime}\in\mathcal{B}_{i}}\nu_{j^{\prime}}^{t+1}(\mathbf{s}^{\prime}) \right]^{+}\right\}. \tag{22}\]
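For small instances, (22) can be evaluated by brute force. Below is a hypothetical Python sketch of this step; for brevity it takes \(\mathcal{F}\) to be all subsets of a product list, omits the null-product bookkeeping, and assumes a user-supplied map `phi(A, s)` from products to choice probabilities.

```python
from itertools import combinations

def best_assortment(t, s, nu, p, phi, r, products, A_sets, B_sets, C, S):
    """Brute-force evaluation of (22) under the simplifying assumptions above."""
    def opp(j, s2):  # per-product opportunity-cost proxy at next state s2
        return sum(sum(nu[t + 1][s2][jp] for jp in B_sets[i]) / C[i]
                   for i in A_sets[j])
    def score(A):
        probs = phi(A, s)
        return sum(probs[j] * max(r[j] - sum(p[t][s][s2] * opp(j, s2)
                                             for s2 in S), 0.0)
                   for j in A)
    subsets = (set(cmb) for k in range(len(products) + 1)
               for cmb in combinations(products, k))
    return max(subsets, key=score)
```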
Our policy is formally described in Algorithm 2. We have the following approximation ratio bound regarding the policy in Algorithm 2, with the formal proof relegated to Appendix C.
**Theorem 2**: _Let the parameter \(L\) be defined in (14). Denote by \(\mathsf{ALG}\) the expected total reward collected by Algorithm 2 and denote by \(\mathsf{OPT}\) the expected total reward collected by the optimal policy. Then, it holds that_
\[\mathsf{ALG}\geq\frac{1}{1+L}\cdot\mathsf{OPT}.\]
```
1:Compute the bid prices \(\nu_{j}^{t}(\mathbf{s})\) for any \(j\in[n]\), any \(t\in[T]\) and any \(\mathbf{s}\in\mathcal{S}\), following the backward induction equation described in (21).
2:for t=1,...,T do
3: observe the state \(\mathbf{s}_{t}\) and the remaining capacities \(\mathbf{c}_{t}\).
4: compute the assortment \(\hat{A}_{t}(\mathbf{s}_{t})\) as defined in (22).
5: offer the assortment
\[A_{t}(\mathbf{s}_{t})=\{j\in\hat{A}_{t}(\mathbf{s}_{t}):\mathbf{c}_{t}\geq\mathbf{a}_{j}\} \tag{23}\]
to customer \(t\).
6:endfor
```
**Algorithm 2** Assortment Bid Price Policy
## 7 Concluding Remarks
We consider the NRM problem with correlated customer arrivals. Our contributions are threefold. First, we propose a new model that assumes the existence of a system state, which determines customer arrivals for the current period. This system state evolves over time according to a time-inhomogeneous Markov chain. Our model can represent correlation in various settings and synthesizes previous literature on correlation models. Second, we develop a new LP approximation of the optimal policy. Our approximation is motivated by the approximate DP literature and serves as a tighter upper bound on the expected total reward collected by the optimal policy than the existing upper bounds in the literature for correlated customer arrivals. Third, we develop a new bid price policy and show that it enjoys an approximation ratio bound of \(1/(1+L)\). Finally, we extend all our results to the assortment setting, where the decision maker offers an assortment to the customer at each period and the customer chooses one product to purchase according to its underlying choice model. This extension is important because it captures the general scenario where customers have different preferences and can choose from a variety of products. There are multiple directions in which to further extend our results. For example, one may consider a reusable resource setting where each unit of resource is returned after a certain period. One may also consider an overbooking setting where customers may reserve some resources but eventually do not consume them. We leave these interesting topics for future research.
## Acknowledgments
We thank Stefanus Jasin and Billy Jin for helpful discussions of the project. We would also like to thank Rajan Udwani and Huseyin Topaloglu for their helpful feedback and comments on the paper.
|
2307.04396 | Diffusion and fluctuations of open charmed hadrons in an interacting
hadronic medium | Heavy quarks are excellent probes to understand the hot and dense medium
formed in ultra-relativistic collisions. In a hadronic medium, studying the
transport properties, e.g. the drag ($\gamma$), momentum diffusion ($B_{0}$),
and spatial diffusion ($D_{s}$) coefficients of open charmed hadrons can
provide useful information about the medium. Moreover, the fluctuations of
charmed hadrons can help us to locate the onset of their deconfinement. In this
work, we incorporate attractive and repulsive interactions in the
well-established van der Waals hadron resonance gas model (VDWHRG) and study
the diffusion and fluctuations of charmed hadrons. This study helps us
understand the importance of interactions in the system, which affect both the
diffusion and fluctuations of charmed hadrons. | Kangkan Goswami, Kshitish Kumar Pradhan, Dushmanta Sahu, Raghunath Sahoo | 2023-07-10T07:57:46Z | http://arxiv.org/abs/2307.04396v2 | # Diffusion and fluctuations of open charmed hadrons in an interacting hadronic medium
###### Abstract
Heavy quarks are excellent probes to understand the hot and dense medium formed in ultra-relativistic collisions. In a hadronic medium, studying the transport properties, e.g. the drag (\(\gamma\)), momentum diffusion (\(B_{0}\)), and spatial diffusion (\(D_{s}\)) coefficients of open charmed hadrons can provide useful information about the medium. Moreover, the fluctuations of charmed hadrons can help us to locate the onset of their deconfinement. In this work, we incorporate attractive and repulsive interactions in the well-established van der Waals hadron resonance gas model (VDWHRG) and study the diffusion and fluctuations of charmed hadrons. This study helps us understand the importance of interactions in the system, which significantly affect both the diffusion and fluctuations of charmed hadrons.
+
Footnote †: preprint:
## I Introduction
In a quest to explore the deconfined medium of partons and to create early-universe-like conditions, ultra-relativistic heavy ions are collided at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). Under such extreme conditions, an asymptotically free and locally thermalized system of deconfined partons is formed, called the quark-gluon plasma (QGP). Understanding the dynamics and interactions of such a medium is interesting yet challenging, but estimating the thermodynamic and transport properties of the system formed in heavy-ion collisions can help us understand the medium better. One of the most effective probes of the strongly interacting medium is the heavy quarks (HQs), since heavy quark-antiquark pairs are produced in the initial hard scatterings. These relatively heavy quarks undergo Brownian motion in a medium of thermalized light-flavor quarks. Due to their large masses, their relaxation time is larger than the lifetime of the QGP, so intermediate- and high-\(p_{T}\) heavy quarks do not thermalize in the medium [1]. According to phenomenological estimations, the QGP lifetime is 4 - 5 fm/c at RHIC [2] and 10 - 12 fm/c at the LHC [3], whereas the thermalization time of charm quarks is of the order of 10 - 15 fm/c. Their mass is also much greater than the temperature of the system; thus, the probability of their production or annihilation within the medium is almost negligible. Hence, the number of HQs is essentially conserved as they traverse the medium, but their momenta get modified due to interactions with the thermalized lighter quarks. These quarks hadronize around the critical temperature to form open-charm or open-bottom hadrons. However, it is crucial to note that the momentum spectra of these hadrons undergo significant modification in the hadronic medium. Exploring the diffusion of these hadrons helps to separate the contribution of the hadronic sector from that of the deconfined phase. Thus, along with the charm quark, the study of the drag and diffusion of the \(D^{0}\) meson (\(c\bar{u}\)) is of utmost importance.
Semi-leptonic decays of heavy mesons produce electrons that serve as a means to investigate the dynamics of the heavy mesons. Generally, the nuclear suppression factor and elliptic flow of these electrons are analyzed to understand the drag and diffusion of the heavy mesons. Such experimental studies have been performed at RHIC and the LHC [4; 5]. For the charm sector, the nuclear suppression factor and elliptic flow of \(D^{0}\) mesons have been measured at ALICE [6; 7]. Theoretically, many studies have explored the dynamics of heavy mesons in the hadronic medium. The \(D^{0}\) meson is significantly heavier than the light hadrons of the thermal bath, and its mass is much greater than the temperature of the system. One can exploit this fact to study the diffusion of \(D^{0}\) mesons in a dynamically changing medium, which can be mathematically described by the Fokker-Planck equation [8].
In recent times, this idea has been utilized to study the diffusion of open charmed states as well as the charm quark in thermalized hadronic and partonic media, respectively. In our previous work, we employed the Color string percolation model to study the drag and diffusion coefficients of the charm quark in the deconfined medium [9]. HQ diffusion has also been studied extensively using perturbative QCD (pQCD) theory at leading order (LO) and next-to-leading order (NLO) [8; 10; 11]. There are also modifications of the pQCD theory, with the addition of "hadronic" states in the deconfined phase, which show better agreement with other models [12]. In Refs. [13; 14], the authors take the lattice QCD (lQCD) approach to study the drag and diffusion of the charm quark. Moreover, T-matrix calculations explore the diffusion of charm quarks in the deconfined medium [15]. On the other hand, in the hadronic sector, the interaction of the \(D^{0}\) meson has also been studied using Born amplitudes [16], and the spatial diffusion coefficient is found to exhibit a smooth transition from the hadronic to the partonic medium. The drag and diffusion coefficients have also been evaluated in the framework of chiral perturbation theory (ChPT) [17] and heavy meson chiral perturbation theory [18]. Moreover, the ideal hadron gas approach has been used to estimate the \(D^{0}\) meson diffusion in Ref. [19]. In a recent work, the authors explore the magnetic field dependence of the \(D^{+}\) meson diffusion using the fluctuation-dissipation theorem [20].
Apart from studying the drag and diffusion of an open charmed hadron, one can also explore the melting of charmed hadrons in the medium to understand it better. This may be done by studying the fluctuations of open charmed hadrons. As the hot and dense medium expands violently, fluctuations in locally conserved quantities, e.g., the net baryon number, electric charge, and strangeness, behave differently in the hadronic medium compared to a deconfined medium of quarks and gluons. In the hadronic medium, the baryon number carried by the particles is \(0\) or \(\pm 1\); however, for the QGP medium, it is only \(\pm\frac{1}{3}\). Thus, a particle entering or leaving a sub-volume produces quantitatively different fluctuations in the hadronic medium as compared to a QGP medium [21]. These fluctuations show non-monotonic behavior at the phase boundary and hence can act as a potential probe to locate the phase boundary in the QCD phase diagram. This has been widely used to explore the temperature at which the degrees of freedom change. Previously, studies [21; 22] of fluctuations in the net baryon number and the electric charge have been carried out to explore the emergence of the deconfined medium and as a probe of chiral symmetry restoration. Similarly, net strangeness fluctuations and the appropriate ratios of their cumulants and cross-correlations give us an idea about the melting of strange mesons and baryons near the transition temperature [23; 24; 25]. Fluctuations of net baryon number, electric charge, and strangeness have been explored broadly in the hybrid Polyakov-Nambu-Jona-Lasinio model [29; 30], the Polyakov linear-\(\sigma\) model [31], the van der Waals hadron resonance gas model (VDWHRG) [32], and the functional renormalization group approach [33]. Likewise, to understand the transition from charmed hadrons to charm quarks, one of the key methods is to investigate the melting point of the charmed hadrons. One can deploy the same strategy to probe the melting of open charm hadrons, since it is well established that charmonium states survive well above \(T_{c}\)[27; 28]. However, charm number fluctuations are rarely studied, making them a very intriguing topic of interest. In Ref. [27], Bazavov et al. estimated the open charm fluctuations using lQCD. Thus, it is interesting to compare the results from other phenomenological models with the lQCD results.
The ideal hadron resonance gas (IHRG) model is a simple statistical model which successfully explains the lQCD results up to temperatures of 140-150 MeV. But near the transition temperature, the hadrons start to melt, and this model breaks down. There are various improvements to the IHRG model, such as the excluded volume hadron resonance gas (EVHRG) model, where the finite volume takes care of the repulsive interaction due to the hardcore radius of the hadrons. Recently, Vovchenko et al. [32] found that incorporating the van der Waals interaction between the hadrons improves the agreement with the lQCD results near the transition temperature. This van der Waals hadron resonance gas (VDWHRG) model has been used to explore various thermodynamic and transport properties of hadronic matter [34; 35; 36; 37; 38; 39]. In this work, we study the diffusion of the \(D^{0}\) meson, the net charm fluctuations, and their correlations with the net baryon number, electric charge, and strangeness fluctuations using the van der Waals hadron resonance gas model. In section II, we briefly describe the formulation of the van der Waals HRG model. In section III, we present the results for the diffusion of the \(D^{0}\) meson in an interacting hadronic medium. In section IV, we briefly discuss the melting of open charm hadrons and present our results. Finally, we discuss and summarize our results in section V.
## II Van der Waals hadron resonance gas model (VDWHRG)
The ideal HRG model is a thermally and chemically equilibrated statistical model consisting of non-interacting point-like hadrons. It successfully reproduces various thermodynamic quantities obtained from lQCD calculations [40]. In addition, the IHRG model can be extended to the high baryochemical potential regime, where the applicability of lQCD breaks down due to the fermion sign problem [40; 41]. However, some disagreement with the lQCD data is observed near the critical temperature, arising mainly in the higher-order conserved charge fluctuations. A way out of this disagreement is to introduce interactions among hadrons at high temperatures, which capture the qualitative features of the strong interaction that become much more significant as the temperature approaches \(T_{c}\). To include the short-range repulsive interactions, one assigns a finite hardcore radius to all the hadrons, giving them a finite volume. This gives rise to the excluded volume HRG model (EVHRG). Although it improves the results near the critical temperature, this model ignores the long-range attractive interactions. The van der Waals HRG model (VDWHRG) takes care of both attractive and repulsive interactions by introducing the \(a\) and \(b\) parameters, respectively. In the VDWHRG model [32], the authors assume that interactions exist only between baryon-baryon and antibaryon-antibaryon pairs; the interactions between meson-meson, baryon-antibaryon, and meson-(anti)baryon pairs are not considered. One can safely exclude the short-range baryon-antibaryon interactions, as they are dominated by annihilation processes [42]. The interactions between mesons are neglected because a meson eigenvolume leads to a substantial suppression of the thermodynamic observables, in disagreement with lQCD results near the critical temperature at vanishing chemical potential. However, in recent years, the meson-meson repulsive interaction has been included in the model by choosing a finite hardcore radius for the mesons, \(r_{M}\)[35]. Moreover, the attractive interaction among mesons leads to resonance formation, which is already present in the HRG model [43; 44] and hence is not included separately in the formalism.
Owing to the number fluctuation, the system created in a relativistic heavy-ion collision resembles the grand canonical ensemble (GCE). In the ideal HRG model, the grand canonical partition function of the \(i^{th}\) hadronic species can be expressed as [42]
\[lnZ_{i}^{id}=\pm\frac{Vg_{i}}{2\pi^{2}}\int_{0}^{\infty}p^{2}dp\ ln\{1\pm\exp[-( E_{i}-\mu_{i})/T]\}, \tag{1}\]
where \(g_{i}\), \(E_{i}\), and \(\mu_{i}\) are the degeneracy, energy and chemical potential of the \(i^{th}\) hadron, respectively. The energy of the \(i^{th}\) hadronic species is given as \(E_{i}=\sqrt{p^{2}+m_{i}^{2}}\), and \(\mu_{i}\) can be further expanded in terms of the baryonic, strangeness, charge, charm chemical potentials and the corresponding conserved numbers as,
\[\mu_{i}=B_{i}\mu_{B}+S_{i}\mu_{S}+Q_{i}\mu_{Q}+C_{i}\mu_{C}, \tag{2}\]
where \(B_{i}\), \(S_{i}\), \(Q_{i}\), and \(C_{i}\) are, respectively, the baryon number, strangeness, electric charge, and charm quantum number of \(i^{th}\) hadron. In the ideal HRG formalism, pressure \(P_{i}^{id}\), and number density \(n_{i}^{id}\) of an ideal hadron gas in the GCE can be written as,
\[P_{i}^{id}(T,\mu_{i})=\pm\frac{Tg_{i}}{2\pi^{2}}\int_{0}^{\infty}p^{2}dp\ ln\{1\pm\exp[-(E_{i}-\mu_{i})/T]\} \tag{3}\]
\[n_{i}^{id}(T,\mu_{i})=\frac{g_{i}}{2\pi^{2}}\int_{0}^{\infty}\frac{p^{2}dp}{ \exp[(E_{i}-\mu_{i})/T]\pm 1} \tag{4}\]
To introduce van der Waals interaction, we start with the van der Waals equation of state in the canonical ensemble, which reads,
\[\left(P+a\left(\frac{N}{V}\right)^{2}\right)(V-bN)=NT, \tag{5}\]
where \(P\), \(N\), \(V\), and \(T\) are the pressure, number of particles, volume, and temperature of the system, respectively. Here, \(a\) and \(b\) are the van der Waals parameters, with \(b\) being the eigenvolume of the hadron, \(b=\frac{16}{3}\pi r^{3}\), where \(r\) is the hardcore radius of the hadron. The parameters \(a\) and \(b\) are determined by simultaneously fitting the thermodynamic quantities obtained from lattice calculations [35]. For our study, we choose \(a=0.926\) GeV fm\({}^{3}\), and for the parameter \(b\), the hardcore radii of mesons and baryons (antibaryons) are taken as \(r_{M}=0.2\) fm and \(r_{B(\bar{B})}=0.62\) fm, respectively.
\[P(T,n)=\frac{nT}{1-bn}-an^{2} \tag{6}\]
In the GCE, we can express pressure as [45; 46],
\[P(T,\mu)=P^{id}(T,\mu^{*})-an^{2} \tag{7}\]
where \(n\) is the number density calculated within the VDWHRG model and \(\mu^{*}\) is the effective chemical potential; they are given, respectively, by
\[n(T,\mu)=\frac{\sum_{i}n_{i}^{id}(T,\mu^{*})}{1+b\sum_{i}n_{i}^{id}(T,\mu^{*})} \tag{8}\]
\[\mu^{*}=\mu-bP(T,\mu)-abn^{2}(T,\mu)+2an(T,\mu). \tag{9}\]
The total pressure in the VDWHRG model can be written as,
\[P(T,\mu)=P_{M}(T,\mu)+P_{B}(T,\mu)+P_{\bar{B}}(T,\mu), \tag{10}\]
where \(P_{M}(T,\mu)\), \(P_{B}(T,\mu)\), and \(P_{\bar{B}}(T,\mu)\) are the pressures of the three subsystems defined in a VDW hadron gas: mesons with repulsive interactions, and baryons and antibaryons with VDW interactions, respectively. The pressures of these subsystems can further be expressed as,
\[P_{M}(T,\mu)=\sum_{i\in M}P_{i}^{id}(T,\mu^{*M}) \tag{11}\]
\[P_{B}(T,\mu)=\sum_{i\in B}P_{i}^{id}(T,\mu^{*B})-an_{B}^{2}(T,\mu) \tag{12}\]
\[P_{\bar{B}}(T,\mu)=\sum_{i\in\bar{B}}P_{i}^{id}(T,\mu^{*B})-an_{\bar{B}}^{2}( T,\mu), \tag{13}\]
where \(M\), \(B\), and \(\bar{B}\) stand for mesons, baryons, and antibaryons, respectively. \(\mu^{*M}\) and \(\mu^{*B(\bar{B})}\) are the effective chemical potentials for mesons and baryons (antibaryons), respectively. Considering vanishing chemical potentials corresponding to the electric charge, strangeness, and charm quantum numbers, i.e., \(\mu_{Q}=\mu_{S}=\mu_{C}=0\), the effective chemical potentials for mesons and baryons can be expressed as,
\[\mu_{M}^{*}=-bP_{M}(T,\mu) \tag{14}\]
\[\mu_{B(\bar{B})}^{*}=\mu_{B(\bar{B})}-bP_{B(\bar{B})}(T,\mu)-abn_{B(\bar{B})}^{2}+2an_{B(\bar{B})}, \tag{15}\]
where \(n_{M}\) and \(n_{B(\bar{B})}\) are the number densities of mesons and baryons (antibaryons) in a VDW hadron gas and are given by Eq. (8).
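As an illustration of how Eqs. (3), (4), (8), and (14) can be solved in practice, the following is a minimal single-species sketch (a pion-like Bose gas with only the repulsive meson term, in GeV-fm units); the species choice and function names are our illustrative assumptions, while the full model sums over the complete hadron spectrum.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 0.19733                      # GeV*fm; converts GeV^3 to fm^-3
B_M = 16.0 / 3.0 * np.pi * 0.2**3    # meson eigenvolume b, r_M = 0.2 fm

def n_ideal(T, mu, m=0.138, g=3.0):
    """Ideal Bose number density, Eq. (4), in fm^-3 (pion-like species)."""
    f = lambda q: q**2 / (np.exp((np.sqrt(q*q + m*m) - mu) / T) - 1.0)
    return g / (2.0 * np.pi**2) * quad(f, 0.0, 50.0 * T)[0] / HBARC**3

def p_ideal(T, mu, m=0.138, g=3.0):
    """Ideal Bose pressure, Eq. (3), in GeV/fm^3."""
    f = lambda q: q**2 * np.log(1.0 - np.exp(-(np.sqrt(q*q + m*m) - mu) / T))
    return -g * T / (2.0 * np.pi**2) * quad(f, 0.0, 50.0 * T)[0] / HBARC**3

def vdw_meson_density(T, b=B_M):
    """Fixed point of Eq. (14), mu*_M = -b P_M(T, mu*_M), then Eq. (8)."""
    mu_star = 0.0
    for _ in range(100):
        mu_new = -b * p_ideal(T, mu_star)
        if abs(mu_new - mu_star) < 1e-12:
            break
        mu_star = mu_new
    nid = n_ideal(T, mu_star)
    return nid / (1.0 + b * nid)     # fm^-3

# e.g., vdw_meson_density(0.150) is mildly suppressed relative to n_ideal(0.150, 0.0)
```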
Using the above VDWHRG formalism, we estimate the drag and diffusion coefficient of the \(D^{0}\) meson in an interacting hadron gas in the following section.
## III Drag and diffusion of \(D^{0}\) meson
The charmed states are considerably heavier than the constituents of a thermal bath of light hadrons, which consists mainly of pions, kaons, and protons. Hidden-charm mesons like \(J/\psi\) have much smaller scattering cross-sections in the hadronic medium [47] than open charmed mesons such as the \(D^{0}\). Thus, the \(D^{0}\) meson diffuses substantially more than the \(J/\psi\) in the hadronic medium. This affects the elliptic flow of the \(D^{0}\), while the \(v_{2}\) of the \(J/\psi\) remains unaffected, giving unfiltered information about the QGP phase. The interactions in the hadronic medium therefore make the \(D^{0}\) meson an interesting probe of the hadron gas. Owing to the large mass difference, it is well established that one can reduce the Boltzmann transport equation, or Boltzmann-Uehling-Uhlenbeck (BUU) equation, to the Fokker-Planck equation to study the dynamics of the heavy meson in the hadronic medium. Although the Fokker-Planck and BUU methodologies exhibit significant differences, the drag and diffusion coefficients computed using these formalisms agree with each other considerably well [53].
The Fokker-Planck equation is given as,
\[\frac{\partial f(t,\mathbf{p})}{\partial t}=\frac{\partial}{\partial p^{i}} \bigg{[}(A^{i}(\mathbf{p})f(t,\mathbf{p}))+\frac{\partial}{\partial p^{j}}(B ^{ij}(\mathbf{p})f(t,\mathbf{p}))\bigg{]}, \tag{16}\]
where \(f(t,\mathbf{p})\) is the momentum-space distribution of the \(D^{0}\) meson, and \(i\), \(j\) = 1, 2, 3 are the spatial indices. The collision kernels \(A^{i}(\mathbf{p})\) and \(B^{ij}(\mathbf{p})\) are given by [19],
\[A^{i}(\mathbf{p})=\int d\mathbf{k}\ \omega(\mathbf{p},\mathbf{k})k^{i}, \tag{17}\]
\[B^{ij}(\mathbf{p})=\frac{1}{2}\int d\mathbf{k}\ \omega(\mathbf{p},\mathbf{k})k^{ i}k^{j}. \tag{18}\]
Here, \(\omega(\mathbf{p},\mathbf{k})\) is the collision rate of the \(D^{0}\) meson with initial momentum \(\mathbf{p}\), transferred momentum \(\mathbf{k}\), and final momentum \(\mathbf{p}\)-\(\mathbf{k}\). Considering an isotropic medium and taking the static limit \(p\to 0\), where \(p\) is the relative momentum of the \(D^{0}\) meson with respect to the thermal bath, the collision kernels can be expressed in terms of drag and momentum diffusion coefficients as [19],
\[A_{i}=\gamma p_{i}, \tag{19}\]
\[B_{ij}=B_{0}P_{ij}^{\perp}+B_{1}P_{ij}^{\parallel}, \tag{20}\]
where \(\gamma\) is the drag coefficient, and \(B_{0}\) and \(B_{1}\) are the transverse and longitudinal momentum diffusion coefficients, respectively. \(P_{ij}^{\perp}\) and \(P_{ij}^{\parallel}\) are the perpendicular and parallel components of the projection operator. The \(D^{0}\) meson undergoes Brownian motion in a thermal bath of lighter hadrons, losing its momentum. The average momentum of the \(D^{0}\) meson in the hadronic medium can be expressed as [49],
\[\langle p\rangle=\frac{\int_{-\infty}^{\infty}dp\ p\,f(t,p)}{\int_{-\infty}^{ \infty}dp\ f(t,p)}=p_{0}\ e^{-\frac{t}{\tau}} \tag{21}\]
where \(\tau\) is the relaxation time of the \(D^{0}\) meson and \(p_{0}\) is its initial momentum. The relaxation time of the \(D^{0}\) meson is related to the drag coefficient as \(\tau=1/\gamma\)[49].
In accordance with the widely used relaxation time approximation [50], the relaxation time for \(D^{0}\) meson in a hadron gas can be expressed as,
\[\tau^{-1}=\sum_{j}n_{j}\langle\sigma_{j}v_{j}\rangle, \tag{22}\]
where \(n_{j}\) is the number density of the \(j^{th}\) hadronic species, and \(\sigma_{j}\) and \(v_{j}\) are the cross-section and relative velocity between the \(j^{th}\) hadronic species and the \(D^{0}\) meson. Their thermal average can be approximated as [51],
\[\langle\sigma_{j}v_{j}\rangle=\frac{\sigma_{Dj}}{8Tm_{D}^{2}m_{j }^{2}K_{2}(\frac{m_{D}}{T})K_{2}(\frac{m_{j}}{T})}\int_{(m_{D}+m_{j})^{2}}^{ \infty}\\ ds\frac{s-(m_{D}-m_{j})^{2}}{\sqrt{s}}(s-(m_{D}+m_{j})^{2})K_{1}( \frac{\sqrt{s}}{T}). \tag{23}\]
Figure 1: Drag coefficient as a function of temperature with different mass cutoffs.

Here, \(m_{j}\) is the mass of the \(j^{th}\) species of hadron, \(s=(p_{D}+p_{j})^{2}\) is the Mandelstam variable, and \(K_{n}\) is the modified Bessel function of order \(n\). Following Refs. [19; 52], the \(Dm\to Dm\) and \(DB(\overline{B})\to DB(\overline{B})\) elastic scattering cross sections are taken as \(\sigma=10\) mb and \(\sigma=15\) mb, respectively, where \(m\), \(B\), and \(\overline{B}\) denote mesons, baryons, and antibaryons, respectively. On estimating \(\tau^{-1}\), we obtain the drag coefficient, \(\gamma\), as a function of temperature. In an isotropic medium and in the static limit (\(p\to 0\)), the transverse and longitudinal momentum diffusion coefficients satisfy \(B_{0}=B_{1}\). This coefficient describes the broadening of the momentum spectra of final-state hadrons. From Einstein's relation, we can express the momentum diffusion coefficient in terms of the drag coefficient, temperature, and mass of the \(D^{0}\) meson as [53],
\[B_{0}=\gamma m_{D}T, \tag{24}\]
where \(m_{D}\) is the mass of the \(D^{0}\) meson. Finally, we estimate the spatial diffusion coefficient, \(D_{s}\), to understand the \(D^{0}\) meson diffusion in coordinate space. The mean squared displacement of the \(D^{0}\) meson as a function of time is given as [49],
\[\langle(x(t)-x(t=0))^{2}\rangle=2D_{s}t \tag{25}\]
It can be understood as the speed of \(D^{0}\) diffusion in space in the hadronic medium. In the static limit, \(D_{s}\) can be obtained as,
\[D_{s}=\frac{T}{m_{D}\gamma}. \tag{26}\]
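For concreteness, a compact numerical sketch of Eqs. (22)-(26) is given below; the bath composition passed to `transport_coefficients` is a hypothetical input (e.g., the VDWHRG densities of Sec. II), and the constant cross sections are those quoted above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

MB_TO_GEV2 = 2.568       # 1 mb = 0.1 fm^2 = 2.568 GeV^-2
HBARC = 0.19733          # GeV*fm

def sigma_v(T, m_D, m_j, sigma_mb):
    """Thermally averaged cross section <sigma_j v_j>, Eq. (23), in GeV^-2."""
    sig = sigma_mb * MB_TO_GEV2
    s0 = (m_D + m_j)**2
    pref = sig / (8.0 * T * m_D**2 * m_j**2 * kn(2, m_D / T) * kn(2, m_j / T))
    f = lambda s: ((s - (m_D - m_j)**2) / np.sqrt(s)
                   * (s - s0) * kn(1, np.sqrt(s) / T))
    # the integrand dies off as exp(-sqrt(s)/T); a finite upper cut suffices
    return pref * quad(f, s0, (np.sqrt(s0) + 30.0 * T)**2)[0]

def transport_coefficients(T, bath, m_D=1.865):
    """gamma (Eq. 22), B0 (Eq. 24), and 2*pi*T*D_s (Eq. 26) for a toy bath.
    bath: list of (n_j [fm^-3], m_j [GeV], sigma [mb]) tuples."""
    gamma = sum(n * HBARC**3 * sigma_v(T, m_D, m, sig)   # n -> GeV^3
                for n, m, sig in bath)                   # gamma in GeV
    B0 = gamma * m_D * T                                 # GeV^3
    two_pi_T_Ds = 2.0 * np.pi * T**2 / (m_D * gamma)     # dimensionless
    return gamma, B0, two_pi_T_Ds

# e.g., transport_coefficients(0.150, [(0.1, 0.138, 10.0)])  # pion-only bath
```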
In Fig. 1, we show the \(D^{0}\) meson drag coefficient, \(\gamma\), in a thermal bath with different mass cutoffs, computed using Eq. (22) with number densities estimated in the van der Waals formalism. For a pure pion gas, the \(D^{0}\) meson drag coefficient increases as a function of temperature. The trend remains the same for a gas of pions, kaons, and protons, but the magnitude of \(\gamma\) increases: compared with a pion gas, the \(D^{0}\) meson undergoes significantly more interactions in a gas with more light-hadron species. With a mass cutoff of 1 GeV for the hadrons in the medium, the drag increases further compared to the \((\pi+K+p)\) gas. Finally, for a 1.2 GeV mass cutoff, the drag coefficient changes little compared to the 1.0 GeV case, which can be attributed to the negligible contribution of heavier hadrons due to their reduced number densities in the hadron gas. For further calculations in this section, we use a mass cutoff of 1.2 GeV.
In the left panel of Fig. 2, we study the variation of the drag coefficient with temperature for various HRG models. We compare the values obtained from the ideal HRG, EVHRG, and VDWHRG models. The ideal HRG model, where the number density of the hadronic medium is the highest, gives higher drag coefficient values. On the other hand, for the EVHRG model, due to the inclusion of the hardcore radius, the number density is suppressed. This causes the drag coefficient to be lower. However, the drag coefficient estimated using the VDWHRG model is slightly higher than in the EVHRG model due to the attractive interaction, which compensates for some repulsive effects. In the right panel of Fig. 2, we compare our VDWHRG estimation of the drag coefficient with different phenomenological models. Ghosh et al. [16] use an effective field theory to study the interaction between open charm mesons in a hot hadronic medium comprising pions, nucleons, kaons, and eta mesons. Using the Kadanoff-Baym approach, Torres-Rincon et al. [54] derived the off-shell Fokker-Planck equation that encodes the heavy-flavor transport coefficients. The drag and diffusion coefficients of \(D^{0}\) mesons in hadronic matter as functions of the momentum of the \(D^{0}\) meson and the temperature of the medium at zero chemical potential have been computed by Ozvenchuk et al. [19]. Our estimation of the drag coefficient agrees well with these models.
We study the variation of the transverse momentum diffusion coefficient as a function of temperature in Fig. 3. The transverse momentum diffusion coefficient accounts for the broadening of the momentum spectra. A correlation has been observed between temperature, \(T\), and the transverse momentum diffusion coefficient, \(B_{0}\), showing that a rise in temperature increases the momentum broadening. From the left panel of Fig. 3, we observe maximum momentum broadening of \(D^{0}\) spectra in the thermal bath of an ideal hadron gas, while reduced number density reduces the momentum diffusion coefficient. Comparing our VDWHRG estimation with other phenomenological works in the right panel of Fig. 3, we find that our estimation is consistent with the results reported by Ghosh et al. [16] and Torres-Rincon et al. [54].
In Fig. 4, we compute the spatial diffusion coefficient and observe its variation with temperature. We observe a decreasing trend with an increase in temperature. The AdS/CFT calculation yields a lower bound of \(2\pi TD_{s}\)=1 near the critical temperature; as we approach \(T_{c}\), the value of \(2\pi TD_{s}\) tends towards a minimum. A slight difference in the estimation of \(2\pi TD_{s}\) from different HRG models is due to the effect of interactions in the models. The EVHRG model gives the largest spatial diffusion coefficient; this can be understood as follows: due to the repulsive interactions in the EVHRG model, the number density decreases significantly, allowing the \(D^{0}\) meson to diffuse with relative ease. On the right panel, we compare our VDWHRG results with other studies estimating the spatial diffusion coefficient. In the hadronic medium, the results obtained by Torres-Rincon et al. [54] and Ozvenchuk et al. [19] align well with our result and show a decrease with an increase in temperature. This is because as the temperature increases, the number density increases, and as a result, the interaction in the medium increases as well, which in turn decreases the spatial diffusion coefficient. We can observe minima near the critical temperature owing to the emergence of a deconfined medium. In the partonic phase, our previous work [9] and the result obtained from the T-matrix approach [15] show an increase in \(D_{s}\) with increasing temperature. This is because, at higher temperatures, the partons become asymptotically free, weakening the strong interaction, which causes \(2\pi TD_{s}\) to increase at higher temperatures.
In Fig. 5, we study the drag coefficient, transverse momentum diffusion coefficient, and spatial diffusion coefficient of \(D^{0}\) meson in a van der Waals HRG model for finite baryonic chemical potential. The \(\mu_{B}\) values taken correspond to various colliders at different collision energies. \(\mu_{B}=0\) GeV corresponds to the LHC,
0.025 and 0.200 GeV correspond to RHIC at \(\sqrt{s_{NN}}=200\) GeV and 19.6 GeV respectively. Similarly, \(\mu_{B}=0.436\) and 0.630 GeV correspond to RHIC/FAIR at \(\sqrt{s_{NN}}=7.7\) GeV and NICA at \(\sqrt{s_{NN}}=3\) GeV respectively [55; 56; 57; 58]. We observe a trend of increasing values of \(\gamma\) and \(B_{0}\) with an increase in \(\mu_{B}\). However, as the number density saturates at a higher temperature, the distinction between high and low \(\mu_{B}\) becomes nonexistent. As \(\mu_{B}\) increases, the spatial diffusion coefficient displays a similar pattern of decreasing value at lower and intermediate temperatures. However, at higher temperatures, \(2\pi TD_{s}\) approaches a consistent value regardless of the \(\mu_{B}\) value. For \(\mu_{\rm B}=0.630\) GeV, we observe a non-monotonic change in the value of \(\gamma\), \(B_{0}\), and \(D_{s}\) at a lower temperature. It is more visible for the spatial diffusion coefficient, \(2\pi TD_{s}\). This might be due to the possible approach of the system towards the liquid-gas phase transition in the VDWHRG model.
## IV Charm abundances in the hadronic medium
The critical temperature for the transition from hadronic to partonic degrees of freedom at zero baryochemical potential is estimated to be around 155 MeV from the lQCD calculations [40]. The light quark bound states dissolve at or around this temperature, demonstrating the strong connection between the chiral crossover and deconfinement of light quark degrees
Figure 3: Transverse momentum diffusion coefficient as a function of temperature. A comparison among the Ideal HRG, EVHRG, and VDWHRG models (left). The red dot-dashed line from Ref. [16] and the green dashed-dotted-dotted line from Ref. [54], compared with the VDWHRG result (right).
Figure 2: Variation of drag coefficient with temperature. A comparison among the Ideal HRG, EVHRG, and VDWHRG models (left). A comparison between our result and different phenomenological models (right). The red dot-dash line is obtained from Ref. [16], the black dashed line is the result from Ref. [19], and the green dashed-dotted-dotted line is taken from Ref. [54].
of freedom. This results in an abrupt change in the bulk thermodynamic observables, such as the speed of sound, which exhibits a minimum around \(T_{\rm c}\)[40]. This change is even more evident in the behavior of fluctuations of conserved charges, such as baryon number, electric charge, or strangeness fluctuations. The quick shift in the degrees of freedom carrying the necessary conserved charges is directly reflected in the ratios of the various moments (cumulants) of net-charge fluctuations and their correlations in the transition zone. The overall number of hadronic degrees of freedom or the precise hadronic mass spectrum also affects bulk thermodynamics. To illustrate, the steep rise of the trace anomaly discovered in lattice QCD computations may indicate contributions from hadron resonances that have not yet been seen [59]. In addition, some recent works have shown that the chemical freeze-out temperature for the light hadrons is not the same as that of the strange hadrons [60; 61]. The singly strange, doubly strange, and triply strange hadrons all freeze out at different temperatures, thus emphasizing the case for a differential chemical freeze-out scenario [62; 63]. Along the same lines, one can assume that such a condition may be observed in the charm sector as well. Thus, a thorough investigation of the charm sector is necessary.
Although it appears to be established that charmonium states, or bound states with hidden charm, continue to exist in the QGP at temperatures much higher than \(T_{\rm c}\)[64], this may not be true for the heavy-light mesons or baryons, such as open charm mesons (\(D^{0},D^{+},D^{-},D_{s}\)) or charmed baryons (\(\Lambda_{c},\Xi_{c},\Omega_{c}\)) [27]. To address this question of melting charmed hadrons, one needs to compute net-charm fluctuations, cumulants, and correlations between their moments and the moments of net baryon number, electric charge, or strangeness fluctuations. We can compute the susceptibilities of the conserved charges by the formula
\[\chi^{\rm BSQC}_{\rm ijkl}=\frac{\partial^{\rm i+j+k+l}(P/T^{4})}{\partial(\mu _{B}/T)^{\rm i}\,\partial(\mu_{S}/T)^{\rm j}\,\partial(\mu_{Q}/T)^{\rm k}\,\partial(\mu_{C}/T)^{\rm l}}. \tag{27}\]
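A minimal sketch of how such a susceptibility can be evaluated in practice: the ideal Boltzmann pressure of a small, illustrative charm-hadron list (a stand-in for the full PDG plus quark-model list used below) is differentiated by central finite differences in \(\mu_{C}/T\). For Boltzmann statistics and \(|{\rm C}|=1\) states, the printed ratio \(\chi_{4}^{C}/\chi_{2}^{C}\) equals unity, matching the ideal-HRG expectation discussed below.

```python
import numpy as np
from scipy.special import kn

# Toy list: (mass [GeV], degeneracy g, charm C); a stand-in for the PDG list.
SPECIES = [(1.865, 1, +1),   # D0
           (1.870, 1, +1),   # D+
           (2.007, 3, +1),   # D*0
           (2.286, 2, +1)]   # Lambda_c

def pressure_over_T4(T, muC_over_T):
    """Ideal Boltzmann HRG pressure P/T^4, particles plus antiparticles."""
    p = 0.0
    for m, g, C in SPECIES:
        base = g / (2 * np.pi**2) * (m / T)**2 * kn(2, m / T)
        p += base * (np.exp(C * muC_over_T) + np.exp(-C * muC_over_T))
    return p

def chi_C(T, order, h=1e-2):
    """chi_n^C from Eq. (27), via central finite differences in muC/T."""
    f = lambda x: pressure_over_T4(T, x)
    if order == 2:
        return (f(h) - 2 * f(0) + f(-h)) / h**2
    return (f(2 * h) - 4 * f(h) + 6 * f(0) - 4 * f(-h) + f(-2 * h)) / h**4

for T in (0.14, 0.15, 0.16):
    c2, c4 = chi_C(T, 2), chi_C(T, 4)
    print(f"T={T:.2f} GeV: chi2^C={c2:.3e}, chi4^C/chi2^C={c4 / c2:.3f}")
```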
For this section of the calculation, we use the particle
Figure 4: Spatial diffusion coefficient as a function of temperature. A comparison among the Ideal HRG, EVHRG, and VDWHRG models (left). A comparison of our findings with other phenomenological models (right). The violet markers are from lQCD calculations [13], the black dashed line is from Ref. [9], and the green and red bands are obtained from the T-matrix calculation [15]. In the hadronic phase, the black dotted line is taken from Ref. [19] and the dashed-dotted-dotted line is from Ref. [54].
Figure 5: Drag coefficient, transverse momentum diffusion coefficient, and spatial diffusion coefficient as a function of temperature for finite \(\mu_{B}\) values.
list from the Particle Data Group (PDG) [65]. In addition, we have included the undiscovered charmed states predicted by the quark model [66; 67], without which one underestimates the lQCD data [27]. In the upper panel of Fig. 6, we study the variation of the second-order cumulant, \(\chi_{2}^{C}\), with temperature. We observe an increase in its value with an increase in temperature. This is primarily because, numerically, the second-order fluctuation is the second derivative of the pressure with respect to the corresponding chemical potential. As the pressure increases with temperature, the second-order susceptibility increases as well. A slight deviation can be observed between the IHRG, VDWHRG, and EVHRG estimations. In the lower panel of Fig. 6, we plot the ratio of the fourth-order net charm fluctuation to the second-order net charm fluctuation. This ratio gives the kurtosis of the net-charm distribution. From the lQCD calculations, the ratio is estimated to be unity within errors, which means that the distribution is normal. Our results are in line with the findings from the lQCD calculation. One can see a slight deviation of the VDWHRG results from the HRG results at high temperatures.
In the left panel of Fig. 7, we plot the ratio of the fourth-order cumulants \(\chi_{13}^{\rm BC}\) and \(\chi_{22}^{\rm BC}\). Mathematically, it can be understood as the ratio of charm number to baryon number. In the hadronic sector, we have \(|{\rm B}|=|{\rm C}|=1\), so the ratio should ideally be unity in an uncorrelated hadronic medium. In the partonic phase, we have \(|{\rm C}|=1\) and \(|{\rm B}|=1/3\), thus the ratio rises to 3. For this study, as mentioned in [27], we have only considered hadrons with \(|{\rm C}|=1\). This is because hadrons with \(|{\rm C}|=2\) and \(|{\rm C}|=3\) are much heavier and their contribution is negligible. Our results show that the ideal HRG model fails to explain the trend, while estimations from the VDWHRG model are consistent with the lQCD data. The VDWHRG model can explain the lQCD data up to 200 MeV. The van der Waals interactions between the hadrons with increasing temperature mimic the behavior of a deconfined medium up to a certain temperature. An analogy can be drawn with the PNJL model, where the quarks gain masses below the critical temperature, and the model can explain the hadronic sector even though no hadrons are present in the model [68]. In the right panel of Fig. 7, we plot the ratio of cumulants that receive contributions from the open charmed mesons. We calculate the contribution due to the charmed meson fluctuation from any second-order or fourth-order charmed fluctuation by subtracting the contribution of the charmed baryons. In this plot, one observes that the HRG model again fails to explain the rise in the ratio of the cumulants with temperature. However, findings from the VDWHRG model are consistent with lQCD up to around 180 MeV.
Finally, in Fig. 8, we compute fourth-order cumulants of net charm fluctuations and their correlations with conserved charges like net baryon number, electric charge, and strangeness. We take ratios of appropriate cumulants sensitive to the melting of charmed hadrons and study their variation with temperature. In the left panel, we plot the ratio of net charmed baryon fluctuation to net charmed meson fluctuation. The middle panel shows the fluctuation ratio of all charged-charmed baryons to all charged-charmed mesons. In the right panel, we show the variation in the ratio of all strange-charmed baryons to all strange-charmed mesons. We observe a similar trend in all the plots: the ideal HRG result increases almost monotonically with temperature, deviating from the lQCD results above temperatures of 160-170 MeV. However, results from the VDWHRG model reach a peak around 160-170 MeV and then decrease towards higher temperatures. The disagreement between lQCD and VDWHRG at high temperatures may be because our study excludes meson-meson attraction, meson-(anti)baryon interactions, and baryon-antibaryon interactions. Nevertheless, one can clearly see that the VDWHRG model explains the lQCD data much better than the IHRG and EVHRG models.
## V Summary and discussion
In this work, we present a phenomenological estimation of the drag and diffusion coefficients of the \(D^{0}\) meson. We also study the effect of the baryon chemical potential and of the interactions in the system on \(D^{0}\) meson diffusion. Due to the van der Waals interaction, a non-monotonic behavior can be seen in the low-temperature, high-\(\mu_{B}\) regime. Our estimation of \(2\pi TD_{s}\) approaches a minimum around the critical temperature. Moreover, we estimate the charm number fluctuations within the van der Waals hadron resonance gas model. We observe that the VDWHRG model shows a good agreement with the lQCD results up to \(T\simeq 180\) MeV. This study can help us understand the melting of
charmed hadrons in a hot and dense medium formed in an ultra-relativistic collision.
The study of heavy-flavour hadron dynamics provides us with unique opportunities to understand the hot and dense matter produced in heavy-ion collisions at ultra-relativistic energies. The \(D^{0}\) meson, which is the lightest neutral charmed hadron, can give us information about the medium through the study of its drag and diffusion coefficients. This information is encoded within the elliptic flow (\(v_{2}\)) and the nuclear suppression factor (\(R_{AA}\)) of the \(D^{0}\) meson, which can be measured in experiments. On the other hand, one can, in principle, study the net charm number fluctuation by taking net \(D^{+}\) and \(D^{-}\) meson fluctuations as a proxy. This is because, for the net charm cumulants calculation, one needs to take two particle species (particle and antiparticle, \(\Delta N_{c}=N_{c}-\bar{N}_{c}\)) into consideration. By default, \(D^{0}\) and \(\bar{D}^{0}\) would have been the ideal choice, as they are the lightest charmed hadrons. However, as observed in the LHCb experiment with a significance of 8.2 standard deviations, the \(D^{0}\) and \(\bar{D}^{0}\) suffer from oscillations [69]. Hence, they are not an ideal probe for net charm cumulants estimation. Instead, one can use net \(D^{+}\) and \(D^{-}\) meson fluctuations as probes to study charm number fluctuations at both the ALICE and STAR experiments. In view of ALICE run-3 and a high-luminosity collision environment, the charm sector will be of high importance. Results such as the \(D^{0}\) meson \(v_{2}\) and \(R_{AA}\) will be more accurate, with smaller uncertainties. This means the theoretical and phenomenological models must be fine-tuned to explain the data. In addition, with higher statistics, it would be interesting to see the charm number cumulants, which would shed light on the melting of charmed hadrons in a hot and dense medium.
## VI Acknowledgement
K.G. acknowledges the financial support from the Prime Minister's Research Fellowship (PMRF), Govern
Figure 8: Fourth-order net charm fluctuations as a function of temperature computed using the IHRG, EVHRG, and VDWHRG models. The ratio of fluctuations receiving contributions from all charmed hadrons (left), charged charmed hadrons (middle), and strange charmed hadrons (right) compared with data from the Hot QCD collaboration [27].
Figure 7: Second-order and fourth-order fluctuations as a function of temperature computed using the IHRG, EVHRG, and VDWHRG models. In the left panel, the ratio of fourth-order net baryon and charm correlations. In the right panel, the contributions coming from the open charm mesons and a comparison with results from [27].
ment of India. K.K.P. acknowledges the doctoral fellowships from the University Grants Commission (UGC), Government of India. The authors acknowledge the valuable discussion with Ronald Scaria. The authors gratefully acknowledge the DAE-DST, Government of India funding under the mega-science project "Indian participation in the ALICE experiment at CERN" bearing Project No. SR/MF/PS-02/2021-IITI(E-37123).
|
2305.14924 | Observational signatures of rotating black holes in the semiclassical
gravity with trace anomaly | In a recent work by Fernandes [arXiv:2305.10382], an exact stationary and
axisymmetric solution was discovered in semiclassical gravity with type-A trace
anomaly, identified as a quantum-corrected version of the Kerr black hole. In
this study, we explore the observational signatures of this black hole
solution. Our investigation reveals that there exist prograde and retrograde
light rings, whose radii increase monotonically with the coupling parameter
$\alpha$. We also observe that when $\alpha$ is negative, the shadow area for
the quantum-corrected black hole is smaller than that of the Kerr black hole,
whereas when $\alpha$ is positive, the area is larger. Furthermore, for a
near-extremal black hole, its high-spin feature (the NHEKline) is found to be
highly susceptible to disruption by $\alpha$. Moreover, we discuss the images
of the quantum-corrected black hole in the presence of a thin accretion disk
and compare them to those of the Kerr black hole. Our study highlights the
importance of near-horizon emission sources in detecting the effects of quantum
corrections by black hole images. | Zhenyu Zhang, Yehui Hou, Minyong Guo | 2023-05-24T09:09:38Z | http://arxiv.org/abs/2305.14924v3 | # Light rings and shadows of rotating black holes in the semiclassical gravity with trace anomaly
###### Abstract
In a recent work by Fernandes [1], an exact stationary and axisymmetric solution was discovered in semiclassical gravity with type-A trace anomaly, identified as a quantum-corrected version of the Kerr black hole. This discovery presents exciting research opportunities for observing non-circular spacetimes. In this study, we explore the light rings and shadow of this black hole solution. Our investigation reveals that there exist prograde and retrograde normal light rings, whose radii increase monotonically with the coupling parameter \(\alpha\). We also observe that when \(\alpha\) is negative, the shadow area for the quantum-corrected black hole is smaller than that of the Kerr black hole, whereas when \(\alpha\) is positive, the area is larger. Furthermore, the NHEKline for a nearly extreme black hole disappears when \(\alpha\) is greater than zero, while it appears for negative \(\alpha\) even if the spin is not very high; this line sinks in its middle part when \(|\alpha|\) is relatively large.
\(*\) Corresponding author: [email protected]
## 1 Introduction
Semiclassical gravity is an approach that considers the backreaction of quantum fields while treating spacetime classically. One of the quantum effects in this scheme is the trace anomaly, which refers to the breaking of symmetry in a conformally invariant classical theory due to one-loop quantum corrections [2]. As a result of the trace anomaly, the renormalized stress tensor of quantum fields has a non-zero trace, which serves as a source term for the semiclassical Einstein equations. The trace anomaly may also induce higher-order curvature terms such as the Gauss-Bonnet term, which arises from the type-A anomaly [3].
The black hole solutions in semiclassical gravity represent the corrected versions of black holes that account for quantum effects. However, deriving these solutions is challenging because the renormalized stress tensor is often unknown, requiring additional assumptions to solve the problem. More than a decade ago, considering only the type-A anomaly, [4] was the first to find the static and spherically symmetric black hole solution in a four-dimensional spacetime within the framework of such semiclassical gravity. Interestingly, the same black hole solution was obtained in the 4D Einstein-Gauss-Bonnet (EGB) theory, even though the re-scaling procedure proposed in [5] attracted criticism on multiple fronts [6, 7, 8, 9, 10]. Note that an alternative approach to understanding the renormalized stress tensor of quantum fields is to consider an effective action that incorporates the anomaly in a gravitational theory with a conformally coupled scalar field [11, 12]. To remedy issues with the original 4D EGB theory, several regularization procedures have been proposed in [13, 14, 15]. Nevertheless, the 4D spherically symmetric black holes have been widely studied in subsequent works [16, 17, 18, 19] because of their intriguing features resulting from quantum effects. Furthermore, since black holes in our universe are thought to be rotating, having an exact stationary and axisymmetric solution to the semiclassical Einstein equations that is sourced by the type-A trace anomaly is crucial for modeling the actual black holes in space.
Very recently in [1], the author solved this problem by adopting a Kerr-Schild ansatz and analytically solving the Einstein equation to obtain the exact stationary and axisymmetric solution in semiclassical gravity with the type-A trace anomaly. Compared to the classical Kerr black hole, this new solution replaces the ADM mass with a mass function given by
\[\mathcal{M}=\mathcal{M}(r,\theta)=\frac{2M}{1+\sqrt{1-\frac{8\alpha r\xi M}{ \Sigma^{3}}}}\,, \tag{1.1}\]
where \(M\) represents the ADM mass, \(\Sigma=r^{2}+a^{2}\cos^{2}\theta\) with \(a\) denoting the spin parameter, \(\xi=r^{2}-3a^{2}\cos^{2}\theta\), and \(\alpha\) representing the coupling constant of the type-A anomaly. This rotating black hole solution includes quantum corrections and reduces to the classical Kerr spacetime when \(\alpha=0\). Furthermore, when \(a=0\), the solution reduces to the static and spherically symmetric solution in
semiclassical gravity. This new solution presents several unique characteristics. For instance, the event horizon geometry is non-spherically symmetric, and there exists another Killing horizon outside of it. Additionally, under specific coupling constants, the spin parameter may surpass the traditional Kerr bound. This suggests that black holes may possess higher spins than their classical counterparts [20].
It is also interesting to investigate the observational features of this novel black hole from an astrophysical perspective. With the Event Horizon Telescope collaboration already capturing images of supermassive black holes at the centers of galaxies [21, 22, 23], studying image features, particularly the shadow of this black hole, becomes essential. The size and shape of a shadow can reflect the geometric structure and physical properties of the central black hole, and thus have the potential to test the coupling parameter of the quantum-corrected black hole. There are already several studies on the shadows and images of quantum-corrected black holes [16, 17, 24, 25].
In this work, we focus on the light rings (LRs) [26, 27, 28] and shadows of the newly found black hole in [1]. We calculate the effective potential of particles in the equatorial plane and derive the equation governing the locations of the LRs. While the equation cannot be solved analytically, we numerically calculate the LRs as functions of the coupling constant under different spin. The existence of LRs implies a critical curve and a shadow in the observer's screen. Using a celestial light source, we illuminate the black hole and explore its shadow images.
The paper is organized as follows. In Sec. 2, we shall review the quantum-corrected Kerr black hole, then explore the particle motions and light rings in this background spacetime. In Sec. 3, we calculate the LRs and study the shadow images of the quantum-corrected Kerr black hole illuminated by a celestial source. We summarize and conclude this work in Sec. 4. We work in geometrized units with \(8\pi G=c=1\) in this paper.
## 2 The quantum-corrected Kerr black hole and its light rings
To begin, let us briefly examine the semiclassical Einstein gravity with type-A anomaly and quantum-corrected Kerr black holes as described in [1]. In this framework, the background geometry remains classical while the quantum fields influence the geometry through their expectation value of the renormalized stress tensor, denoted as \(\langle T_{\mu\nu}\rangle\), in the Einstein equations. It is worth noting that the trace of \(\langle T_{\mu\nu}\rangle\) is non-zero and dependent only on local curvature. If we consider the type-A trace anomaly, we have
\[g^{\mu\nu}\langle T_{\mu\nu}\rangle=\frac{\alpha}{2}\,\mathcal{G}\,, \tag{2.1}\]
where \({\cal G}=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\) is the Gauss-Bonnet scalar. By combining Eq. (2.1) with the semiclassical Einstein equations, \(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\langle T_{\mu\nu}\rangle\), we arrive at the following result:
\[R=\frac{\alpha}{2}{\cal G}\,. \tag{2.2}\]
In general, it is typically impossible to solve the semiclassical Einstein equations when the renormalized stress tensor remains undetermined. However, in a fascinating discovery, the author of [1] was able to find a stationary and axisymmetric solution by adopting a Kerr-Schild ansatz and directly solving Eq. (2.2). These solutions can be interpreted as quantum-corrected Kerr black holes. The initial line element was originally written in the ingoing Kerr-like coordinate system \((\nu,r,\theta,\varphi)\). For the sake of simplicity in our study, we will transform this into the BL coordinates \((t,r,\theta,\phi)\),
\[d\nu=dt+\frac{r^{2}+a^{2}}{\Delta}dr\,,\quad d\varphi=d\phi+\frac{a}{\Delta}dr\,, \tag{2.3}\]
\[ds^{2}=-\frac{\Delta}{\Sigma}\big{(}dt-a\sin^{2}\theta d\phi\big{)}^{2}+ \Sigma\bigg{(}\frac{dr^{2}}{\Delta}+d\theta^{2}\bigg{)}+\frac{\sin^{2}\theta}{ \Sigma}\big{[}adt-(r^{2}+a^{2})d\phi\big{]}^{2}\,, \tag{2.4}\]
where \(\Delta=r^{2}-2{\cal M}r+a^{2}\). We see that \({\cal M}\) is the mass function introduced in Eq. (1.1). Since \({\cal M}\) is dependent on \(\theta\), the resulting spacetime does not satisfy the circularity conditions [29]. Fig. 1 provides various examples of \({\cal M}\) under typical coupling constants. It is worth noting that a significant difference exists between the cases with positive and negative coupling constants. Additionally, the radius of the event horizon is also a function of \(\theta\), which renders it non-spherically symmetric and requires numerical solving. Moreover, it has been observed that another Killing horizon is present at \(\Delta=0\). This horizon does not coincide with the primary horizon but is located quite close to it.
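Because \(\Delta=0\) involves the \(\theta\)-dependent mass function of Eq. (1.1), the Killing horizon must be located numerically. A minimal Python sketch (the bracketing window and the sample parameters \(a=0.8\), \(\alpha=0.1\) are illustrative choices, with \(M=1\)):

```python
import numpy as np
from scipy.optimize import brentq

def mass_function(r, theta, a, alpha, M=1.0):
    """Quantum-corrected mass function of Eq. (1.1)."""
    Sigma = r**2 + a**2 * np.cos(theta)**2
    xi = r**2 - 3 * a**2 * np.cos(theta)**2
    return 2 * M / (1 + np.sqrt(1 - 8 * alpha * r * xi * M / Sigma**3))

def Delta(r, theta, a, alpha):
    return r**2 - 2 * mass_function(r, theta, a, alpha) * r + a**2

def killing_horizon(theta, a, alpha, r_lo=1.0, r_hi=4.0):
    """Outermost root of Delta(r, theta) = 0; the grid must stay inside
    the domain where the square root in the mass function is real."""
    rs = np.linspace(r_lo, r_hi, 2000)
    vals = [Delta(r, theta, a, alpha) for r in rs]
    for i in reversed(range(len(rs) - 1)):       # outermost sign change
        if vals[i] * vals[i + 1] < 0:
            return brentq(Delta, rs[i], rs[i + 1], args=(theta, a, alpha))
    return np.nan

a, alpha = 0.8, 0.1
for theta in (0.0, np.pi / 4, np.pi / 2):        # alpha = 0 would give r = 1.6
    print(f"theta={theta:.3f}: Delta = 0 at r = {killing_horizon(theta, a, alpha):.5f}")
```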
The focus of our work is on particle motions around the quantum-corrected Kerr black hole. Specifically, we are interested in null geodesics since they are a crucial component of the observational properties associated with black holes. The constrained Hamiltonian for a point particle is given by
\[{\cal H}=\frac{1}{2}g_{\mu\nu}p^{\mu}p^{\nu}=-\frac{1}{2}m^{2}\,, \tag{2.5}\]
Figure 1: \({\cal M}\) as a function of \(r\) for different values of \(\alpha\), evaluated at the pole (Left) and the equatorial plane (Right). The spin is fixed at \(a=0.8\).
where \(p^{\mu}\) represents the momentum and \(m\) is the rest mass of the particle. As the spacetime we are dealing with is stationary and axisymmetric, the particle acquires a conserved energy \(E\) and angular momentum \(L\) through the contraction of \(p_{\mu}\) with the Killing vectors \(\partial_{t}\), \(\partial_{\phi}\). However, due to the non-circular nature of the mass function in Eq. (1.1), it seems impossible to separate the equations of motion into first-order equations of \(r\) and \(\theta\) as can be done in Kerr spacetime. Nonetheless, we can still gain valuable physical insights by examining particle motions in the equatorial plane, \(\theta=\pi/2\). In the case of equatorial motions, Eq. (2.5) reduces to
\[\left(\frac{dr}{d\lambda}\right)^{2}=V_{eff}(r)=E^{2}-m^{2}+\frac{2\mathcal{M} m^{2}}{r}+\frac{a^{2}(E^{2}-m^{2})-L^{2}}{r^{2}}+\frac{2\mathcal{M}(L-aE)^{2}}{ r^{3}}\,, \tag{2.6}\]
where we have defined an effective potential \(V_{eff}\) for convenience. Let us now focus on the case of photons with \(m=0\). The key property of photons is that their conserved energy can be absorbed into the affine parameter by rescaling \(\lambda\to\lambda E\). Then, by defining \(l\equiv L/E\), the effective potential becomes
\[V_{eff}(r)=1+\frac{a^{2}-l^{2}}{r^{2}}+\frac{2\mathcal{M}(l-a)^{2}}{r^{3}}\,. \tag{2.7}\]
For photons, the unstable circular orbits, also known as light rings (LRs), play a crucial role in the observational properties of black holes. LRs can be obtained by setting
\[V_{eff}=0\,,\quad\partial_{r}V_{eff}=0\,. \tag{2.8}\]
From these equations, we obtain
\[l=-\frac{a\bigg{(}r^{3}+9r-8\alpha+6r^{2}\sqrt{1-\frac{8\alpha}{ r^{3}}}\bigg{)}}{r^{3}-9r+8\alpha}\,,\] \[1+\frac{a^{2}-l^{2}}{r^{2}}+\frac{4(l-a)^{2}}{r^{3}\sqrt{1-\frac{ 8\alpha}{r^{3}}}}=0\,, \tag{2.9}\]
where and thereafter, for simplicity and without loss of generality, we set \(M=1\). Eq. (2.9) provides the equation for the LRs
\[1+\frac{16a^{2}\bigg{(}r^{3}-8\alpha+3r^{2}\sqrt{1-\frac{8\alpha}{r^{3}}}\bigg{)} ^{2}}{r^{3}\big{(}r^{3}-9r-8\alpha\big{)}^{2}\bigg{(}1+\sqrt{1-\frac{8\alpha}{ r^{3}}}\bigg{)}}+\frac{a^{2}}{r^{2}}-\frac{a^{2}\bigg{(}r^{3}+9r-8\alpha+6r^{2} \sqrt{1-\frac{8\alpha}{r^{3}}}\bigg{)}^{2}}{r^{2}(r^{3}-9r-8\alpha)^{2}}=0\,. \tag{2.10}\]
As Eq. (2.10) is somewhat complicated, performing analytical calculations can be difficult; therefore, we solve the LRs numerically. There are two real roots, \(r_{p}\), \(r_{m}\), which correspond to prograde and retrograde photon orbits, respectively. It has been confirmed that these LRs are unstable under radial perturbations, that is, \(\partial_{r}^{2}V_{eff}>0\). Consequently, similar to the Kerr spacetime, the LRs in
the quantum-corrected Kerr solution imply a _shadow_ region in the image of such black hole, which is studied and discussed in the next section.
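A minimal sketch of this numerical procedure: the left-hand side of Eq. (2.10) is scanned for sign changes and each bracket is refined with a one-dimensional root finder (with \(M=1\), as in the text; the scan window is an illustrative choice). In the limit \(\alpha\to 0\), the roots should reproduce the Kerr photon-orbit radii, which provides a quick consistency check.

```python
import numpy as np
from scipy.optimize import brentq

def lr_condition(r, a, alpha):
    """Left-hand side of Eq. (2.10); its real roots are the LR radii."""
    s = np.sqrt(1 - 8 * alpha / r**3)
    term2 = (16 * a**2 * (r**3 - 8 * alpha + 3 * r**2 * s)**2
             / (r**3 * (r**3 - 9 * r - 8 * alpha)**2 * (1 + s)))
    term4 = (a**2 * (r**3 + 9 * r - 8 * alpha + 6 * r**2 * s)**2
             / (r**2 * (r**3 - 9 * r - 8 * alpha)**2))
    return 1 + term2 + a**2 / r**2 - term4

def light_rings(a, alpha, r_min=1.05, r_max=6.0, n=4000):
    """Bracket every sign change of Eq. (2.10) on a grid and refine."""
    rs = np.linspace(r_min, r_max, n)
    with np.errstate(divide="ignore", invalid="ignore"):
        vals = lr_condition(rs, a, alpha)
    roots = []
    for i in range(n - 1):
        if (np.isfinite(vals[i]) and np.isfinite(vals[i + 1])
                and vals[i] * vals[i + 1] < 0):
            roots.append(brentq(lr_condition, rs[i], rs[i + 1], args=(a, alpha)))
    return roots  # sorted: prograde radius r_p first, retrograde r_m second

print(light_rings(0.9, 0.0))    # Kerr check: approximately 1.56 and 3.91
print(light_rings(0.9, 0.05))   # both radii shift outward for alpha > 0
```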
Fig. 2 illustrates how the radii of LRs change as the coupling constant \(\alpha\) increases, while the spin parameter remains fixed at different values. It is worth noting that although Eq. (2.10) has solutions throughout the range \(-1\leq\alpha\leq 8\), only the colored parts of the curves correspond to actual black hole solutions, i.e., where the event horizon encloses the curvature singularities. The gray portions of the curves correspond either to cases where the event horizon is complex or where the singularity crosses the event horizon. For the case of \(a=0.9\) (bottom right), there are two distinct intervals for black hole solutions: \(-0.0649\leq\alpha\leq 0.3487\) and \(2.25\leq\alpha\leq 6.784\), consistent with the domain of black hole solutions found in [1]. In the case of \(a=1\) (bottom left), the black hole touches the Kerr bound, and a black hole still exists for \(\alpha=0\) (red dots) and \(4.5698\leq\alpha\leq 6.47976\). In any case, it is apparent that both \(r_{p}\) and \(r_{m}\) increase with \(\alpha\). This implies that a larger value of \(\alpha\) could potentially result in a larger shadow in the black hole image. This finding aligns with previous research studies [16, 17, 24].
Figure 2: LRs as functions of \(\alpha\), under different spin parameters. The red and orange colors denote the prograde orbit and retrograde orbit, respectively.
## 3 Black hole shadows illuminated by a celestial light source
It is known that the LRs affect the observational signature of a black hole significantly. Photons launched near these unstable orbits complete multiple loops around the black hole before reaching the observer, which leads to the formation of a critical curve in the observer's screen [30]. To investigate this characteristic for the quantum-corrected Kerr black hole, we utilize a celestial light source to illuminate the black hole. The celestial sphere is centered on the black hole, and its radius is much larger than both the event horizon radius and the distance between the observer and the black hole [31]. This setup allows the celestial source to outline the critical curve accurately and reveal the unstable photon orbits. However, since the horizon is concealed behind the LRs, it cannot be directly illuminated by the celestial source. To explore the horizon, a more realistic model such as the BH-disk system needs to be considered [32; 33; 34; 35].
However, in this paper, we are only interested in the observational features of the unstable photon orbits. When illuminated by the celestial source, any spacetime information that lies behind the LRs remains invisible on the screen. Light rays that penetrate the interior of the prograde LR \(r_{p}\) are inevitably captured by the black hole, creating a shadow region on the screen. Therefore, during the imaging process, we can replace the radius of the event horizon with that of the Killing horizon1 so that photons falling into the Killing horizon are treated similarly to those falling into the black hole.
Footnote 1: The reason for replacing the event horizon with the Killing horizon is that it is relatively easy to solve numerically compared to the former. One can check that both horizons reside within the LRs.
Once the celestial model is established, we utilize our backward ray-tracing method, developed in [36, 31], to generate images of the black hole. The numerical strategy involves setting up a camera model at the observer and integrating the equations of motion along the null geodesics moving backward from the observer. To achieve this, we employ a fisheye camera model that incorporates the stereographic projection of the momentum \(p_{\mu}\) of photons onto the screen. For additional details, refer to [31]. With the values of \(p_{\mu}\) on the screen already determined, we may now proceed with determining the trajectories of the photons by performing backward integration of the Hamiltonian equations,
\[\frac{\partial\mathcal{H}}{\partial p_{\mu}}=\dot{x}^{\mu}\,,\quad\frac{ \partial\mathcal{H}}{\partial x^{\mu}}=-\dot{p}_{\mu}\,, \tag{3.1}\]
where the dot denotes the derivative with respect to the affine parameter for null geodesics. During the ray-tracing process, light rays that reach the celestial sphere are colored based on their positions. Rays that reach the horizon are colored black, creating a shadow region on the screen. Additionally, to quantify the impact of quantum corrections on the shadow size, we introduce a parameter \(\eta\equiv S_{\rm BH}/S_{\rm Kerr}\). This parameter represents the area ratio between the shadow of a quantum-corrected black hole and that of a Kerr black hole with identical spin.
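A minimal sketch of this backward integration, restricted to equatorial photons so that only the \((t,r,\phi)\) metric block is needed; the finite-difference step, stopping radii, and sample parameters are illustrative choices, and the full imaging code of [31; 36] additionally handles the \(\theta\) motion and the camera model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def inv_metric_eq(r, a, alpha, M=1.0):
    """Inverse metric of Eq. (2.4) on the equatorial plane, coordinates (t, r, phi)."""
    calM = 2 * M / (1 + np.sqrt(1 - 8 * alpha * M / r**3))  # Eq. (1.1) at theta = pi/2
    Delta = r**2 - 2 * calM * r + a**2
    g = np.zeros((3, 3))
    g[0, 0] = -(Delta - a**2) / r**2                     # g_tt
    g[0, 2] = g[2, 0] = -2 * a * calM / r                # g_tphi
    g[1, 1] = r**2 / Delta                               # g_rr
    g[2, 2] = ((r**2 + a**2)**2 - Delta * a**2) / r**2   # g_phiphi
    return np.linalg.inv(g)

def hamiltonian(r, p, a, alpha):
    return 0.5 * p @ inv_metric_eq(r, a, alpha) @ p      # Eq. (2.5) with m = 0

def rhs(lam, y, a, alpha, eps=1e-7):
    """Hamilton's equations, Eq. (3.1), for y = (t, r, phi, p_t, p_r, p_phi)."""
    r, p = y[1], y[3:]
    xdot = inv_metric_eq(r, a, alpha) @ p                # dx/dlam = dH/dp
    dHdr = (hamiltonian(r + eps, p, a, alpha)
            - hamiltonian(r - eps, p, a, alpha)) / (2 * eps)
    return [xdot[0], xdot[1], xdot[2], 0.0, -dHdr, 0.0]  # dp/dlam = -dH/dx

def captured(lam, y, a, alpha):                          # stop just outside the horizon
    return y[1] - 1.9
captured.terminal = True

def escaped(lam, y, a, alpha):                           # stop once far away
    return y[1] - 400.0
escaped.terminal = True

# Launch a photon backward from a distant observer: energy E = -p_t = 1,
# impact parameter l = p_phi; the ingoing p_r follows from the null condition.
a, alpha, r0, l = 0.8, 0.05, 200.0, 3.0
gi = inv_metric_eq(r0, a, alpha)
quad_c = gi[0, 0] - 2 * gi[0, 2] * l + gi[2, 2] * l**2   # H = 0 at the start
p_r0 = -np.sqrt(-quad_c / gi[1, 1])
sol = solve_ivp(rhs, (0.0, 5000.0), [0.0, r0, 0.0, -1.0, p_r0, l],
                args=(a, alpha), events=[captured, escaped],
                rtol=1e-9, atol=1e-12)
print("photon stopped at r =", sol.y[1, -1])
```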
Let us first consider the case of \(\alpha\geq 0\). Figure 3 displays black hole images for three different positive values of the coupling parameter, with an identical black hole spin \(a=0.99\) for an equatorial observer (\(\theta_{o}=\pi/2\)). It is evident that the black hole shadow increases in size due to the impact of a positive \(\alpha\). When the spin is nearly extreme, there is a vertical line segment visible in the left contour of the Kerr shadow observed by an equatorial observer, as shown in the left panel of Fig. 3. This vertical line is also referred to as the near-horizon-extreme-Kerr line (NHEKline) [37], since the emission near this line originates from the near-horizon-extreme-Kerr region of the Kerr spacetime. However, NHEKlines can be easily eliminated by using a small value of \(\alpha\), as seen in the middle panel of Fig. 3. As \(\alpha\) becomes sufficiently large, the shadow shape transforms into an ellipse, and the high spin signatures are no longer visible, as demonstrated in the right panel of Fig. 3. Moreover, regarding
Figure 4: The variation of the area ratio \(\eta\) with respect to positive \(\alpha\) for different black hole spin \(a\). The inclination angle of the observer is fixed at \(\theta_{o}=\pi/2\).
Figure 3: Black hole images of \(a=0.99\) with \(\alpha\geq 0\). Left: \(\alpha=0\), which corresponds to a near extreme Kerr BH. Middle: \(\alpha=0.02\). Right: \(\alpha=6.2\). The camera is placed at \(r_{o}=200\), \(\theta_{o}=\pi/2\).
the case of large \(\alpha\), the lensed images surrounding the shadow become narrower, which may indicate a different gravitational lensing process compared to that in the usual Kerr spacetime.
Figure 4 shows the variation of \(\eta\) with respect to \(\alpha\) under different black hole spins. The area ratio \(\eta\) progressively increases as the quantum-corrected parameter grows, with a more substantial impact on higher spin black holes. This finding aligns with the behavior of the LRs, whose radii are monotonically increasing functions of \(\alpha\). Furthermore, as illustrated in Fig. 5, we present an example of a violation of the Kerr bound, where \(a=1.06\) and \(\alpha=6.275\), which is close to the maximum value found in [1]. However, we do not observe any remarkable new features compared to the right panel of Fig. 3, where \(a=0.99\) and \(\alpha=6.2\). We hypothesize that strong quantum corrections suppress the high spin effect, even when the spin surpasses the Kerr bound.
Next, we consider the case of \(\alpha\leq 0\). For clarity, we present the black hole shadows for three different values of \(\alpha\) but with the same spin \(a=0.8\) for an equatorial observer in Fig. 6. For a Kerr
Figure 5: Black hole image of \(a=1.06\) and \(\alpha=6.275\), which violates the Kerr bound \(a\leq 1\). The camera is placed at \(r_{o}=200\), \(\theta_{o}=\pi/2\).
black hole, \(a=0.8\) is insufficient to form an NHEKline in the BH shadow. However, when considering a negative \(\alpha\), we observe that the right-hand side of the shadow curve starts to become increasingly straight, even for small \(|\alpha|\) (as shown in the middle panel of Fig. 6). Furthermore, as \(\alpha\) decreases, the right-hand side of the shadow curve exhibits a concave shape, as seen in the right panel of Fig. 6. Due to this effect, the shadow area may decrease as \(\alpha\) decreases. To verify this hypothesis, we plot the variation of the area ratio \(\eta\) with respect to \(\alpha\) in this case in Fig. 7. The range of \(\alpha\) follows the domain of existence for a black hole. By combining Fig. 7 with Fig. 4, we conclude that, similar to the radii of LRs, the area ratio \(\eta\) is a monotonically increasing function of the quantum-corrected parameter \(\alpha\).
## 4 Summary
In this work, we focused on the newly found rotating black hole solution in semiclassical gravity with type-A trace anomaly [1], and studied its LRs and shadows. The mass function \(\mathcal{M}\) depends on both \(r\) and \(\theta\), which are closely interconnected; as a result, the Hamiltonian for null particles cannot be separated in these variables. To determine the radii of LRs, we numerically calculated their values based on the effective potential of the photons. Our findings revealed that, for fixed \(a\) and \(\alpha\), the radii of retrograde LRs are consistently larger than those of prograde ones, regardless of whether \(\alpha\) is negative or positive. Moreover, we verified that the LRs are inherently unstable in the radial direction. This implies that if there is a light source, a black hole shadow can be formed. As with the LRs, the Hamiltonian of photons does not separate variables, so numerical methods were required
Figure 7: The variation of the area ratio \(\eta\) with respect to negative \(\alpha\). The black hole spin is fixed at \(a=0.8\) and the inclination angle of the observer is fixed at \(\theta_{o}=\pi/2\).
to calculate the black hole shadow. We therefore assumed a spherical light source illuminating the black hole and used the backward ray-tracing method to determine the shape and area of the shadow on the observer's screen. To describe the characteristics of the variation in the black hole shadow's area, we introduced a parameter \(\eta=S_{\rm BH}/S_{\rm Kerr}\). It represented the ratio between the shadow's area for a quantum-corrected black hole and that of a Kerr black hole under the same spin. Our findings showed that as \(\alpha\) increases gradually within its value range, \(\eta\) also increases. When \(\alpha\) is zero, the quantum-corrected black hole reduces to the Kerr black hole, and thus \(\eta\) equals one. We observe that when \(\alpha\) is negative, the shadow area for a quantum-corrected black hole is smaller than that of a Kerr black hole, whereas for positive \(\alpha\), the area is larger, indicating a distinct difference in behavior. An intriguing discovery concerning the variation in the shape of the black hole shadow was that the NHEKline disappears when \(\alpha\) is greater than zero. Conversely, when \(\alpha\) is less than zero, the NHEKline appears when \(|\alpha|\) is relatively small. Moreover, the middle part of the NHEKline sinks in when \(|\alpha|\) is relatively large. This behavior is possibly related to the non-circularity of the spacetime.
## Acknowledgments
The work is partly supported by NSFC Grant No. 12275004, 12205013 and 11873044. MG is also endorsed by "the Fundamental Research Funds for the Central Universities" with Grant No. 2021NTST13.
|
2310.01621 | The RESET and MARC Techniques, with Application to Multiserver-Job
Analysis | Multiserver-job (MSJ) systems, where jobs need to run concurrently across
many servers, are increasingly common in practice. The default service ordering
in many settings is First-Come First-Served (FCFS) service. Virtually all
theoretical work on MSJ FCFS models focuses on characterizing the stability
region, with almost nothing known about mean response time.
We derive the first explicit characterization of mean response time in the
MSJ FCFS system. Our formula characterizes mean response time up to an additive
constant, which becomes negligible as arrival rate approaches throughput, and
allows for general phase-type job durations.
We derive our result by utilizing two key techniques: REduction to Saturated
for Expected Time (RESET) and MArkovian Relative Completions (MARC).
Using our novel RESET technique, we reduce the problem of characterizing mean
response time in the MSJ FCFS system to an M/M/1 with Markovian service rate
(MMSR). The Markov chain controlling the service rate is based on the saturated
system, a simpler closed system which is far more analytically tractable.
Unfortunately, the MMSR has no explicit characterization of mean response
time. We therefore use our novel MARC technique to give the first explicit
characterization of mean response time in the MMSR, again up to constant
additive error. We specifically introduce the concept of "relative
completions," which is the cornerstone of our MARC technique. | Isaac Grosof, Yige Hong, Mor Harchol-Balter, Alan Scheller-Wolf | 2023-10-02T20:31:30Z | http://arxiv.org/abs/2310.01621v1 | # The RESET and MARC Techniques, with Application to Multiserver-Job Analysis
###### Abstract
Multiserver-job (MSJ) systems, where jobs need to run concurrently across many servers, are increasingly common in practice. The default service ordering in many settings is First-Come First-Served (FCFS) service. Virtually all theoretical work on MSJ FCFS models focuses on characterizing the stability region, with almost nothing known about mean response time.
We derive the first explicit characterization of mean response time in the MSJ FCFS system. Our formula characterizes mean response time up to an additive constant, which becomes negligible as arrival rate approaches throughput, and allows for general phase-type job durations.
We derive our result by utilizing two key techniques: REduction to Saturated for Expected Time (RESET) and MArkovian Relative Completions (MARC).
Using our novel RESET technique, we reduce the problem of characterizing mean response time in the MSJ FCFS system to an M/M/1 with Markovian service rate (MMSR). The Markov chain controlling the service rate is based on the saturated system, a simpler closed system which is far more analytically tractable.
Unfortunately, the MMSR has no explicit characterization of mean response time. We therefore use our novel MARC technique to give the first explicit characterization of mean response time in the MMSR, again up to constant additive error. We specifically introduce the concept of "relative completions," which is the cornerstone of our MARC technique.
keywords: queueing, response time, RESET, MARC, multiserver, MSJ, markovian service rate, heavy traffic +
Footnote †: journal: IFIP Performance
## 1 Introduction
Multiserver queueing theory predominantly emphasizes models in which each job utilizes only one server (one-server-per-job models), such as the M/G/k. For decades, such models were popular in the study of computing systems, where they provided a faithful reflection of the behavior of such systems while remaining conducive to theoretical analysis. However, one-server-per-job models no longer reflect the behavior of many modern computing systems.
**Multiserver jobs:** In modern datacenters, such as those of Google, Amazon, and Microsoft, each job now requests many servers (cores, processors, etc.), which the job holds simultaneously. A job's "server need" refers to the number of servers requested by the job. In Google's recently published trace of its "Borg" computation cluster [17; 46], the server needs vary by a factor of 100,000 across jobs. Throughout this paper, we will focus on this "multiserver-job model" (MSJ), in which each job requests some number of servers, and concurrently occupies that many servers throughout its time in service (its "duration").
**FCFS service:** We specifically study the first-come first-served (FCFS) service ordering for the MSJ model, a natural and practical policy that is the default in both cloud computing [9; 26; 43] and supercomputing [10; 23]. Currently, little is known about FCFS service in MSJ models.
**Stability under FCFS:** Even the stability region under FCFS scheduling is not generally understood. Some papers characterize the stability region under restrictive assumptions on the job duration distributions [1; 18; 32; 41; 42]. A key technique in these papers is the _saturated system_ approach [2; 12]. The saturated
system is a closed system in which completions trigger new arrivals, so that the number of jobs in the system is always constant. We are the first to use the saturated system for analysis beyond characterizing the stability region.
**Response time for FCFS:** Even less is known about mean response time \(\mathbb{E}[T]\) in MSJ FCFS systems: The only MSJ FCFS system in which mean response time has been analytically characterized is the simpler case of 2 servers and exponentially distributed durations [3; 11]. Mean response time is much better understood under more complex scheduling policies such as ServerFilling and ServerFilling-SRPT [17; 19], but these policies require assumptions on both preemption and the server need distribution, and do not capture current practices, which emphasize nonpreemptive policies. Mean response time is also better understood in MSJ FCFS scaling regimes, where the number of servers and the arrival rate both grow asymptotically [22; 49]. We are the first to analyze MSJ FCFS mean response time under a fixed number of servers.
**Why FCFS is hard to analyze:** One source of difficulty in studying the FCFS policy is the lack of work conservation. In simpler one-server-per-job models, a work-conservation property holds: If enough jobs are present, no servers will be idle. The same is true under the ServerFilling and ServerFilling-SRPT policies [17], which focus on the power-of-two server-need setting. Each policy selects a subset of the jobs available, and places jobs from that subset into service in largest-server-need-first order. By doing so, and using the power-of-two assumption, these policies always fill all of the servers, whenever sufficiently many jobs are present, thereby achieving work conservation.
Work conservation is key to the mean response time analysis of those systems, as one can often reduce the analysis of response time to the analysis of work. In contrast, the multiserver-job model under FCFS service is not work conserving: a job must wait if it demands more servers than are currently available, leaving those servers idle.
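A small illustration of this failure of work conservation under FCFS (the queue contents and the number of servers are an arbitrary example):

```python
def fcfs_in_service(queue, k):
    """Longest FCFS prefix of (server-need) jobs that fits in k servers."""
    used, served = 0, []
    for need in queue:
        if used + need > k:
            break                 # FCFS: no skipping past a blocked job
        served.append(need)
        used += need
    return served, k - used

queue, k = [3, 2, 1, 1], 4
served, idle = fcfs_in_service(queue, k)
print(served, "in service;", idle, "server(s) idle")   # [3] in service; 1 idle
# The head-of-line job needing 2 servers blocks, leaving a server empty
# even though the two single-server jobs behind it would fit.
```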
**First response time analysis:** We derive the first characterization of mean response time in the MSJ FCFS system. We allow any phase-type duration distribution, and any correlated distribution of server need and duration. Our result holds at all loads up to an additive error, which becomes negligible as the arrival rate \(\lambda\) approaches \(\lambda^{*}\), the threshold of stability.
**Proof structure:** We illustrate the structure of our proof in Fig. 1. We first use our RESET technique (REduction to Saturated for Expected Time) to reduce from the MSJ FCFS system to the At-least-\(k\) system (see Section 3.3). The At-least-\(k\) system is equivalent to an M/M/1 with Markovian service rate (MMSR) (see Section 3.2), where the service rate is based on the saturated system. By "Markovian service rate", we refer to a system in which the completion rate fluctuates over time, driven by an external finite-state Markov chain. We next use our MARC technique (MArkovian Relative Completions) to prove Theorem 4.1, the first characterization of mean response time in the MMSR.
Both steps are novel, hard, and of independent interest. We prove our MARC result first because it is a standalone result, characterizing mean response time for any MMSR system up to an additive constant. We then prove Theorem 4.2, our characterization of mean response time in the MSJ FCFS system, by layering our RESET technique on top of MARC. Theorem 4.2 characterizes mean response time in terms of several quantities that can be characterized explicitly and in closed form via a straightforward analysis of the saturated system. We walk through a specific example of using our result to explicitly characterize mean response time in Appendix C.
**Breadth of the RESET technique:** Our RESET technique is very broad, and applies to a variety of generalizations of the MSJ model and beyond (See Section 7). For instance, RESET can handle cases
Figure 1: The structure of our main results: RESET (Theorem 4.2) and MARC (Theorem 4.1).
where a job's server need varies throughout its time in service, and where the service rates at the servers can depend on the job. Finally, we can analyze scheduling policies that are close to FCFS but allow limited reordering, such as some backfilling policies.
**Breadth of the MARC technique:** Our MARC technique is also very broad, and applies to any MMSR system. For example, we can handle systems in which machine breakdowns lead to reduced service rate, or where servers are taken away by higher-priority customers.
This paper is organized as follows:
* Section 2: We discuss prior work on the MSJ model.
* Section 3: We define the MSJ model, the MMSR, the saturated system, relative completions, and related concepts.
* Section 4: We state our main results, and walk through an example of applying our results to a specific MSJ FCFS system.
* Section 5: We characterize mean response time in the MMSR using our MARC technique.
* Section 6: We build upon Section 5 to characterize MSJ FCFS mean response time using our RESET technique.
* Section 7: Our results apply to a very broad class of models which we call "finite skip" models, and which we define in this section.
* Section 8: We empirically validate our theoretical results.
## 2 Prior work
The bulk of the prior work we discuss is in Section 2.1, which focuses on specific results in the multiserver-job model. In Section 2.2, we briefly discuss prior work on the saturated system, an important tool in our analysis. Finally, in Section 2.3, we discuss prior work on the M/M/1 with Markovian service rate.
### Multiserver-job model
Theoretical results in the multiserver-job model are limited. We first discuss the primary setting of this paper: a fixed number of servers and FCFS service.
#### 2.1.1 Fixed number of servers, FCFS service
In this setting, most results focus on characterizing the stability region. Rumyantsev and Morozov characterize stability for an MSJ system with an arbitrary distribution of server needs, where the duration distribution is exponential and independent of server need [42]. This result can implicitly be seen as solving the saturated system, which has a product-form stationary distribution in this setting. A setting with two job classes, each with distinct server needs and exponential duration distributions, has also been considered [16; 40]. In this setting, the saturated system was also proven to have a product-form stationary distribution, which was also used to characterize the stability region.
The only setting in which mean response time \(\mathbb{E}[T]\) is known is in the case of \(k=2\) servers and exponential duration independent of server need [3; 11]. In this setting, the exact stationary distribution is known. Mean response time is open in all other settings, including whenever \(k>2\).
#### 2.1.2 Advanced scheduling policies
More advanced scheduling policies for the MSJ system have been investigated, in order to analyze and optimize the stability region and mean response time.
The MaxWeight policy was proven to achieve optimal stability region in the MSJ setting [27]. However, its implementation requires solving an NP-hard optimization problem upon every transition, and it performs frequent preemption. It is also too complex for response time analysis to be tractable. The Randomized Timers policy achieves optimal throughput with no preemption [13; 39], but has very poor empirical mean response time, and no response time analysis.
In some settings, it is possible for a scheduling policy to ensure that all servers are busy whenever there is enough work in the system, which we call "work conservation." Work conservation enables the optimal
stability region to be achieved and mean response time to be characterized. Two examples are ServerFilling and ServerFilling-SRPT scheduling policies [17; 19]. However, the work-conservation-based techniques used in these papers cannot be used to analyze non-work-conserving policies such as FCFS.
#### 2.1.3 Scaling number of servers
The MSJ FCFS model has also been studied in settings where the number of servers, the arrival rate, and the server need distribution all grow in unison to infinity. Analogues of the Halfin-Whitt and non-diminishing-slowdown regimes have been established, proving bounds on the probability of queueing and mean waiting time [22; 49]. These results focus on settings where an _approximate_ work conservation property holds, and there is enough excess capacity that this approximate work conservation is sufficient to determine the first-order behavior of the system. These results do not apply to the \(\lambda\to\lambda^{*}\) limit.
### Prior work on the saturated system
The _saturated system_ is a queueing system which is used as analysis tool to understand the behavior of an underlying non-saturated queueing system [2; 12]. Baccelli and Foss state that it is a "folk theorem" that the threshold of the stability region of the original open queueing system is equivalent to the completion rate of the saturated system: If the completion rate of the saturated system is \(\mu\), then the original system is stable for arrival rate \(\lambda\) if and only if \(\lambda<\lambda^{*}=\mu\)[2]. Baccelli and Foss give sufficient conditions for this folk theorem, known as the "saturation rule," to hold rigorously. These conditions are mild, and are easily shown to hold for the MSJ FCFS system. The strongest stability results in the MSJ FCFS system have either been proven by characterizing the steady state of the saturated system, or are equivalent to such a characterization [16; 18; 42].
Our novel contribution is characterizing the _mean response time_ behavior of an original system by reducing its analysis to the analysis of a saturated system. All previous uses of the saturated system focused on characterizing stability. Specifically, our main theorem, Theorem 4.2, characterizes mean response time in terms of \(\Delta_{\mathrm{Sat}}(y),\lambda^{*}\), and \(Y_{d}^{\mathrm{Sat}}\). These functions and random variables are specific to the saturated system. They are defined in Section 3, and can be calculated in closed-form by analyzing the saturated system, as we walk through in Appendix C.
### M/M/1 with Markovian Service Rate
The M/M/1 with Markovian service rate (MMSR) has been extensively studied since the 1950s, often alongside Markovian arrival rates [5; 6; 20; 24; 29; 33]. A variety of mathematical tools have been applied to the MMSR, including generating function methods, matrix-analytic and matrix-geometric methods, and spectral expansion methods [5; 6; 25; 33]. However, these methods primarily yield _numerical results_, rather than theoretical insights [6; 31].
More is known for special cases of the MMSR system [7; 38]. For instance, the case where the service rate alternates between a high and a low completion rate at some frequency has received specific study. In this case, the generating function can be explicitly solved as the root of a cubic equation [50], but the resulting expression is too complex to yield analytical insights. In this simplified setting, scaling results [34; 35; 36; 47] and monotonicity results [20] have been derived, but those results do not extend to more complex MMSR systems.
By contrast, our MARC technique provides the first explicit characterization of mean response time for the general MMSR system, up to an additive constant.
### Drift method and MARC
The drift method is a popular method for steady-state analysis of queueing models (see, e.g., [8; 22; 28; 49]). In the drift method, one takes a suitable _test function_ (also known as a Lyapunov function) of the system state and computes its instantaneous rate of change starting from each state under the transition dynamics, which is called the drift. The drift can be formally calculated using the _instantaneous generator_, defined in Section 3.9. One then utilizes the fact that the drift of any test function has zero steady-state expectation (Lemma 3.2) to characterize system behavior in steady state, through metrics such as mean
queue length. Through more specialized choices of test function, stronger results such as State Space Collapse can also be proven.
In prior work which analyzes the mean queue length, the test function is usually a quadratic function of the queue length. For instance, when analyzing the MaxWeight policy in the switch setting, an appropriate test function is \(\sum_{i}q_{i}^{2}\), where \(q_{i}\) is the number of jobs present of each class \(i\)[44]. For such a test function to provide useful information about the expected queue length, the system must achieve a constant work completion rate whenever there are enough jobs in the system. This constant work completion rate ensures that the test function's drift depends linearly on the queue length, allowing the mean queue length to be characterized. However, in our MSJ system, the work completion rate is variable regardless of the number of jobs in the system, because servers may always be left empty if a job in the queue requires more servers than are available. As a result, the standard test functions for the drift method do not provide useful information about the MSJ system.
Our innovation is to construct a novel test function that combines the queue length \(q\) and a new quantity called _relative completions_, defined in Section 3.8. Our use of relative completions allows us to ensure that the test functions \(f_{\Delta}\) and \(f_{\Delta}^{\text{MSJ}}\), defined in Definitions 5.1 and 6.1, have drift which depends linearly on the queue length. As a result, we can apply the drift method with our novel test functions to characterize mean queue length in the MSJ system, and hence characterize mean response time.
We call this technique the MArkovian Relative Completions (MARC) technique: using relative completions to define a test function that extends the drift method to systems with a variable work-completion rate.
## 3 Model
In this section, we introduce five queueing models: the multiserver-job (MSJ) model, the M/M/1 with Markovian service rate (MMSR), the At-least-\(k\) (Ak) model, the saturated system, and the simplified saturated system (SSS). The MSJ is the main focus of this paper. Our RESET technique reduces its analysis to analyzing the Ak system. The Ak system is equivalent to a MMSR system whose completion process is controlled by the saturated system. Our MARC technique allows us to analyze this MMSR system. The SSS is a simpler equivalent of the saturated system. We also introduce the concepts of relative completions and the generator approach, which are key to our analysis.
Table 1 describes each of the abbreviations used in this paper.
### Multiserver-job Model
The MSJ model is a queueing model in which each job requests an integer number of servers, the _server need_, for some duration of time, the _service duration_. Each job requires concurrent service on all of its servers throughout its duration. Let \(k\) denote the total number of servers in the system.
We assume that each job's server need and service duration are drawn i.i.d. from some joint distribution. The duration distribution is phase type, and it may depend on the job's server need. This assumption can likely be generalized, which we leave to future work. We assume a Poisson(\(\lambda\)) arrival process.
\begin{table}
\begin{tabular}{c|c|c} Abbreviation & Meaning & Definition \\ \hline MSJ & Multiserver-job & Section 3.1 \\ FCFS & First-come first-served & Section 3.1 \\ MMSR & M/M/1 with Markovian service rate & Section 3.2 \\ Ak & At-least-\(k\) system & Section 3.3 \\ Sat & Saturated system & Section 3.5 \\ SSS & Simplified saturated system & Section 3.11, Appendix B \\ MARC & Markovian relative completions & Section 5 \\ RESET & Reduction to saturated for expected time & Section 6 \\ \end{tabular}
\end{table}
Table 1: Table of abbreviations
We focus on the first-come first-served (FCFS) service discipline. Our RESET technique also applies to many other scheduling policies, as we discuss in Section 7. Under FCFS, jobs are placed into service, one by one, in arrival order, as long as the total server need of the jobs in service is at most \(k\). If a job is reached whose server need would push the total over \(k\), that job does not receive service until sufficient completions occur. We consider head-of-the-line blocking, so no subsequent jobs in arrival order receive service. It has been shown that in the MSJ FCFS setting, there exists a threshold \(\lambda^{*}\), such that the system is stable if and only if \(\lambda<\lambda^{*}\)[2; 12]. We assume that \(\lambda<\lambda^{*}\).
Note that the only jobs eligible for service are the \(k\) oldest jobs in arrival order. We conceptually divide the system into two parts: the _front_ and the _back_. When the total number of jobs in the system is at least \(k\), the front consists of the \(k\)-oldest jobs in the arrival order; otherwise, the front consists of all jobs in the system. The back consists of all jobs that are not in the front. Note that all of the jobs which are in service must be in the front, because at most \(k\) jobs can be in service at a time, and service proceeds in strict FCFS order. The front may also contain some jobs which are not in service, whenever less than \(k\) jobs are in service. All of the jobs in the back are not in service.
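To make the head-of-line blocking rule concrete, here is a minimal Python sketch (our own illustration; the function name and the representation of jobs as a list of server needs are our choices, not the paper's) that computes which jobs are in service from the queue in arrival order:

```python
def jobs_in_service(queue, k):
    """Longest prefix of `queue` servable under FCFS with head-of-line
    blocking. `queue` lists server needs in arrival order (oldest first);
    `k` is the total number of servers."""
    in_service, free = [], k
    for need in queue:
        if need > free:
            break  # head-of-line blocking: no later job may be served
        in_service.append(need)
        free -= need
    return in_service

# With k = 2 servers: a blocked 2-server job at the head of the line
# prevents the 1-server job behind it from being served, even though
# that job would fit on the free server.
assert jobs_in_service([1, 2, 1], k=2) == [1]
assert jobs_in_service([2, 1], k=2) == [2]
assert jobs_in_service([1, 1, 2], k=2) == [1, 1]
```

The first assertion illustrates the non-work-conserving behavior that distinguishes FCFS from the ServerFilling-style policies discussed in Section 2.1.2.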
### M/M/1 with Markovian Service Rate
The MMSR-\(\pi\) system is a queueing system where jobs arrive to the system according to a Poisson process, and complete at a variable rate, where the completions are determined by the transitions of a finite-state Markov chain \(\pi\). We refer to \(\pi\) as the "service process". When a job arrives, it stays in the queue until it reaches the head of the line, entering service. The job then completes when \(\pi\) next undergoes a transition associated with a completion. Jobs are identical until they reach service. The service process \(\pi\) is unaffected by the number of jobs in the queue.
### At-least-\(k\) System
To connect the MSJ FCFS and MMSR systems, we define two systems: the "At-least-\(k\)" (Ak) system, and the "saturated system" in Section 3.5. The Ak model mimics the MSJ model, except that the Ak system always has at least \(k\) jobs present. Specifically, in addition to the primary Poisson(\(\lambda\)) arrival process, whenever there are exactly \(k\) jobs in the system, and a job completes, a new job immediately arrives. The server need and service duration of this job are sampled i.i.d. from the same distribution as the primary arrivals. Due to these extra arrivals, the front of the Ak system always has exactly \(k\) jobs present.
Intuitively, the Ak system should have about \(k\) more jobs present in steady state than the MSJ system. We thus expect the Ak and MSJ systems to have the same asymptotic mean response time, up to an \(O_{\lambda}(1)\) term. We make this intuition rigorous by using our RESET technique to prove Theorem 4.2.
### Running Example
Throughout this section, we will use a running example to clarify notation and concepts. Consider an MSJ setting with \(k=2\) servers, and two classes of jobs: \(2/3\) of jobs have server need \(1\) and duration \(Exp(1)\), and the other \(1/3\) of jobs have server need \(2\) and duration \(Exp(1/2)\).
### Saturated System
The saturated system is a closed multiserver-job system, where completions trigger new arrivals.1 Jobs are served according to the same FCFS service discipline. There are always exactly \(k\) jobs in the system. Whenever a job completes, a new job with i.i.d. server need and service duration is sampled. The state descriptor is just an ordered list of exactly \(k\) jobs.
Footnote 1: Baccelli and Foss [2] consider a system with infinitely many jobs not in service, which is equivalent to our closed system.
In our running example with \(k=2\) servers, the state space of the saturated system consists of all orderings of \(2\) jobs:
\[\mathbb{Y}^{\mathrm{Sat}}=\{[1,1],[1,2],[2,1],[2,2]\}.\]
The leftmost entry in each of the lists is the oldest job in FCFS order. In state \([1,2]\), a \(1\)-server job is in service and a \(2\)-server job is not in service, while in state \([2,1]\), a \(2\)-server job is in service and a \(1\)-server job is not in service.
### Equivalence between MMSR-Sat and At-least-\(k\)
Now we are ready to connect the MMSR and At-least-\(k\) (Ak) systems. Consider the subsystem consisting only of the front of the Ak system, i.e., the \(k\) oldest jobs in the Ak system. This subsystem is stochastically identical to the saturated system. Whenever a job completes at the front of the Ak system, a new job enters the front, either from the back (i.e. the jobs not in the front) or from the auxiliary arrival process, if the back is empty. This matches the saturated system's completion-triggered arrival process.
As a result, the Ak system is stochastically equal to an MMSR-\(\pi\) system whose service process \(\pi\) is identical to the saturated system. We refer to this system as the "MMSR-Sat" system. To clarify this equivalence, assume the Ak system starts in a certain front state \(y\) with an empty back. Then equivalently the MMSR-Sat system starts empty, with its service process in state \(y\). If a job in the Ak system completes its service, a new job is generated, and the same transition occurs in the service process in the MMSR-Sat system. Similarly, assume a job arrives to the Ak system and enters the back. At the same time, a job arrives in the MMSR-Sat system and enters the queue. Through this mapping, the two systems are sample-path equivalent.
The above arguments are summarized in Lemma 3.1 below.
**Lemma 3.1**.: _There exists a coupling under which the front of the Ak system is identical to the Sat system, and the back of the Ak system is identical to the queue of the MMSR-Sat system._
### Notation
**MSJ system state:** A state of the MSJ system consists of a front state, \(y^{\rm MSJ}\), and a number of jobs in the back \(q^{\rm MSJ}\). A job state consists of a server need and a phase of its phase-type duration. The front state \(y^{\rm MSJ}\) is a list of up to \(k\) job states. If \(q^{\rm MSJ}>0\), then \(y^{\rm MSJ}\) must consist of exactly \(k\) job states, while if \(q^{\rm MSJ}=0\), \(y^{\rm MSJ}\) may consist of anywhere from \(0\) to \(k\) job states. Let \(\mathbb{Y}^{\rm MSJ}\) denote the set of all possible front states \(y^{\rm MSJ}\) of the MSJ system. For instance, in our running example, \(\mathbb{Y}^{\rm MSJ}=\{[],[1],[2],[1,1],[1,2],[2,1],[2,2]\}\). Note that in the first three states, the back must be empty, so \(q^{\rm MSJ}\) must equal \(0\).
**MMSR system state:** In the MMSR system, let \(\pi\) denote the Markov chain that modulates the service rate. As a superscript, it signifies "the MMSR system controlled by the Markov chain \(\pi\)." A state of the MMSR-\(\pi\) system consists of a pair \((q^{\pi},y^{\pi})\). The queue length \(q^{\pi}\) is a nonnegative integer. The state \(y^{\pi}\) is a state of the service process \(\pi\), and \(\mathbb{Y}^{\pi}\) is the state space of \(\pi\).
Because the MMSR-Sat system is stochastically equal to the Ak system, with the MMSR-Sat system's queue length equal to the Ak system's back length, we use the superscripts \({}^{\rm Sat}\) and \({}^{\rm Ak}\) interchangeably. A state of the Ak system is a pair \((q^{\rm Ak},y^{\rm Ak})\). In contrast to the MSJ system, \(y^{\rm Ak}\) always consists of exactly \(k\) job states. In particular, \(\mathbb{Y}^{\rm Ak}\subset\mathbb{Y}^{\rm MSJ}\).
**MMSR service process:** When the service process \(\pi\) transitions from state \(y\) to \(y^{\prime}\), there are two possibilities: Either a completion occurs, which we write as \(a=1\), or no completion occurs, which we write as \(a=0\). We therefore define \(\mu^{\pi}_{y,y^{\prime},a}\) to denote the system's transition rate from front state \(y\) to front state \(y^{\prime}\), accompanied by \(a\) completions, where \(a\in\{0,1\}\). For instance, in our running example \(\mu^{\rm Sat}_{[1,1],[1,2],1}=2/3\). Let the total completion rate from state \(y\) be denoted by \(\mu^{\pi}_{y,\cdot,1}=\sum_{y^{\prime}}\mu^{\pi}_{y,y^{\prime},1}\). For instance, in our running example \(\mu^{\rm Sat}_{[1,1],\cdot,1}=2\).
**MSJ service transitions:** Let \(\mu^{\rm MSJ}_{y,y^{\prime},a,b}\) denote a transition rate in the Multiserver-job system, where \(y,y^{\prime}\), and \(a\) have the same meaning as in \(\mu^{\rm Ak}_{y,y^{\prime},a}\). Let \(b=\mathbb{1}_{q>0}\) denote whether this transition is associated with an empty back (\(b=0\)), or an occupied back (\(b=1\)). Note that if \(y\not\in\mathbb{Y}^{\rm Ak}\), then \(b=0\) for all nonzero \(\mu^{\rm MSJ}_{y,y^{\prime},a,b}\), while if \(y\in\mathbb{Y}^{\rm Ak}\), then both values of \(b\) are possible. Note that \(\forall y\in\mathbb{Y}^{\rm Ak},\mu^{\rm MSJ}_{y,y^{\prime},a,1}=\mu^{\rm Ak}_ {y,y^{\prime},a}\).
If a job arrives to the MSJ system and finds that the front state \(y\) has fewer than \(k\) jobs (\(y\not\in\mathbb{Y}^{\rm Ak}\)), a fresh job state is sampled and appended to \(y\). Let \(S\) be a random variable denoting a fresh job state, let \(i\) be a particular fresh job state, let \(p_{i}\) be the probability \(\mathbb{P}(S=i)\), and let \(y\cdot i\) be the new front state with a job in state \(i\) appended. For instance, in the running example, \(p_{1}=2/3,p_{2}=1/3\).
**Steady-state notation:** We will study the time-average steady states of each of these systems, which we write \((Q^{\rm MSJ},Y^{\rm MSJ})\), \((Q^{\pi},Y^{\pi})\), etc. Let \(Y^{\pi}_{d}\) denote the departure-average steady state of the MMSR
service process \(\pi\): the steady-state distribution of the embedded DTMC which samples states after each departure from \(\pi\).
Let \(X^{\pi}\) denote the long-term throughput of the service process \(\pi\). Let \(\lambda^{*}_{\pi}\) denote the threshold of the stability region of the MMSR-\(\pi\) system. The MMSR-\(\pi\) system is stable if and only if \(\lambda<\lambda^{*}_{\pi}\). Note that \(X^{\pi}=\lambda^{*}_{\pi}\) by prior results relating the saturated system to the stability region of the original system [2; 12]. In particular, \(X^{\mathrm{Sat}}=\lambda^{*}_{\mathrm{Sat}}=\lambda^{*}\), where \(\lambda^{*}\) denotes the threshold of the stability region of the MSJ FCFS system. We will typically write \(\lambda^{*}\) to avoid confusion between \(X^{\mathrm{Sat}}\) and a random variable.
A concrete example of this notation is provided in Section 4.1.
### Relative completions
Key to our MARC technique is the novel idea of _relative completions_, which we define for a general MMSR-\(\pi\) system. Let \(y_{1}\) and \(y_{2}\) be two states of the service process \(\pi\). The difference in relative completions between two states \(y_{1}\) and \(y_{2}\) is the long-term difference in expected completions between an instance of the service process starting in state \(y_{1}\) and one starting in \(y_{2}\). Specifically, let \(C_{\pi}(y,t)\) denote the number of completions up to time \(t\) of the service process of \(\pi\) initialized in state \(y\) at time \(t=0\). Then let \(\Delta_{\pi}(y_{1},y_{2})\) denote the relative completions between states \(y_{1}\) and \(y_{2}\):
\[\Delta_{\pi}(y_{1},y_{2})=\lim_{t\to\infty}\mathbb{E}[C_{\pi}(y_{1},t)-C_{\pi }(y_{2},t)].\]
We prove that \(\Delta_{\pi}(y_{1},y_{2})\) always exists and is always finite in Lemma A.1. We also allow \(y_{1}\) and/or \(y_{2}\) to be distributions over states, rather than single states. Specifically, we will often focus on the case where \(y_{2}\), rather than being a single state, is the steady state distribution \(Y^{\pi}\). In this case, note that \(\mathbb{E}[C_{\pi}(Y^{\pi},t)]=X^{\pi}t=\lambda^{*}_{\pi}t\). When it is clear from context, we write \(\Delta_{\pi}(y)\) to denote \(\Delta_{\pi}(y,Y^{\pi})\). The relative completions formula for this case simplifies:
\[\Delta_{\pi}(y)=\Delta_{\pi}(y,Y^{\pi})=\lim_{t\to\infty}\mathbb{E}[C_{\pi}(y, t)]-\lambda^{*}_{\pi}t. \tag{1}\]
The relative completions function \(\Delta_{\pi}(y)\) can be seen as the relative value of a given state \(y\) under a Markov reward process whose state is a state of the service process \(\pi\) and whose reward is the instantaneous completion rate in a given state \(y\).
### Generator
We also make use of the _instantaneous generator_ of each of our queueing systems, which is the stochastic equivalent of the derivative operator. The instantaneous generator is an operator which takes a function from system states to real values, and returns a function from system states to real values. The latter function is known as the _drift_ of the original function.
The generator operator is specific to a given Markov chain. Let \(\eta\) be a Markov chain, and let \(G^{\eta}\) denote the generator operator for \(\eta\), which is defined as follows:
For any real-valued function of the state of \(\eta\), \(f(q,y)\),
\[G^{\eta}\circ f(q,y):=\lim_{t\to 0}\frac{1}{t}\mathbb{E}[f(Q^{\eta}(t),Y^{\eta}( t))-f(q,y)|Q^{\eta}(0)=q,Y^{\eta}(0)=y].\]
Importantly, the expected value of the generator in steady state is zero:
**Lemma 3.2**.: _Let \(f\) be a real-valued function of the state of a Markov chain \(\eta\). Assume that the transition rates of the Markov chain \(\eta\) are uniformly bounded, and \(\mathbb{E}[f(Q^{\eta},Y^{\eta})]<\infty.\) Then_
\[\mathbb{E}_{(q,y)\sim(Q^{\eta},Y^{\eta})}[G^{\eta}\circ f(q,y)]=0. \tag{2}\]
Proof.: Follows from [14, Proposition 3]. Discussion deferred to Appendix A.
We show in Appendix A that (2) holds for the MSJ, MMSR, At-least-\(k\), and Saturated systems, for any \(f(q,y)\) with polynomial dependence on \(q\).
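When the chain is finite and the test function depends only on the chain's state, Lemma 3.2 reduces to the linear-algebra identity \(\pi Qf=0\), where \(Q\) is the generator matrix and \(\pi\) the stationary distribution: the drift vector is exactly \(Qf\), and \(\pi Q=0\). A quick numerical sanity check of this special case (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Q = rng.random((n, n))                  # random positive transition rates
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))     # generator of a random CTMC
A = np.vstack([Q.T, np.ones(n)])
pi = np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]
f = rng.random(n)                       # an arbitrary test function f(y)
print(pi @ (Q @ f))                     # ~0, as Lemma 3.2 predicts
```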
### Asymptotic notation
We use the notation \(O_{\lambda}(f(\lambda))\) to represent a function \(g(\lambda)\) such that
\[\exists\text{ a constant }M\text{ such that }|g(\lambda)|\leq M|f(\lambda)|\quad \forall\lambda,0<\lambda<\lambda^{*}.\]
### Simplified saturated system
While the saturated system is a finite-state system, it can have a very large number of possible states. However, many of the states have identical behavior, and can be combined to reduce the state space. For instance, in our running example, the states \([2,1]\) and \([2,2]\) are nearly identical: in both states just a 2-server job is in service. We therefore simplify the system by combining the two states into the state \([2]\), and delaying sampling the next job until needed.
We refer to the resulting system as the "simplified saturated system" (SSS), in contrast to the original saturated system, which is the focus of the bulk of this paper. SSS is equivalent to the original saturated system, in the sense of Lemma 3.3, stated below.
**Lemma 3.3**.: _There exists a coupling under which the main saturated system and simplified saturated system have identical completions._
The full definition of the SSS, and the proof of the equivalence of SSS to the original saturated system, are given in Appendix B.
The reduction in state space from the SSS can be dramatic. For instance, consider a system where \(k=30\), jobs have server needs 3 or 10, and jobs have exponential duration. The original saturated system has \(2^{30}\) states, while the SSS has just 13 states. We discuss this reduction further in Appendix B.
## 4 Results
In this paper, we give the first analysis of mean response time in the MSJ FCFS system. To do so, we reduce the problem to the analysis of mean response time in an M/M/1 with Markovian service rate (MMSR) in which the saturated system controls the service process (i.e. the At-least-\(k\) system). We call this reduction the RESET technique. Before applying the RESET technique, we start by analyzing the general MMSR-\(\pi\) system.
We prove the first explicit characterization of mean response time in the MMSR. To do so, we use our MARC technique, which is based on the novel concept of _relative completions_ (See Section 3.8).
**Theorem 4.1** (Mean response time asymptotics of MMSR systems).: _In the MMSR-\(\pi\) system, the expected response time in steady state satisfies_
\[\mathbb{E}[T^{\pi}]=\frac{1}{\lambda_{\pi}^{*}}\frac{1+\Delta_{\pi}(Y_{d}^{\pi },Y^{\pi})}{1-\lambda/\lambda_{\pi}^{*}}+O_{\lambda}(1), \tag{3}\]
_where \(\Delta_{\pi}\) is the relative completions function defined in Section 3.8:_
\[\Delta_{\pi}(Y_{d}^{\pi},Y^{\pi}):=\lim_{t\to\infty}\mathbb{E}[C_{\pi}(Y_{d}^{ \pi},t)]-\lambda_{\pi}^{*}t.\]
To understand (3), first note that the dominant term has order \(\Theta(\frac{1}{1-\lambda/\lambda_{\pi}^{*}})\). This is the equivalent of the \(\Theta(\frac{1}{1-\rho})\) behavior seen in simpler systems such as the M/G/1/FCFS. Next, to understand the numerator, examine the \(\Delta_{\pi}(Y_{d}^{\pi},Y^{\pi})\) term. \(\Delta_{\pi}\), the relative completions function, smooths out the irregularities in completion times, so that the function \(q-\Delta_{\pi}(y)\) has a constant negative drift. \(\Delta_{\pi}\) is the analog of the remaining size of the job in service in the M/G/1. When a generic job arrives, it sees a time-average state of the service process, namely \(Y^{\pi}\). When it departs, it leaves behind a departure-average state of the service process, namely \(Y_{d}^{\pi}\). The difference in relative completions between these states captures the asymptotic behavior of mean response time. The overall numerator, \(1+\Delta_{\pi}(Y_{d}^{\pi},Y^{\pi})\), is analogous to the \(\mathbb{E}[S_{e}]\) term in
the M/G/1/FCFS mean response time formula. We walk through calculating \(\Delta_{\pi}(y),\lambda_{\pi}^{*}\), and \(Y_{d}^{\pi}\) explicitly and in closed-form in Appendix C.
Now that we have characterized the mean response time of the MMSR system, we can use this result to characterize the MSJ FCFS system. With our RESET technique, we show that the MSJ FCFS system has the same mean response time, up to an \(O_{\lambda}(1)\) term, as the MMSR system whose service rate is controlled by the saturated system, or equivalently the At-least-\(k\) system.
**Theorem 4.2** (Mean response time asymptotics of MSJ systems).: _In the multiserver-job system, the expected response time in steady state satisfies_
\[\mathbb{E}[T^{\mathrm{MSJ}}]=\frac{1}{\lambda^{*}}\frac{1+\Delta_{\mathrm{Sat} }(Y_{d}^{\mathrm{Sat}},Y^{\mathrm{Sat}})}{1-\lambda/\lambda^{*}}+O_{\lambda}( 1). \tag{4}\]
Empirically, the \(O_{\lambda}(1)\) term is very small, as seen in Fig. 2(a) in Section 8. To clarify the meaning of the \(O_{\lambda}(1)\) term in Theorem 4.2, let us restate the theorem explicitly:
**Theorem 4.3** (Restatment of Theorem 4.2).: _In the multiserver-job system, for any joint duration and server need distribution and for any number of servers \(k\), there exist constants \(c_{\ell}\) and \(c_{h}\) such that for all arrival rates \(\lambda<\lambda^{*}\),_
\[\frac{1}{\lambda^{*}}\frac{1+\Delta_{\mathrm{Sat}}(Y_{d}^{\mathrm{Sat}},Y^{ \mathrm{Sat}})}{1-\lambda/\lambda^{*}}+c_{\ell}\leq\mathbb{E}[T^{\mathrm{MSJ }}]\leq\frac{1}{\lambda^{*}}\frac{1+\Delta_{\mathrm{Sat}}(Y_{d}^{\mathrm{Sat }},Y^{\mathrm{Sat}})}{1-\lambda/\lambda^{*}}+c_{h}.\]
Rather than calculating \(\Delta_{\mathrm{Sat}}(Y_{d}^{\mathrm{Sat}},Y^{\mathrm{Sat}})\) in Theorem 4.2, we can calculate the equivalent value in the simplified saturated system (SSS) (due to Lemma 3.3). Define \(\Delta_{\mathrm{SSS}},Y_{d}^{\mathrm{SSS}}\), and \(Y^{\mathrm{SSS}}\) analogously to the primary saturated system.
**Corollary 4.1**.: _In the MSJ FCFS model,_
\[\mathbb{E}[T^{\mathrm{MSJ}}]=\frac{1}{\lambda^{*}}\frac{1+\Delta_{\mathrm{ SSS}}(Y_{d}^{\mathrm{SSS}},Y^{\mathrm{SSS}})}{1-\lambda/\lambda^{*}}+O_{\lambda}(1).\]
Corollary 4.1 follows from Theorem 4.2 because \(\Delta_{\mathrm{Sat}}(y_{1},y_{2})\) is defined based on the completion times in the primary saturated system, and by Lemma 3.3, the SSS can be coupled to have the same completion times as the primary saturated system.
The quantities \(\Delta_{\mathrm{SSS}}(y),\lambda^{*}\), and \(Y_{d}^{\mathrm{SSS}}\) can be calculated explicitly and in closed-form for any given parameterized distribution of server need and job duration, and any number of servers \(k\), giving an explicit closed-form bound on mean response time. We walk through this calculation in Appendix C, and give the explicit closed-form expressions for a 2-server setting in Appendix C.2, to demonstrate the technique.
### Example for demonstration
We now demonstrate applying Theorem 4.2 and Corollary 4.1 to characterize the asymptotic mean response time of our running example from Section 3.4. See Appendix C for a more extensive example, handling a setting with parameterized completion rates and arrival probabilities.
We start with the MSJ system. First, we convert to the Ak system, whose front has state space \(\mathbb{Y}^{\mathrm{Ak}}=\{[1,1],[1,2],[2,1],[2,2]\}\). By the RESET technique, this only increases mean response time by \(O_{\lambda}(1)\). By Lemma 3.1, the Ak system is identical to an MMSR-Sat system. By Lemma 3.3, the Sat system is equivalent to the Simplified Saturated System (SSS), which has state space \(\mathbb{Y}^{\mathrm{SSS}}=\{[1,1],[1,2],[2]\}\).
For the rest of this section, we focus on the SSS, leaving the superscript implicit. Transitions between these states only happen as a result of completions, leading to the following transition rates:
\[\mu_{[1,1],[1,1],1}=2\cdot\frac{2}{3}=\frac{4}{3},\qquad\mu_{[1,1],[1,2],1}=2\cdot\frac{1}{3}=\frac{2}{3},\qquad\mu_{[1,2],[2],1}=1,\]
\[\mu_{[2],[1,1],1}=\frac{1}{2}\cdot\frac{2}{3}\cdot\frac{2}{3}=\frac{2}{9},\qquad\mu_{[2],[1,2],1}=\frac{1}{2}\cdot\frac{2}{3}\cdot\frac{1}{3}=\frac{1}{9},\qquad\mu_{[2],[2],1}=\frac{1}{2}\cdot\frac{1}{3}=\frac{1}{6}.\]
Now, we can calculate the steady states \(Y^{\rm SSS}\) and \(Y^{\rm SSS}_{d}\) of the SSS's CTMC and DTMC respectively, and calculate the throughput \(X^{\rm SSS}=X^{\rm Sat}=\lambda^{*}\). The vectors are in the order \(\{[1,1],[1,2],[2]\}\):
\[Y=\Big[\frac{1}{5},\frac{1}{5},\frac{3}{5}\Big],\quad Y_{d}=\Big[\frac{4}{9},\frac{2}{9},\frac{1}{3}\Big],\quad X^{\rm SSS}=X^{\rm Sat}=\lambda^{*}=\frac{9}{10}.\]
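These quantities can be reproduced mechanically. The following sketch (our own illustration, using NumPy; state order \([1,1],[1,2],[2]\)) encodes the SSS rates above and recovers \(Y\), \(Y_{d}\), and \(\lambda^{*}\):

```python
import numpy as np

# SSS transition rates mu_{y,y',1}; state order [1,1], [1,2], [2].
# In the SSS, every transition is a completion transition.
R = np.array([[4/3, 2/3, 0.0],
              [0.0, 0.0, 1.0],
              [2/9, 1/9, 1/6]])
mu = R.sum(axis=1)                      # total completion rate per state

Q = R.copy()
np.fill_diagonal(Q, 0.0)                # self-loops do not move the state
np.fill_diagonal(Q, -Q.sum(axis=1))     # CTMC generator matrix
A = np.vstack([Q.T, np.ones(3)])
Y = np.linalg.lstsq(A, np.r_[np.zeros(3), 1.0], rcond=None)[0]

lam_star = Y @ mu                       # throughput: 0.9
Y_d = (Y @ R) / lam_star                # departure-average distribution

print(Y)                                # [0.2, 0.2, 0.6]
print(Y_d)                              # [0.444..., 0.222..., 0.333...]
print(lam_star)                         # 0.9
```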
Now, we can solve for \(\Delta(y)\), defined in (1). To do so, we split up the completions \(\mathbb{E}[C(y,t)]\) into the time until the first completion, and the time after the first completion. For example, starting in state \(y=[1,1]\), the first completion takes an expected \(\frac{1}{2}\) second, during which \(1\) completion occurs, compared to a long-term average of \(\frac{1}{2}\lambda^{*}=\frac{9}{20}\) completions over the same interval. The system then transitions to a new state, with its corresponding \(\Delta(y)\). This gives rise to the following equation:
\[\Delta([1,1])=1-\frac{9}{20}+\frac{2}{3}\Delta([1,1])+\frac{1}{3}\Delta([1,2]).\]
We use the same process to derive a system of equations that uniquely determines \(\Delta(y)\), given in Corollary D.1. We solve for \(\Delta(y)\) for each state \(y\):
\[\Delta([1,1])=1.38,\quad\Delta([1,2])=-0.27,\quad\Delta([2])=-0.37. \tag{5}\]
All decimals are exact. We can then average over the distribution \(Y_{d}\) to find that \(\Delta(Y_{d})=0.43\). Recall that \(\Delta(Y_{d})\) is just shorthand for \(\Delta(Y_{d},Y)\).
We can therefore apply Theorem 4.2 and Corollary 4.1 to characterize the asymptotic mean response time of the original system:
\[\mathbb{E}[T^{\rm MSJ}]=\frac{10}{9}\frac{1.43}{1-\frac{\lambda}{9/10}}+O_{ \lambda}(1).\]
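The entire calculation in this subsection can also be finished numerically. A minimal sketch (our own illustration; the names are ours, and the inputs are restated for self-containment):

```python
import numpy as np

# Running example; state order [1,1], [1,2], [2] (see previous sketch).
R = np.array([[4/3, 2/3, 0.0],
              [0.0, 0.0, 1.0],
              [2/9, 1/9, 1/6]])
rate = R.sum(axis=1)                   # here every transition is a completion
Y = np.array([1/5, 1/5, 3/5])
Y_d = np.array([4/9, 2/9, 1/3])
lam_star = 9/10

# Jump-chain form of the Delta equations (cf. Corollary D.1):
#   Delta(y) = (mu_y - lam_star)/rate_y + sum_{y'} P[y,y'] Delta(y'),
# pinned down by the normalization Y @ Delta = 0. Here mu_y = rate_y.
P = R / rate[:, None]                  # embedded jump probabilities
r = (rate - lam_star) / rate           # excess completions per jump
B = np.vstack([np.eye(3) - P, Y])
Delta = np.linalg.lstsq(B, np.r_[r, 0.0], rcond=None)[0]
print(Delta)                           # [ 1.38 -0.27 -0.37]
print(Y_d @ Delta)                     # 0.43 = Delta(Y_d)

def dominant_term(lam):
    """Dominant term of E[T^MSJ] from Theorem 4.2 / Corollary 4.1."""
    return (1 / lam_star) * (1 + Y_d @ Delta) / (1 - lam / lam_star)
```

The least-squares solve is exact here: \(I-P\) has rank \(n-1\), and appending the normalization row makes the system uniquely solvable.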
## 5 MARC Proofs
We start by analyzing the M/M/1 with Markovian service rate \(\pi\) (MMSR-\(\pi\)). Our main result in this section is the proof of Theorem 4.1, a characterization of the asymptotic mean response time of the MMSR-\(\pi\) system.
The main challenge is choosing an appropriate test function \(f(q,y)\), to leverage (2), the fact that \(\mathbb{E}[G^{\pi}\circ f(Q^{\pi},Y^{\pi})]=0\), to give an expression for \(\mathbb{E}[Q^{\pi}]\). To gain information about \(\mathbb{E}[Q^{\pi}]\) via this approach, it is natural to choose a function \(f\) which is quadratic in \(q\), because \(G^{\pi}\) is effectively a derivative. However, if we choose \(f_{1}(q,y)=\frac{1}{2}q^{2}\), the expression \(G^{\pi}\circ f_{1}(q,y)\) will have cross-terms in which both \(q\) and \(y\) appear, preventing further progress.
Instead, our key idea is to use relative completions \(\Delta_{\pi}\) in our test function:
**Definition 5.1**.: _Let \(f^{\pi}_{\Delta}(q,y)=\frac{1}{2}(q-\Delta_{\pi}(y))^{2}\)._
The \(\Delta_{\pi}(y)\) term smooths out the fluctuations in the system's service rate, so that the quantity \(q-\Delta_{\pi}(y)\) has a constant drift of \(-\lambda^{*}_{\pi}\) whenever \(q>0\).
This choice of test function ensures that \(G^{\pi}\circ f^{\pi}_{\Delta}(q,y)\) separates into a linear term dependent only on \(q\) and a term dependent only on \(y\). The separation allows us to characterize \(\mathbb{E}[Q^{\pi}]\), and hence \(\mathbb{E}[T^{\pi}]\), in Theorem 4.1.
Let \(u=\mathbb{1}\{q=0\wedge a=1\}\) denote the unused service caused by a given transition. Only completion transitions (\(a=1\)) can cause unused service.
We start by decomposing \(G^{\pi}\circ f^{\pi}_{\Delta}(q,y)\), into a term linearly dependent on \(q\), and terms dependent only on \(y,a\), and \(u\):
**Lemma 5.1**.: _For any state \((q,y)\) of the MMSR-\(\pi\) system,_
\[G^{\pi}\circ f^{\pi}_{\Delta}(q,y)=(\lambda-\lambda^{*}_{\pi})q-\lambda\Delta_ {\pi}(y)+\frac{1}{2}\lambda+\sum_{y^{\prime},a}\mu^{\pi}_{y,y^{\prime},a}\left( \frac{1}{2}(-a+u-\Delta_{\pi}(y^{\prime}))^{2}-\frac{1}{2}\Delta_{\pi}(y)^{2} \right). \tag{6}\]
Proof deferred to Appendix D.

We can now characterize the mean response time of the MMSR-\(\pi\) system. We will use the fact that, by Lemma 5.1, \(G^{\pi}\circ f_{\Delta}^{\pi}(q,y)\) decomposes into a term linearly dependent on the queue length \(q\), and terms that do not depend on \(q\) except through the unused service \(u\). We define \(c_{0}(y,q)\) to comprise the latter group of terms. We also define \(c_{1}(y)\) and \(c_{2}(y)\), which are simpler functions that are closely related to \(c_{0}(y,q)\).
**Definition 5.2**.: _Define \(c_{0}(y,q),c_{1}(y),\) and \(c_{2}(y)\) as follows:_
\[c_{0}(y,q) =G^{\pi}\circ f_{\Delta}^{\pi}(q,y)-(\lambda-\lambda_{\pi}^{*})q\] \[=-\lambda\Delta_{\pi}(y)+\frac{1}{2}\lambda+\sum_{y^{\prime},a} \mu_{y,y^{\prime},a}^{\pi}\left(\frac{1}{2}(-a+u-\Delta_{\pi}(y^{\prime}))^{2} -\frac{1}{2}\Delta_{\pi}(y)^{2}\right),\] \[c_{1}(y) =-\lambda\Delta_{\pi}(y)+\frac{1}{2}\lambda+\sum_{y^{\prime},a} \mu_{y,y^{\prime},a}^{\pi}\left(\frac{1}{2}(-a-\Delta_{\pi}(y^{\prime}))^{2} -\frac{1}{2}\Delta_{\pi}(y)^{2}\right),\] \[c_{2}(y) =c_{1}(y)-G^{\pi}\circ h(y),\text{ where }h(y)=\frac{1}{2} \Delta_{\pi}(y)^{2}\] \[=-\lambda\Delta_{\pi}(y)+\frac{1}{2}\lambda+\sum_{y^{\prime},a} \mu_{y,y^{\prime},a}^{\pi}\left(\frac{1}{2}a^{2}+a\Delta_{\pi}(y^{\prime}) \right).\]
We will show that these functions' expected values, \(\mathbb{E}[c_{0}(Y^{\pi},Q^{\pi})]\), \(\mathbb{E}[c_{1}(Y^{\pi})]\), and \(\mathbb{E}[c_{2}(Y^{\pi})]\), are all equal up to an \(O_{\lambda}(1-\frac{\lambda}{\lambda_{\pi}^{*}})\) error. This fact is crucial to our proof of Theorem 4.1.
**Theorem 4.1** (Mean response time asymptotics of MMSR systems).: _In the MMSR-\(\pi\) system, the expected response time in steady state satisfies_
\[\mathbb{E}[T^{\pi}]=\frac{1}{\lambda_{\pi}^{*}}\frac{1+\Delta_{\pi}(Y_{d}^{ \pi},Y^{\pi})}{1-\lambda/\lambda_{\pi}^{*}}+O_{\lambda}(1). \tag{7}\]
Proof.: In this proof we omit \(\pi\) in the subscript of \(\Delta_{\pi}(y)\) and in the superscript of \(\mu_{y,y^{\prime},a}^{\pi}\). We start from Lemma 5.1, which states that
\[G^{\pi}\circ f(q,y)=(\lambda-\lambda_{\pi}^{*})q+c_{0}(y,q).\]
Applying Lemma 3.2, we find that
\[0 =\mathbb{E}[G^{\pi}\circ f(Q^{\pi},Y^{\pi})]=(\lambda-\lambda_{ \pi}^{*})\mathbb{E}[Q^{\pi}]+\mathbb{E}[c_{0}(Y^{\pi},Q^{\pi})],\] \[\mathbb{E}[Q^{\pi}] =\frac{\mathbb{E}[c_{0}(Y^{\pi},Q^{\pi})]}{\lambda_{\pi}^{*}- \lambda}.\]
We therefore focus on \(c_{0}(y,q)\): By characterizing \(\mathbb{E}[c_{0}(Y^{\pi},Q^{\pi})]\), we will characterize \(\mathbb{E}[Q^{\pi}]\).
Let us separate out the terms where \(u\) appears in \(c_{0}(y,q)\) from the terms without \(u\):
\[c_{0}(y,q)-c_{1}(y)=\sum_{y^{\prime},a}\mu_{y,y^{\prime},a}u\left(\frac{1}{2}u- a-\Delta(y^{\prime})\right). \tag{8}\]
Note that in the time-average steady state \(Y^{\pi}\), the fraction of service-process completions that occur while the queue is empty (i.e. where \(u=1\)) is \(1-\frac{\lambda}{\lambda_{\pi}^{*}}\), because \(\lambda\) jobs arrive per second, and \(\lambda_{\pi}^{*}\) service-process completions occur per second. As a result,
\[E_{y\sim Y^{\pi}}\big{[}\sum_{y^{\prime},a}\mu_{y,y^{\prime},a}u\big{]}=1- \frac{\lambda}{\lambda_{\pi}^{*}}.\]
Note that \(a\leq 1\) and \(u\leq 1\), because at most \(1\) job completes at a time. Note that \(\Delta(y^{\prime})\) is bounded by a constant over all \(y^{\prime}\), because \(y^{\prime}\in\mathbb{Y}^{\pi}\), which is a finite state space. Thus, the \(u/2-a-\Delta(y^{\prime})\) term in (8) is bounded by a constant. As a result, (8) contributes \(O_{\lambda}(1-\frac{\lambda}{\lambda_{\pi}^{*}})\) to \(\mathbb{E}[c_{0}(Y^{\pi},Q^{\pi})]\):
\[\mathbb{E}[c_{1}(Y^{\pi})-c_{0}(Y^{\pi},Q^{\pi})]=O_{\lambda}(1-\lambda/ \lambda_{\pi}^{*}).\]
Next, recall that \(c_{2}(y):=c_{1}(y)-G^{\pi}\circ h(y)\). By Lemma 3.2, \(\mathbb{E}[G^{\pi}\circ h(Y^{\pi})]=0\), so \(\mathbb{E}[c_{2}(Y^{\pi})]=\mathbb{E}[c_{1}(Y^{\pi})]\). Let us now simplify \(c_{2}(y)\), using the fact that \(a=0\) or \(1\):
\[c_{2}(y) =-\lambda\Delta(y)+\frac{1}{2}\lambda+\sum_{y^{\prime},a}\mu_{y, y^{\prime},a}^{\pi}\left(\frac{1}{2}a^{2}+a\Delta_{\pi}(y^{\prime})\right)\] \[=-\lambda\Delta(y)+\frac{1}{2}\lambda+\frac{1}{2}\mu_{y,\cdot,1} +\sum_{y^{\prime}}\mu_{y,y^{\prime},1}\Delta(y^{\prime}).\]
We now apply Lemma D.3 to simplify the summation term of \(c_{2}(y)\). Lemma D.3 states that
\[\frac{1}{\lambda_{\pi}^{*}}\mathbb{E}_{y\sim Y^{\pi}}[\mu_{y,y^{\prime},1}^{ \pi}]=\mathbb{P}(Y_{d}^{\pi}=y^{\prime}).\]
Thus, taking the expectation of the summation term of \(c_{2}(y)\) over \(y\sim Y^{\pi}\), we find that
\[\mathbb{E}_{y\sim Y^{\pi}}[\sum_{y^{\prime}}\mu_{y,y^{\prime},1} \Delta(y^{\prime})]=\lambda_{\pi}^{*}\sum_{y^{\prime}}\mathbb{P}(Y_{d}^{\pi}=y ^{\prime})\Delta(y^{\prime})=\lambda_{\pi}^{*}\Delta(Y_{d}^{\pi}),\] \[\mathbb{E}[c_{2}(Y^{\pi})]=\mathbb{E}[-\lambda\Delta(Y^{\pi})+ \frac{1}{2}(\mu_{Y^{\pi},\cdot,1}+\lambda)+\lambda_{\pi}^{*}\Delta(Y_{d}^{\pi })].\]
Now note that \(\mathbb{E}[\Delta(Y^{\pi})]=0\), \(\mathbb{E}[\mu_{Y^{\pi},\cdot,1}]=\lambda_{\pi}^{*}\), and \(\lambda=\lambda_{\pi}^{*}+O_{\lambda}(1-\frac{\lambda}{\lambda_{\pi}^{*}})\):
\[\mathbb{E}[c_{1}(Y^{\pi})] =\mathbb{E}[c_{2}(Y^{\pi})]=\lambda_{\pi}^{*}+\lambda_{\pi}^{*}\Delta(Y_{d}^{\pi})+O_{\lambda}(1-\frac{\lambda}{\lambda_{\pi}^{*}}). \tag{9}\] \[\mathbb{E}[c_{0}(Y^{\pi},Q^{\pi})] =\mathbb{E}[c_{1}(Y^{\pi})]+O_{\lambda}(1-\frac{\lambda}{\lambda_{\pi}^{*}})=\mathbb{E}[c_{2}(Y^{\pi})]+O_{\lambda}(1-\frac{\lambda}{\lambda_{\pi}^{*}}).\] \[\mathbb{E}[Q^{\pi}] =\frac{\mathbb{E}[c_{0}(Y^{\pi},Q^{\pi})]}{\lambda_{\pi}^{*}-\lambda}=\frac{\lambda_{\pi}^{*}+\lambda_{\pi}^{*}\Delta(Y_{d}^{\pi})}{\lambda_{\pi}^{*}-\lambda}+O_{\lambda}(1)=\frac{\Delta(Y_{d}^{\pi})+1}{1-\lambda/\lambda_{\pi}^{*}}+O_{\lambda}(1).\]
Now, we apply Little's Law, which states that \(\mathbb{E}[T^{\pi}]=\frac{1}{\lambda}\mathbb{E}[Q^{\pi}]\):
\[\mathbb{E}[T^{\pi}]=\frac{1}{\lambda}\frac{1+\Delta(Y_{d}^{\pi})}{1-\lambda/ \lambda_{\pi}^{*}}+O_{\lambda}\left(\frac{1}{\lambda}\right).\]
Note that for any \(x\), \(\frac{1}{\lambda}\frac{x}{1-\lambda/\lambda^{*}}=\frac{1}{\lambda^{*}}\frac{x }{1-\lambda/\lambda^{*}}+\frac{x}{\lambda}\), so
\[\mathbb{E}[T^{\pi}]=\frac{1}{\lambda_{\pi}^{*}}\frac{1+\Delta(Y_{d}^{\pi})}{1-\lambda/\lambda_{\pi}^{*}}+O_{\lambda}\left(\frac{1}{\lambda}\right). \tag{10}\]
Note that in the \(\lambda\to\lambda_{\pi}^{*}\) limit, \(O_{\lambda}(\frac{1}{\lambda})=O_{\lambda}(1)\). Consider the \(\lambda\to 0\) limit: \(\mathbb{E}[T^{\pi}]\) is bounded for small \(\lambda\). Likewise, \(\frac{1}{\lambda_{\pi}^{*}}\frac{1+\Delta(Y_{d}^{\pi})}{1-\lambda/\lambda_{ \pi}^{*}}\) is bounded for small \(\lambda\). As a result, the two differ by \(O_{\lambda}(1)\):
\[\mathbb{E}[T^{\pi}]=\frac{1}{\lambda_{\pi}^{*}}\frac{1+\Delta(Y_{d}^{\pi})}{1-\lambda/\lambda_{\pi}^{*}}+O_{\lambda}(1).\qed\]
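As an informal numerical check of Theorem 4.1 (separate from the proof), one can simulate the MMSR system driven by the running example's SSS and compare \(\mathbb{E}[T]=\mathbb{E}[Q]/\lambda\) against the dominant term. A minimal Gillespie-style sketch, our own illustration with self-chosen names and the values computed in Section 4.1:

```python
import numpy as np

rng = np.random.default_rng(1)

# SSS of the running example; state order [1,1], [1,2], [2] (Section 4.1).
R = np.array([[4/3, 2/3, 0.0],
              [0.0, 0.0, 1.0],
              [2/9, 1/9, 1/6]])
rate = R.sum(axis=1)
P = R / rate[:, None]
lam_star, Delta_Yd = 9/10, 0.43          # from Section 4.1

def simulate_mmsr(lam, horizon=500_000.0):
    """Time-average queue length of the MMSR-SSS system, Gillespie-style."""
    t, q, y, area = 0.0, 0, 0, 0.0
    while t < horizon:
        total = lam + rate[y]
        dt = rng.exponential(1.0 / total)
        area += q * dt
        t += dt
        if rng.random() < lam / total:
            q += 1                        # Poisson(lam) arrival
        else:
            y = rng.choice(3, p=P[y])     # every SSS transition completes a job
            if q > 0:
                q -= 1                    # head-of-queue job departs
            # if q == 0, the completion is "unused service"
    return area / horizon

for lam in (0.5, 0.7, 0.85):
    ET_sim = simulate_mmsr(lam) / lam     # Little's law: E[T] = E[Q]/lam
    ET_dom = (1 / lam_star) * (1 + Delta_Yd) / (1 - lam / lam_star)
    print(f"lam={lam}: simulated E[T]={ET_sim:.2f}, dominant term={ET_dom:.2f}")
```

Per Theorem 4.1, the gap between the two printed quantities should remain \(O_{\lambda}(1)\) as \(\lambda\to\lambda^{*}\); near saturation, longer horizons are needed for a stable estimate.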
## 6 RESET Proofs
To characterize the asymptotic behavior of mean response time of the MSJ system, we use the At-least-\(k\) (Ak) system, which is stochastically equal to the MMSR-Sat system. The MARC results from Section 5 allow us to characterize the MMSR-Sat system. To prove that the MSJ FCFS and Ak systems have the same asymptotic mean response time behavior, our key idea is to show that \(Y^{\mathrm{MSJ}}\) and \(Y^{\mathrm{Ak}}\), the steady states of their fronts, are "almost identical."
To formalize and prove the relationship between \(Y^{\mathrm{MSJ}}\) and \(Y^{\mathrm{Ak}}\), we design a coupling in Section 6.1 between the MSJ system and the Ak system. We use a renewal-reward argument based on busy periods to prove Lemma 6.2, which states that under the coupling, \(\mathbb{P}(Y^{\mathrm{MSJ}}\neq Y^{\mathrm{Ak}})=O_{\lambda}(1-\frac{\lambda}{ \lambda^{*}})\).
Then, in Section 6.2, we combine Theorem 4.1 and Lemma 6.2 to prove Theorem 4.2, our main result, in which we give the first analysis of the asymptotic mean response time in the MSJ system, by reduction to the saturated system. The proof of Theorem 4.2 parallels the proof of Theorem 4.1, using Lemma 6.2 to show that the corresponding steps hold for the MSJ system.
We will make use of a test function \(f_{\Delta}^{\mathrm{MSJ}}(q,y)\) for the multiserver-job system which is similar to \(f_{\Delta}^{\pi}(q,y)\), which was defined in Definition 5.1.
**Definition 6.1**.: _For states \(y\in\mathbb{Y}^{\mathrm{Ak}}\),_
\[f_{\Delta}^{\mathrm{MSJ}}(q,y):=f_{\Delta}^{\mathrm{Ak}}(q,y)=f_{\Delta}^{ \mathrm{Sat}}(q,y).\]
_Otherwise,_
\[f_{\Delta}^{\mathrm{MSJ}}(q,y):=0.\]
Importantly, \(G^{\mathrm{MSJ}}\circ f_{\Delta}^{\mathrm{MSJ}}(q,y)\) is similar to \(G^{\mathrm{Ak}}\circ f_{\Delta}^{\mathrm{Ak}}(q,y)\):
**Lemma 6.1**.: \[G^{\mathrm{MSJ}}\circ f_{\Delta}^{\mathrm{MSJ}}(q,y)=\mathbb{1}_{q>0}G^{ \mathrm{Ak}}\circ f_{\Delta}^{\mathrm{Ak}}(q,y)+\mathbb{1}_{q=0}O_{\lambda}( 1).\]
Proof deferred to Appendix E.
### Coupling between At-least-\(k\) and MSJ
To show that the Ak system and the MSJ system have identical asymptotic mean response time, we define the following coupling of the two systems. We let the arrivals of the two systems happen at the same time. We couple the transitions of their front states based on their joint state \((q^{\mathrm{MSJ}},y^{\mathrm{MSJ}},q^{\mathrm{Ak}},y^{\mathrm{Ak}})\). If \(y^{\mathrm{MSJ}}=y^{\mathrm{Ak}}\), \(q^{\mathrm{MSJ}}>0\), and \(q^{\mathrm{Ak}}>0\), the completions happen at the same time in both systems, the same jobs complete, the same job phase transitions occur, and the jobs entering the front are the same. We call the two systems "merged" during such a time period. Note that under this coupling, if the two systems become merged, they will stay merged until \(q^{\mathrm{MSJ}}=0\) or \(q^{\mathrm{Ak}}=0\). If the systems are not merged, the two systems have independent completions and phase transitions, and independently sampled jobs.
The two systems transition according to synchronized Poisson timers whenever they are merged, and independent Poisson timers otherwise. Because all transitions are exponentially distributed, this poses no obstacle to the coupling.
We want to show that under this coupling, the two systems spend almost all of their time merged, in the limit as \(\lambda\to\lambda^{*}\). Specifically, we will show that the fraction of time in which the two systems are _unmerged_ is \(O_{\lambda}(1-\frac{\lambda}{\lambda^{*}})\). This implies Lemma 6.2, which is the key lemma we need for our main RESET result, Theorem 4.2.
**Lemma 6.2** (Tight coupling).: _In the MSJ system, for any \(\lambda<\lambda^{*}\), we have the following two properties:_
1. _Property 1:_ \(P(Q^{\mathrm{MSJ}}=0)=O_{\lambda}(1-\frac{\lambda}{\lambda^{*}})\)_._
2. _Property 2:_ \(P(Y^{\mathrm{MSJ}}\neq Y^{\mathrm{Ak}})=O_{\lambda}(1-\frac{\lambda}{\lambda^{* }})\)_._
_where property 2 holds under the coupling in Section 6.1._
To prove Lemma 6.2, we prove two key lemmas:
* Lemma 6.3: Whenever the two systems are unmerged, the expected time until the systems become merged is \(O_{\lambda}(1)\).
* Lemma 6.4: Whenever the two systems are merged, the expected time for which they stay merged is \(\Omega_{\lambda}(\frac{1}{1-\lambda/\lambda^{*}})\).
We then use a renewal-reward approach to prove Lemma 6.2.
**Lemma 6.3** (Quick merge).: _From any joint MSJ, Ak state, for any \(\epsilon>0\), under the coupling above, the expected time until \(y^{\mathrm{MSJ}}=y^{\mathrm{Ak}}\), \(q^{\mathrm{MSJ}}\geq k+1\), and \(q^{\mathrm{Ak}}\geq k+1\) is at most \(m_{1}(\epsilon)\) for some \(m_{1}(\epsilon)\) independent of the arrival rate \(\lambda\) and initial joint states, given that \(\lambda\in[\epsilon,\lambda^{*})\)._
**Lemma 6.4** (Long merged period).: _From any joint MSJ, Ak state such that \(y^{\mathrm{MSJ}}=y^{\mathrm{Ak}}\), \(q^{\mathrm{MSJ}}\geq k+1\), and \(q^{\mathrm{Ak}}\geq k+1\), the expected time until \(q^{\mathrm{MSJ}}=0\), \(q^{\mathrm{Ak}}=0\), or \(y^{\mathrm{MSJ}}\neq y^{\mathrm{Ak}}\), is at least \(\frac{m_{2}}{1-\lambda/\lambda^{*}}\) for some \(m_{2}\) independent of the arrival rate \(\lambda\) and initial joint states, given that \(\lambda<\lambda^{*}\)._
Proofs deferred to Appendix G.

Using Lemmas 6.3 and 6.4, we can prove Lemma 6.2:
Proof.: Let \(\epsilon=\frac{\lambda^{*}}{2}\). Note that if \(\lambda<\epsilon\), the properties are trivial: \(O_{\lambda}(1-\frac{\lambda}{\lambda^{*}})\equiv O_{\lambda}(1)\), and probabilities are bounded. Therefore, we will focus on the case where \(\lambda\geq\epsilon\), where we can apply Lemmas 6.3 and 6.4.
Let us define a _good period_ to begin when \(Y^{\mathrm{MSJ}}(t)=Y^{\mathrm{Ak}}(t)\), \(Q^{\mathrm{MSJ}}(t)\geq k+1\) and \(Q^{\mathrm{Ak}}(t)\geq k+1\), and end when \(Q^{\mathrm{MSJ}}(t)=0\) or \(Q^{\mathrm{Ak}}(t)=0\). Let a _bad period_ be the time between two good periods. Note that throughout a good period, the front states are merged (\(Y^{\mathrm{MSJ}}(t)=Y^{\mathrm{Ak}}(t)\)) and both queues are nonempty.
To bound the fraction of time that the joint system is in a good period, we introduce the concept of a "\(y^{*}\)-cycle." Let \(y^{*}\) be an arbitrary state in \(\mathbb{Y}^{\mathrm{Ak}}\). Let a \(y^{*}\)-cycle be a renewal cycle whose renewal points are moments when a bad period begins, and \(Y^{\mathrm{MSJ}}(t)=Y^{\mathrm{Ak}}(t)=y^{*}\), and \(Q^{\mathrm{MSJ}}(t)=Q^{\mathrm{Ak}}(t)=0\), for some designated state \(y^{*}\). We will show that a \(y^{*}\)-cycle has finite mean time. Given that fact, we can apply renewal reward to derive the equations below:
\[P(Q^{\mathrm{MSJ}}=0) =\frac{\mathbb{E}[Q^{\mathrm{MSJ}}(t)=0\text{ time per }y^{*} \text{-cycle}]}{\mathbb{E}[\text{total time per }y^{*}\text{-cycle}]}, \tag{11}\] \[P(Y^{\mathrm{MSJ}}\neq Y^{\mathrm{Ak}}) =\frac{\mathbb{E}[Y^{\mathrm{MSJ}}(t)\neq Y^{\mathrm{Ak}}(t)\text { time per }y^{*}\text{-cycle}]}{\mathbb{E}[\text{total time per }y^{*}\text{-cycle}]}. \tag{12}\]
Note that \(Q^{\mathrm{MSJ}}(t)=0\) or \(Y^{\mathrm{MSJ}}(t)\neq Y^{\mathrm{Ak}}(t)\) only during a bad period, so the two probabilities in (11) and (12) are both bounded by the fraction of time spent in bad periods. By Lemma 6.3 and Lemma 6.4, the expected length of a bad period is at most \(m_{1}\) and the expected length of a good period is at least \(\frac{m_{2}}{1-\lambda/\lambda^{*}}\), conditioned on any initial joint state. Let \(Z\) be a random variable denoting the number of good periods in a \(y^{*}\) cycle. Note that good and bad periods alternate.
\[\mathbb{E}[\text{total time per }y^{*}\text{-cycle}] \geq\frac{m_{2}}{1-\lambda/\lambda^{*}}\mathbb{E}[Z],\] \[\mathbb{E}[\text{bad period time per }y^{*}\text{-cycle}] \leq m_{1}\mathbb{E}[Z].\]
If a \(y^{*}\)-cycle has finite mean time, then we also have \(\mathbb{E}[Z]<\infty\), because each good period and each bad period takes a positive time. Plugging the above inequalities into (11) and (12), we derive Properties 1 and 2:
\[P(Q^{\mathrm{MSJ}}=0)\leq\frac{m_{1}}{m_{2}}\left(1-\frac{\lambda}{\lambda^{*} }\right),\qquad P(Y^{\mathrm{MSJ}}\neq Y^{\mathrm{Ak}})\leq\frac{m_{1}}{m_{2}} \left(1-\frac{\lambda}{\lambda^{*}}\right).\]
It remains to show that a \(y^{*}\)-cycle has finite mean time. We first use a Lyapunov argument to show that the joint states of the two systems return to a bounded set in a finite mean time. Consider the Lyapunov function \(f^{\mathrm{MSJ}}_{\Delta}(q^{\mathrm{MSJ}},y^{\mathrm{MSJ}})+f^{\mathrm{Ak}}_{ \Delta}(q^{\mathrm{Ak}},y^{\mathrm{Ak}})\). Its drift is:
\[G^{\mathrm{MSJ},\mathrm{Ak}}\circ\left(f^{\mathrm{MSJ}}_{\Delta}(q^{\mathrm{ MSJ}},y^{\mathrm{MSJ}})+f^{\mathrm{Ak}}_{\Delta}(q^{\mathrm{Ak}},y^{\mathrm{Ak}}) \right)=G^{\mathrm{MSJ}}\circ f^{\mathrm{MSJ}}_{\Delta}(q^{\mathrm{MSJ}},y^{ \mathrm{MSJ}})+G^{\mathrm{Ak}}\circ f^{\mathrm{Ak}}_{\Delta}(q^{\mathrm{Ak}},y ^{\mathrm{Ak}}).\]
Applying Lemma 5.1 to the Ak system,
\[G^{\mathrm{Ak}}\circ f_{\Delta}^{\mathrm{Ak}}(q^{\mathrm{Ak}},y^{\mathrm{Ak}})=( \lambda-\lambda^{*})q^{\mathrm{Ak}}+c_{0}(y^{\mathrm{Ak}},q^{\mathrm{Ak}}),\]
where \(c_{0}(y,q)\) is defined in Definition 5.2. Note that \(c_{0}(y,q)\) is a bounded function because \(\Delta(y)\) is bounded, by Lemma A.1. Let \(c_{\max}^{\mathrm{Ak}}\) be the maximum of \(c_{0}(y,q)\). For all \(y^{\mathrm{Ak}},q^{\mathrm{Ak}}\),
\[G^{\mathrm{Ak}}\circ f_{\Delta}^{\mathrm{Ak}}(q^{\mathrm{Ak}},y^{\mathrm{Ak} })\leq(\lambda-\lambda^{*})q^{\mathrm{Ak}}+c_{\max}^{\mathrm{Ak}}.\]
By similar reasoning, applying Lemma 6.1, there exists a \(c_{\max}^{\mathrm{MSJ}}\) such that
\[G^{\mathrm{MSJ}}\circ f_{\Delta}^{\mathrm{MSJ}}(q^{\mathrm{MSJ}},y^{\mathrm{MSJ}})\leq(\lambda-\lambda^{*})q^{\mathrm{MSJ}}+c_{\max}^{\mathrm{MSJ}}.\]
Let \(c_{\max}=\max(c_{\max}^{\mathrm{Ak}},c_{\max}^{\mathrm{MSJ}})\). Consider any \(q^{\mathrm{Ak}}\geq\frac{2c_{\max}+1}{\lambda^{*}-\lambda}\). Then for any \(y^{\mathrm{Ak}}\),
\[G^{\mathrm{Ak}}\circ f_{\Delta}^{\mathrm{Ak}}(q^{\mathrm{Ak}},y^{\mathrm{Ak} })\leq-c_{\max}-1.\]
Similarly, for any \(q^{\mathrm{MSJ}}\geq\frac{2c_{\max}+1}{\lambda^{*}-\lambda}\) and any \(y^{\mathrm{MSJ}}\),
\[G^{\mathrm{MSJ}}\circ f_{\Delta}^{\mathrm{MSJ}}(q^{\mathrm{MSJ}},y^{\mathrm{ MSJ}})\leq-c_{\max}-1.\]
Let \(c_{\mathrm{cap}}=\max\{\frac{2c_{\max}+1}{\lambda^{*}-\lambda},k+1\}\). We define the bounded set \(\mathbb{S}\) as
\[\mathbb{S}=\left\{(q^{\mathrm{MSJ}},q^{\mathrm{Ak}},y^{\mathrm{MSJ}},y^{ \mathrm{Ak}})\colon q^{\mathrm{MSJ}}\leq c_{\mathrm{cap}},q^{\mathrm{Ak}}\leq c _{\mathrm{cap}}\right\}.\]
By the calculation above, outside \(\mathbb{S}\),
\[G^{\mathrm{MSJ}}f_{\Delta}^{\mathrm{MSJ}}(q^{\mathrm{MSJ}},y^{\mathrm{MSJ}})+G ^{\mathrm{Ak}}f_{\Delta}^{\mathrm{Ak}}(q^{\mathrm{Ak}},y^{\mathrm{Ak}})\leq-1.\]
In particular, outside of \(\mathbb{S}\), either \(q^{\mathrm{MSJ}}>c_{\mathrm{cap}}\) or \(q^{\mathrm{Ak}}>c_{\mathrm{cap}}\), yielding a drift term \(\leq-c_{\max}-1\), which outweighs the other term, whose \(q\) may be small. Thus, by the Foster-Lyapunov theorem [30, Theorem A.4.1], the system returns to \(\mathbb{S}\) in finite mean time.
We call a period of time inside the bounded set \(\mathbb{S}\) an _\(\mathbb{S}\)-visit_. Each \(\mathbb{S}\)-visit has a finite mean time because there is a positive probability of having a lot of arrivals in the next second and leaving \(\mathbb{S}\). Moreover, as proved above using the Lyapunov argument, the time between two \(\mathbb{S}\)-visits has finite mean.
Each \(\mathbb{S}\)-visit has a positive probability of ending the \(y^{*}\)-cycle. To prove this, we construct a positive probability sample path of beginning a good period with \(q^{\mathrm{MSJ}}=q^{\mathrm{Ak}}\) and ending the good period in \((0,0,y^{*},y^{*})\), while remaining in \(\mathbb{S}\).
* First, we have a lot of completions in the two systems, completely emptying both. \(q^{\mathrm{MSJ}}=q^{\mathrm{Ak}}=|y^{\mathrm{MSJ}}|=0\). Next, \(k\) jobs arrive. Now \(q^{\mathrm{Ak}}=k\) and \(q^{\mathrm{MSJ}}=0\). During this time \(y^{\mathrm{MSJ}}\neq y^{\mathrm{Ak}}\).
* Then \(k\) jobs complete in the Ak system, no jobs complete in the MSJ system, and the newly generated Ak jobs are sampled such that \(y^{\mathrm{MSJ}}=y^{\mathrm{Ak}}\), while \(q^{\mathrm{MSJ}}=q^{\mathrm{Ak}}=0\).
* Next, \(k+1\) jobs arrive, and a good period begins.
* Finally, \(k+1\) jobs complete in both systems, ending with \(y^{\mathrm{MSJ}}=y^{\mathrm{Ak}}=y^{*}\), and \(q^{\mathrm{MSJ}}=q^{\mathrm{Ak}}=0\). Now a \(y^{*}\)-cycle ends, and the next begins.
All of these events have strictly positive probability and are independent of each other, so their joint occurrence has strictly positive probability as well. Thus, the length of a \(y^{*}\)-cycle is bounded by a geometric number of \(\mathbb{S}\)-visits, each of which has finite mean time, completing the proof.
### Proof of Theorem 4.2
We are now ready to prove our main theorem, Theorem 4.2, progressing along lines similar to the proof of Theorem 4.1 and making use of Lemmas 5.1 and 6.2. First, we restate several definitions from Definition 5.2, specialized to the Ak system:
**Definition 6.2**.: _Recall the definitions of \(c_{0}(y,q)\) and \(c_{1}(y)\) from Definition 5.2:_
\[c_{0}(y,q) =G^{\mathrm{Ak}}\circ f^{\mathrm{Ak}}_{\Delta}(q,y)-(\lambda- \lambda^{*})q\] \[=-\lambda\Delta(y)+\frac{1}{2}\lambda+\sum_{y^{\prime},a}\mu_{y,y ^{\prime},a}\left(\frac{1}{2}(-a+u-\Delta(y^{\prime}))^{2}-\frac{1}{2}\Delta(y) ^{2}\right),\] \[c_{1}(y) =-\lambda\Delta(y)+\frac{1}{2}\lambda+\sum_{y^{\prime},a}\mu_{y,y ^{\prime},a}\left(\frac{1}{2}(-a-\Delta(y^{\prime}))^{2}-\frac{1}{2}\Delta(y) ^{2}\right),\]
_where \(u=\mathbb{1}\{q=0\wedge a=1\}\)._
We also make use of a key fact about \(c_{1}(y)\), from (9):
\[\mathbb{E}[c_{1}(Y^{\mathrm{Ak}})]=\lambda^{*}+\lambda^{*}\Delta(Y^{\mathrm{ Sat}}_{d})+O_{\lambda}\left(1-\frac{\lambda}{\lambda^{*}}\right).\]
Throughout this section, whenever we make use of results from Section 5, we set \(\pi=\mathrm{Sat}\). In particular, we make use of \(c_{0}(y,q)\) and \(c_{1}(y)\), from Definition 5.2.
**Theorem 4.2**.: _In the multiserver-job system, the expected response time in steady state satisfies_
\[\mathbb{E}[T^{\mathrm{MSJ}}]=\frac{1}{\lambda^{*}}\frac{1+\Delta(Y^{\mathrm{ Sat}}_{d},Y^{\mathrm{Sat}})}{1-\lambda/\lambda^{*}}+O_{\lambda}(1).\]
Proof.: We will show that the MSJ model has the same asymptotic mean response time as the Ak system. We will make use of the test function \(f^{\mathrm{MSJ}}_{\Delta}(q,y)\), from Definition 6.1. Recall from Lemma 6.1 that
\[G^{\mathrm{MSJ}}\circ f^{\mathrm{MSJ}}_{\Delta}(q,y)=G^{\mathrm{ Ak}}\circ f^{\mathrm{Ak}}_{\Delta}(q,y)\mathbb{1}_{q>0}+\mathbb{1}_{q=0}O_{ \lambda}(1).\]
We will next use (2), the fact that the expected value of a generator function in steady state is zero, which implies that
\[0=\mathbb{E}[G^{\mathrm{Ak}}\circ f^{\mathrm{Ak}}_{\Delta}(Q^{ \mathrm{MSJ}},Y^{\mathrm{MSJ}})\mathbb{1}\{Q^{\mathrm{MSJ}}>0\}]+\mathbb{P}(Q ^{\mathrm{MSJ}}=0)O_{\lambda}(1). \tag{13}\]
By Lemma 6.2, \(\mathbb{P}(Q^{\mathrm{MSJ}}=0)=O_{\lambda}(1-\frac{\lambda}{\lambda^{*}})\). Next, we apply Lemma 5.1 to the Ak system, finding that
\[G^{\mathrm{Ak}}\circ f^{\mathrm{Ak}}_{\Delta}(q,y)=(\lambda- \lambda^{*})q+c_{0}(y,q).\]
From Definition 6.2, we can see that \(c_{0}(y,q)\mathbb{1}_{q>0}=c_{1}(y)\mathbb{1}_{q>0}\). Combining with (13) and invoking Lemmas 5.1 and 6.2 and the fact that \(c_{1}(y)\) is bounded, we have
\[(\lambda-\lambda^{*})\mathbb{E}[Q^{\mathrm{MSJ}}]+\mathbb{E}[c_{0 }(Y^{\mathrm{MSJ}},Q^{\mathrm{MSJ}})\mathbb{1}\{Q^{\mathrm{MSJ}}>0\}]=O_{ \lambda}(1-\lambda/\lambda^{*}),\] \[(\lambda-\lambda^{*})\mathbb{E}[Q^{\mathrm{MSJ}}]+\mathbb{E}[c_{1 }(Y^{\mathrm{MSJ}})]=O_{\lambda}(1-\lambda/\lambda^{*}),\] \[\mathbb{E}[Q^{\mathrm{MSJ}}]=\frac{\mathbb{E}[c_{1}(Y^{\mathrm{ MSJ}})]}{\lambda^{*}-\lambda}+O_{\lambda}(1). \tag{14}\]
Next, specializing (9) in the proof of Theorem 4.1 to the Ak system, we know that
\[\mathbb{E}[c_{1}(Y^{\mathrm{Ak}})]=\lambda^{*}+\lambda^{*}\Delta(Y^{\mathrm{ Sat}}_{d})+O_{\lambda}\left(1-\frac{\lambda}{\lambda^{*}}\right).\]
By Lemma 6.2, we know that \(\mathbb{P}(Y^{\mathrm{Ak}}\neq Y^{\mathrm{MSJ}})=O_{\lambda}(1-\frac{\lambda }{\lambda^{*}})\). Again because \(c_{1}(y)\) is bounded,
\[\mathbb{E}[c_{1}(Y^{\mathrm{MSJ}})]=\mathbb{E}[c_{1}(Y^{\mathrm{ Ak}})]+O_{\lambda}\left(1-\frac{\lambda}{\lambda^{*}}\right)=\lambda^{*}+\lambda^{*} \Delta(Y^{\mathrm{Sat}}_{d})+O_{\lambda}\left(1-\frac{\lambda}{\lambda^{*}} \right).\]
Therefore, applying (14), we find that
\[\mathbb{E}[Q^{\rm MSJ}]=\frac{1+\Delta(Y_{d}^{\rm Sat})}{1-\lambda/\lambda^{*}}+O _{\lambda}(1).\]
Now, we apply Little's Law, which states that \(\mathbb{E}[T^{\rm MSJ}]=\frac{1}{\lambda}\mathbb{E}[N^{\rm MSJ}]\). Note that \(Q^{\rm MSJ}\) and \(N^{\rm MSJ}\) differ by the number of jobs in the front, which is \(O_{\lambda}(1)\):
\[\mathbb{E}[T^{\rm MSJ}]=\frac{1}{\lambda}\frac{1+\Delta(Y_{d}^{\rm Sat})}{1- \lambda/\lambda^{*}}+O_{\lambda}\left(\frac{1}{\lambda}\right)=\frac{1}{ \lambda^{*}}\frac{1+\Delta(Y_{d}^{\rm Sat})}{1-\lambda/\lambda^{*}}+O_{\lambda }\left(\frac{1}{\lambda}\right).\]
For the second equality, note that for any \(x\), \(\frac{1}{\lambda}\frac{x}{1-\lambda/\lambda^{*}}=\frac{1}{\lambda^{*}}\frac{x }{1-\lambda/\lambda^{*}}+\frac{x}{\lambda}\). Here \(x\) is a constant, so the extra term is absorbed by the \(O_{\lambda}(1/\lambda)\).
By the same bounding argument as used for (10) in the \(\lambda\to 0\) limit,
\[\mathbb{E}[T^{\rm MSJ}]=\frac{1}{\lambda^{*}}\frac{1+\Delta(Y_{d}^{\rm Sat})}{ 1-\lambda/\lambda^{*}}+O_{\lambda}(1).\qed\]
## 7 Extensions of RESET: Finite skip models
While our main MSJ result, Theorem 4.2, was stated for the MSJ FCFS model, our techniques do not depend on the details of that model. Our RESET technique can handle a wide variety of models, which we call "finite skip" models:
**Definition 7.1**.: _A finite skip queueing model is one in which jobs are served in near-FCFS order. Only jobs among the \(n\) oldest jobs in arrival order are eligible for service, for some constant \(n\). Service is only dependent on the states of the \(n\) oldest jobs in arrival order, plus an optional environmental state from a finite-state Markov chain. Furthermore, jobs must have finite state spaces, and arrivals must be Poisson with i.i.d. initial job states._
Definition 7.1 generalizes the work-conserving finite-skip (WCFS) class [17]. The MARC and RESET techniques can characterize the asymptotic mean response time of _any_ finite skip model, via the procedure in Fig. 1. Additional finite skip MSJ models include nontrivial scheduling policies, including some backfilling policies; changing server need during service; multidimensional resource constraints; heterogeneous servers; turning off idle servers; and preemption overheads. For discussion of each of these variants, see Appendix H.
## 8 Empirical Validation
We have characterized the asymptotic mean response time behavior of the FCFS multiserver-job system. To illustrate and empirically validate our theoretical results, we simulate the mean response time of the MSJ model to compare it to our predictions. Recall (4) from Theorem 4.2, in which we proved mean response time can be characterized as a dominant term plus a \(O_{\lambda}(1)\) term:
\[\mathbb{E}[T^{\rm MSJ}]=\frac{1}{\lambda^{*}}\frac{1+\Delta(Y_{d}^{\rm Sat},Y ^{\rm Sat})}{1-\lambda/\lambda^{*}}+O_{\lambda}(1). \tag{15}\]
In this section, we simulate mean response time \(\mathbb{E}[T^{\rm MSJ}]\), and compare it against the dominant term of (15), which we compute explicitly.
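To make the comparison reproducible, here is a minimal sketch (ours, not the authors' code) pairing the dominant term of (15) with a small discrete-event simulator of the MSJ FCFS queue. The `job_types` interface, the exponential durations, and all parameter choices are our own assumptions; \(\lambda^{*}\) and \(\Delta(Y_{d}^{\rm Sat})\) must be supplied from the saturated-system analysis, which the sketch does not perform:

```python
import heapq
import random

def predicted_mean_T(lam, lam_star, delta):
    # Dominant term of (15); the O_lambda(1) correction is dropped.
    return (1.0 + delta) / (lam_star * (1.0 - lam / lam_star))

def simulate_msj_fcfs(lam, k, job_types, num_jobs=10**5, seed=1):
    # job_types: list of (prob, server_need, mean_duration); an assumed
    # interface.  Poisson(lam) arrivals, exponential durations, FCFS with
    # head-of-line blocking: only the oldest waiting job may enter service.
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), 0, "arr", None)]   # (time, tiebreak, kind, data)
    tie, free, fifo = 1, k, []
    done, total_resp = 0, 0.0
    while done < num_jobs:
        t, _, kind, data = heapq.heappop(events)
        if kind == "arr":
            u, acc = rng.random(), 0.0
            for p, need, mean in job_types:
                acc += p
                if u <= acc:
                    break
            fifo.append((t, need, mean))
            heapq.heappush(events, (t + rng.expovariate(lam), tie, "arr", None))
            tie += 1
        else:                       # departure: free the servers, record response time
            need, t_arr = data
            free += need
            total_resp += t - t_arr
            done += 1
        while fifo and fifo[0][1] <= free:   # serve strictly in arrival order
            t_arr, need, mean = fifo.pop(0)
            free -= need
            heapq.heappush(events, (t + rng.expovariate(1.0 / mean), tie, "dep", (need, t_arr)))
            tie += 1
    return total_resp / done
```

For instance, the \(k=3\) setting of Fig. 1(a) can be approximated with `job_types=[(1/3, 1, 1.0), (1/3, 2, 1.0), (1/3, 3, 1.0)]` (a hypothetical mix) and compared against `predicted_mean_T` as \(\lambda\to\lambda^{*}\).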
### Accuracy of formula
In Fig. 1(a), we show that our predictions are an excellent match for the empirical behavior of the MSJ system in two different settings. In the first, there are \(k=3\) servers and jobs have server needs of \(1\), \(2\), and \(3\). In the second, there are \(k=20\) servers, and jobs have server needs \(1\) and \(20\). We thereby cover a spectrum from few-server systems to many-server systems, demonstrating extremely high accuracy in both regimes. The \(O_{\lambda}(1)\) term in (15) is negligible in both of these examples.
In Fig. 1(b), we compare mean response time in two settings with the same size distribution and stability region, but which have very different \(\Delta\). We discuss these settings further in Section 8.2.
The first setting has \(k=4\), and \(42\%\) of jobs have server need \(1\), while \(58\%\) of jobs have server need \(4\). The second setting has \(k=10\), and \(10\%\) of jobs have server need \(1\), while \(90\%\) of jobs have server need \(10\). The settings' stability regions are near-identical, with thresholds \(\lambda_{4}^{*}\approx 0.5413,\lambda_{10}^{*}\approx 0.5411\), and their _size_ distributions, defined as duration times server need over \(k\), are both \(Exp(1)\). However, our predictions for mean response time are very different in the two settings: \(\Delta(Y_{d}^{\text{Sat}})_{4}\approx 0.3271,\Delta(Y_{d}^{\text{Sat}})_{10} \approx 1.850\). The \(k=10\) setting considered here, with its relatively large value of \(\Delta(Y_{d}^{\text{Sat}})\), is an especially difficult test-case. Nonetheless, our predictions are validated by the simulation results in Fig. 1(b).
In Fig. 3, we illustrate the relative error between our predicted mean response time and the simulated mean response time for the four settings depicted in Fig. 2. In all four settings, as the arrival rate \(\lambda\) approaches \(\lambda^{*}\), the threshold of the stability region, the relative error converges to \(0\).
Note that the convergence rate is slowest in the \(k=10\) setting, which also has the largest \(\Delta(Y_{d}^{\text{Sat}})\) value. We further explore the relationship between \(\Delta(Y_{d}^{\text{Sat}})\) values and convergence rates in Appendix I. We find that such a correlation exists in some settings, but it is not robust or reliable.
### Understanding the importance of \(\Delta\)
Our results show that the relative completions function \(\Delta\) is key to understanding the response time behavior of non-work-conserving systems such as the MSJ FCFS system. This is in contrast to work-conserving systems, in which response time is determined by the size distribution and load [17]. This contrast is illustrated by Fig. 1(b), in which we compare mean response time in two settings with the same size distribution and stability region, but which have very different \(\Delta\).
The differing mean response time behavior in these two settings is caused by the difference in _waste correlation_. In the \(k=10\) case, wasteful states persist for long periods of time: If a \(1\)-server job is the only job in service, it takes more time for it to complete than in the \(k=4\) system. Thus, in the \(k=4\) case, wasteful states are more short-lasting. This difference in waste correlation produces the differences in \(\Delta(Y_{d}^{\text{Sat}})\) and in mean response time.
Figure 2: Empirical and predicted mean response time \(\mathbb{E}[T]\) for two MSJ settings in each of figures (a) and (b). Simulated \(10^{8}\) arrivals at arrival rates ranging over \(\lambda/\lambda^{*}\in[0.5,0.99]\).
This example highlights a crucial feature of MSJ FCFS: The failure of work conservation injects idiosyncratic idleness patterns into the system. To characterize \(\mathbb{E}[T]\), we need to characterize these patterns, which the RESET and MARC techniques enable us to do for the first time.
## 9 Conclusion
We introduce the RESET and MARC techniques. The RESET technique allows us to reduce the problem of characterizing mean response time in the MSJ FCFS system, up to an additive constant, to the problem of characterizing the M/M/1 with Markovian service rate (MMSR), where the service process is controlled by the saturated system. The MARC technique gives the first explicit characterization of mean response time in the MMSR, up to an additive constant. Together, our techniques reduce \(\mathbb{E}[T^{\text{MSJ}}]\) to two properties of the saturated system: the departure-average steady state \(Y_{d}^{\text{Sat}}\), and the relative completions function \(\Delta(y_{1},y_{2})\). Our RESET and MARC techniques apply to any finite skip model, including many MSJ generalizations.
We also introduce the simplified saturated system, a yet-simpler variant of the saturated system with identical behavior. We empirically validate our theoretical result, showing that it closely tracks simulation at all arrival rates \(\lambda\).
An important direction for future work is to analytically characterize the relative completions \(\Delta(y_{1},y_{2})\) for specific MSJ FCFS settings, such as settings where \(Y_{d}^{\text{Sat}}\) is known to have a product-form distribution [16; 42].
## 10 Acknowledgements
Isaac Grosof and Mor Harchol-Balter were supported by the National Science Foundation under grant number CMMI-2307008. Yige Hong was supported by the National Science Foundation under grant number ECCS-2145713. We thank the shepherd and the anonymous reviewers for their helpful comments.
|
2308.04513 | A conjecture concerning *-algebras that unifies some matrix
decompositions | In this note, we propose a simple-looking but broad conjecture about
star-algebras over the field of real numbers. The conjecture enables many
matrix decompositions to be represented by star-algebras and star-ideals. This
paper is written for people with a background in representation theory and
module theory. The motivation for investigating this is the possibility of
expressing polymorphic algorithms in numerical and theoretical linear algebra.
This is similar to but different from algebraic (semiring based) approaches to
dynamic programming. We prove certain cases of the conjecture. | Ran Gutin | 2023-08-08T18:19:37Z | http://arxiv.org/abs/2308.04513v1 | # A conjecture concerning \(*\)-algebras that unifies some matrix decompositions
###### Abstract
In this note, we propose a simple-looking but broad conjecture about star-algebras over the field of real numbers. The conjecture enables many matrix decompositions to be represented by star-algebras and star-ideals. This paper is written for people with a background in representation theory and module theory. The motivation for investigating this is the possibility of expressing polymorphic algorithms in numerical and theoretical linear algebra. This is similar to but different from algebraic (semiring based) approaches to dynamic programming. We prove certain cases of the conjecture.
## 1 Introduction
In this note, we propose a conjecture about finite-dimensional \(*\)-algebras (also called involution algebras) [17] which we think has applications in numerical and theoretical linear algebra. Our conjecture is that a simple-looking but broad generalisation of the spectral theorem is true for all finite-dimensional \(*\)-algebras over the field \(\mathbb{R}\). We provide a proof for some cases.
We state the conjecture:
**Conjecture 1**: _Let \(H\) be a self-adjoint element (that is, satisfying \(H^{*}=H\)) of a finite-dimensional \(*\)-algebra \(\mathcal{A}\) over \(\mathbb{R}\). Let \(k\in\mathbb{N}\) be the maximum number for which the following decomposition exists:_
\[H=\sum_{i=1}^{k}P_{i}HP_{i},\]
_where for all \(i,j\in[k]\):_
* \(P_{i}\in\mathcal{A}\)_,_
* \(P_{i}^{*}=P_{i}\)_,_
* \(P_{i}P_{j}=\delta_{ij}P_{i}\)_,_
* \(P_{i}\neq 0\)_._
_Consider another such decomposition for the same \(H\):_
\[H=\sum_{i=1}^{k}Q_{i}HQ_{i}.\]
_Then there exists a permutation \(\sigma\in S_{k}\) and a \(U\in\mathcal{A}\) such that:_
* \(U^{*}=U^{-1}\)_,_
* \(UHU^{*}=H\)_,_
* _for all_ \(i\in[k]\)_:_ \(UQ_{i}U^{*}=P_{\sigma(i)}\)_._
By assuming the conjecture, we can pick \(*\)-algebras which in some sense _represent_ certain matrix decompositions (with prior investigation here [9]). By a matrix decomposition, we mean a way of writing matrices as a product of other matrices - a connection which is partly justified by lemma 1. This unification of some matrix decompositions emphasises the uniqueness
aspect, as opposed to the existence aspect (which is made trivial), of those decompositions. One of the motivations is that the use of \(*\)-algebras allows computer code written to compute _one_ decomposition to be directly used to compute different ones (related to but different from the somewhat well-known ideas in [19, 13]). Note that we will not develop these computing applications here because this is a theoretical paper.
We demonstrate the correspondence this creates using some examples below.
## 2 Consequences for different \(*\)-algebras
We're going to show how this conjecture easily re-derives some existing matrix decompositions. These re-derivations are justified by the observation (later proven in Lemma 1) that over certain \(*\)-algebras the conjecture is equivalent to the cancellation property of unitary similarity.
### Matrices over the complex numbers, with conjugate-transpose as their involution
Let \(H\) be an \(n\times n\) Hermitian matrix over \(\mathbb{C}\). We recall the spectral theorem from linear algebra:
\[H=VDV^{*},\]
and recall that we're interested in decompositions of the form
\[H=\sum_{i=1}^{k}P_{i}HP_{i},\]
for maximum \(k\). In fact, we may obtain this by taking each column of \(V\), which we will call \(v_{i}\), and letting \(P_{i}=v_{i}v_{i}^{*}\). We get \(k=n\) and that \(P_{i}HP_{i}=\lambda_{i}v_{i}v_{i}^{*}\). So we have:
\[H=\sum_{i=1}^{k}P_{i}HP_{i}=\sum_{i=1}^{n}\lambda_{i}v_{i}v_{i}^{*}.\]
This illustrates (and sketches the proof for) the conjecture for \(\mathbb{C}\), and shows that the conjecture generalises the spectral theorem.
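As a quick numerical sanity check (ours, not part of the paper), the construction above can be verified with NumPy for a random Hermitian matrix:

```python
import numpy as np

# Verify that the spectral projectors P_i = v_i v_i^* of a random Hermitian H
# are self-adjoint, pairwise-orthogonal idempotents with H = sum_i P_i H P_i.
rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (X + X.conj().T) / 2                    # a Hermitian matrix

_, V = np.linalg.eigh(H)                    # columns: orthonormal eigenvectors
P = [np.outer(V[:, i], V[:, i].conj()) for i in range(n)]

assert all(np.allclose(Pi, Pi.conj().T) for Pi in P)              # P_i^* = P_i
assert all(np.allclose(P[i] @ P[j], (i == j) * P[i])              # P_i P_j = delta_ij P_i
           for i in range(n) for j in range(n))
assert np.allclose(sum(Pi @ H @ Pi for Pi in P), H)               # H = sum_i P_i H P_i
```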
### Matrices over the double numbers, with conjugate-transpose as their involution
Let \({}^{2}\mathbb{R}\) denote the \(*\)-algebra of _double numbers_: The underlying algebra is \(\mathbb{R}\oplus\mathbb{R}\), and the involution is \((a,b)\mapsto(b,a)\).
Given any finite-dimensional (_without_ star) \(\mathbb{R}\)-algebra \(R\), we can make a \(*\)-algebra \(R\oplus R^{\mathrm{op}}\), with involution being \((x,y)\mapsto(y,x)\)[10]. Note that for some such algebras \(R\) (but not all), \(R\) admits an involution, and we therefore have \(R^{\mathrm{op}}\cong R\). In those cases, we have \(R\oplus R^{\mathrm{op}}\cong R\oplus R\cong R\otimes{}^{2}\mathbb{R}\), where the algebra \(R\) in the last expression is equipped with any involution.
Observe that the conjecture for \(M_{n}(R)\oplus M_{n}(R)^{\mathrm{op}}\) is equivalent to the invariant subspace decomposition for matrices in \(M_{n}(R)\). The existence of such a decomposition is a corollary of the Krull-Schmidt theorem [14] - albeit for \(R\in\{\mathbb{R},\mathbb{C},\mathbb{H}\}\) it is better described as the Jordan Normal Form or primary decomposition.
In particular, this means that the conjecture for \(M_{n}({}^{2}\mathbb{R})\) - that is, \(n\times n\) matrices over the double numbers - is equivalent to the Jordan Normal Form for \(\mathbb{R}\)-matrices.
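To make this concrete, here is a minimal sketch (ours) of the double-number picture, assuming the coordinate-wise description of \({}^{2}\mathbb{R}\) given above:

```python
import numpy as np

# Over M_n(2R), an element is a pair (A, B) of real matrices and the
# involution is (A, B)* = (B^T, A^T).  Self-adjointness forces B = A^T,
# so Hermitian matrices over the double numbers are exactly the pairs
# (A, A^T) for an arbitrary real A -- which is why the conjectured
# decomposition of such an H amounts to the Jordan / invariant-subspace
# decomposition of A.
def star(H):
    A, B = H
    return (B.T.copy(), A.T.copy())

A = np.array([[2.0, 1.0], [0.0, 2.0]])   # a single 2x2 Jordan block
H = (A, A.T)
assert all(np.array_equal(x, y) for x, y in zip(star(H), H))   # H* = H
```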
### General picture
In general, we obtain a correspondence between on the one hand:
* Matrix decompositions,
and
* finite-dimensional \(*\)-algebras over \(\mathbb{R}\) along with an ideal.
The corresponding decomposition is obtained by considering Hermitian matrices over the given \(*\)-algebra whose elements all belong to the specified ideal, and then (nearly all the time) considering block-diagonal canonical forms of those matrices under unitary similarity.
We make this explicit using a table, where we exhaust all indecomposable 2- and 3-dimensional \(*\)-algebras (which can be done with the aid of the Wedderburn-Malcev theorem) and consider the decomposition which the conjecture implies for each of their matrix algebras:
| Algebra | Involution | \(*\)-ideal | Corresponding decomposition | Tame? |
| --- | --- | --- | --- | --- |
| \(\mathbb{R}\) | id | \(\langle 1\rangle\) | Spectral theorem | Y |
| \(\mathbb{C}\) | \((-)^{*}\) | \(\langle 1\rangle\) | Spectral theorem for \(\mathbb{C}\)-Hermitian matrices | Y |
| \(\mathbb{C}\) | id | \(\langle 1\rangle\) | Complex-symmetric spectral theorem | Y |
| \(\mathbb{R}[X]/(X^{2})\) | id | \(\langle 1\rangle\) | Spectral decomposition of \(H+\varepsilon H^{\prime}+o(\varepsilon)\) where \(H,H^{\prime}\in M_{n}(\mathbb{R})\) and both are symmetric | Y [4, 8] |
| \(\mathbb{R}[X]/(X^{2})\) | \(a+bX\mapsto a-bX\) | \(\langle 1\rangle\) | Spectral decomposition of \(H+\varepsilon H^{\prime}+o(\varepsilon)\) where \(H,H^{\prime}\in M_{n}(\mathbb{R})\); \(H\) is symmetric; \(H^{\prime}\) is skew-symmetric | Y [4, 8] |
| \(\mathbb{R}[X]/(X^{2})\) | \(a+bX\mapsto a-bX\) | \(\langle X\rangle\) | Spectral theorem for skew-symmetric \(\mathbb{R}\)-matrices | Y |
| \(\mathbb{R}\oplus\mathbb{R}\) | \((a,b)\mapsto(b,a)\) | \(\langle 1\rangle\) | Jordan Normal Form | Y |
| \((\mathbb{R}\oplus\mathbb{R})+\mathbb{R}\delta\) | unique one where … | \(\langle 1\rangle\) | Canonical basis for a pair \((\omega,L)\) consisting of a symplectic form \(\omega\) and an operator \(L\) self-adjoint with respect to \(\omega\) | Y [5] |
| \((\mathbb{R}\oplus\mathbb{R})+\mathbb{R}\delta\) | unique one where … | \(\langle\delta\rangle\) | Sylvester's Law of Inertia | Y |
| \((\mathbb{R}\oplus\mathbb{R})+\mathbb{R}\delta\) | unique one where … | \(\langle 1\rangle\) | 2nd-order perturbation theory of the invariant subspace decomposition where the skew-symmetric part is infinitesimal | Y [4] |
| \(\mathbb{R}[X]/(X^{3})\) | \(X\mapsto-X\) | \(\langle X\rangle\) | Spectral decomposition of \(H+\varepsilon H^{\prime}+o(\varepsilon)\) where \(H,H^{\prime}\in M_{n}(\mathbb{R})\); \(H\) is skew-symmetric; \(H^{\prime}\) is symmetric | Y |
| \(\mathbb{R}[X,Y]/(X^{2},Y^{2},XY)\) | \(X\mapsto X,\ Y\mapsto-Y\) | \(\langle 1\rangle\) | Spectral decomposition of \(H+\varepsilon H^{\prime}+o(\varepsilon)\) where \(H,H^{\prime}\in M_{n}(\mathbb{R})\); \(H\) is symmetric; \(H^{\prime}\) is an _arbitrary_ matrix | Y |
| \(\mathbb{R}[X,Y]/(X^{2},Y^{2},XY)\) | \(X\mapsto X,\ Y\mapsto Y\) | \(\langle 1\rangle\) | 1st-order perturbation theory of the spectral theorem, with 2 independent perturbations in symmetric directions | N\({}^{4}\) |
| \(\mathbb{R}[X,Y]/(X^{2},Y^{2},XY)\) | \(X\mapsto-X,\ Y\mapsto-Y\) | \(\langle 1\rangle\) | 1st-order perturbation theory of the spectral theorem, with 2 independent perturbations in skew-symmetric directions | N\({}^{4}\) |
| \(\mathbb{R}[X,Y]/(X^{2},Y^{2},XY)\) | \(X\mapsto X,\ Y\mapsto-Y\) | \(\langle X,Y\rangle\) | Block-diagonal form for \(\mathbb{R}\)-matrices under orthogonal similarity | N [18, sec. 4] |
| \(\mathbb{R}[X,Y]/(X^{2},Y^{2},XY)\) | \(X\mapsto X,\ Y\mapsto Y\) | \(\langle X,Y\rangle\) | Block-diagonal form for pairs of symmetric \(\mathbb{R}\)-matrices under orthogonal similarity | N [18, sec. 4] |
| \(\mathbb{R}[X,Y]/(X^{2},Y^{2},XY)\) | \(X\mapsto-X,\ Y\mapsto-Y\) | \(\langle X,Y\rangle\) | Block-diagonal form for pairs of skew-symmetric \(\mathbb{R}\)-matrices under orthogonal similarity | N |

Footnote 4: Wild, even when restricted to an ideal.

The above lists all 2- and 3-dimensional cases. Note that we treat much of the perturbation theory of matrix decompositions as - in some sense - being matrix decompositions in their own right.
We also consider some notable 4-dimensional cases. There are too many cases to exhaustively list here, so we've listed only a few below. Pay attention to the corresponding decompositions. Note that these are not the only tame ones in 4 dimensions.
| Algebra | Involution | \(*\)-ideal | Corresponding decomposition | Tame? |
| --- | --- | --- | --- | --- |
| \((a,b)+\delta(a^{\prime},b^{\prime})\) | any which sends \((x,y)\mapsto(y,x)\) | \(\langle\delta\rangle\) | Singular Value Decomposition | Y |
| \((a+bi)+\delta(a^{\prime}+b^{\prime}i)\) | \(\delta\mapsto\delta,\ i\mapsto-i\) | \(\langle\delta\rangle\) | Autonne-Takagi decomposition [11] | Y |
| \((a+bi)+\delta(a^{\prime}+b^{\prime}i)\) | \(\delta\mapsto-\delta,\ i\mapsto-i\) | \(\langle\delta\rangle\) | Skew-symmetric Takagi decomposition | Y |
| \(M_{2}(\mathbb{R})\) | matrix adjugate | \(\langle 1\rangle\) | "Symplectic spectral theorem": analogue of the spectral theorem for \(2n\times 2n\) symplectic-self-adjoint matrices under symplectic similarity | Y |
our conjecture, but the proofs we found in the literature for those generalisations were wrong. We confirmed this with the authors. We will not cite examples for obvious reasons.
Trying to generalise the conjecture may be fraught: The conjecture over arbitrary fields is false, as follows from basic Witt theory and the failure of Sylvester's Law of Inertia (undermining a prediction of the generalised conjecture) over such fields. In spite of these obstacles, some steps to produce something like a theory of matrix decompositions for finite fields have been done [7]. There is prior work unifying the classical groups [10] instead of the decompositions (with an eye towards algebraic K-theory), but the structures considered (form rings, form modules) are quite different, and we are not sure how to apply those general tools here.
## 4 Proofs of certain cases
We can prove some cases of the conjecture.
**Definition 1**: _Self-adjoint matrices over a \(*\)-algebra \({\cal A}\) satisfy **the cancellation property under unitary similarity** if whenever \(A\) is unitarily similar to \(A^{\prime}\) and \(A\oplus B\) is unitarily similar to \(A^{\prime}\oplus B^{\prime}\), then \(B\) is unitarily similar to \(B^{\prime}\). Note that the matrices \(A,A^{\prime},B,B^{\prime}\) are understood to be self-adjoint._
**Lemma 1**: _If, for a given local \(*\)-algebra \({\cal A}\), the self-adjoint matrices over \({\cal A}\) satisfy the cancellation property under unitary similarity, then for all \(n\in{\mathbb{N}}\) the conjecture holds for \(M_{n}({\cal A})\)._
**Proof** Consider \(H=\sum_{i=1}^{k}P_{i}HP_{i}=\sum_{i=1}^{k}Q_{i}HQ_{i}\) for largest \(k\).
Consider the submodules \({\rm im}(P_{i})\) for each \(i\). Clearly, \({\cal A}^{n}\cong\bigoplus_{i}{\rm im}(P_{i})\). By Kaplansky's theorem, we have that each \({\rm im}(P_{i})\) is a free submodule. Now take any basis for each \({\rm im}(P_{i})\), put the column vectors together, and then use the polar decomposition [15] trick to arrive at the multiplicative decomposition \(H=[U_{1}\mid U_{2}\mid\ldots\mid U_{k}](E_{1}\oplus E_{2}\oplus\ldots\oplus E _{k})[U_{1}\mid U_{2}\mid\ldots\mid U_{k}]^{*}\).
The same can be done for the \(Q_{i}\)s to arrive at \(H=[V_{1}\mid V_{2}\mid\ldots\mid V_{k}](F_{1}\oplus\ldots\oplus F_{k})[V_{1} \mid V_{2}\mid\ldots\mid V_{k}]^{*}\).
Imagine for the sake of contradiction that \(E_{1}\) is not unitarily similar to any \(F_{i}\). This then produces the decomposition \({\cal A}^{n}\cong\bigoplus_{i}({\rm im}(Q_{i})\cap{\rm im}(P_{1}))\oplus \bigoplus_{i}({\rm im}(Q_{i})\cap{\rm im}(P_{1})^{\perp})\), which has more than \(k\) non-zero factors. This contradicts the maximality of \(k\). So there must be some \(F_{i}\) unitarily similar to \(E_{1}\). Assume without loss of generality that this is \(F_{1}\). Use the cancellation property to cancel \(E_{1}\) and \(F_{1}\) and arrive at the fact that \(\bigoplus_{i\geq 2}E_{i}\) is unitarily similar to \(\bigoplus_{i\geq 2}F_{i}\). In a similar way, we conclude that each \(E_{j}\) is unitarily similar to some \(F_{i}\). The conclusion follows. \(\Box\)
**Proposition 1**: _The above conjecture is true when the underlying algebra of \({\cal A}\) is the ring of \(n\times n\) matrices \({\cal M}_{n}({\cal D})\) over any division \(*\)-algebra \({\cal D}\)._
**Proof** Either \({\cal D}\) is:
* The real numbers with the trivial involution.
* The complex numbers with either the involution \({\rm id}_{\mathbb{C}}\) or \((-)^{*}\).
* The quaternions with either the involution \(t+x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\mapsto t-x\mathbf{i}-y\mathbf{j}-z\mathbf{k}\) or \(t+x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\mapsto t-x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\). Note that while it is true that the quaternions have infinitely many involutions, they admit only two up to isomorphism of \(*\)-algebras. [16]
We verify the conjecture for each case in turn.
Let \({\cal D}\) be the real numbers. The theorem is equivalent to the spectral theorem here.
Let \({\cal D}\) be the complex numbers with the involution \((-)^{*}\). The theorem is equivalent to the spectral theorem here.
Let \({\cal D}\) be the complex numbers with the involution \({\rm id}_{\mathbb{C}}\). Note that every square complex matrix is similar to a complex symmetric matrix. Thus, given a complex-symmetric matrix \(S\), take its Jordan Normal Form, and replace each Jordan block with the complex symmetric matrix which it's similar to - giving \(S\sim J_{1}\oplus\ldots\oplus J_{k}\) where each \(J_{i}\) is complex-symmetric. We have that \(S\) is similar to another complex-symmetric matrix. But then \(S\) is furthermore orthogonally similar to \(J_{1}\oplus\ldots\oplus J_{k}\), by the polar decomposition trick [15].
Let \({\cal D}\) be the quaternions with the standard involution \(t+x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\mapsto t-x \mathbf{i}-y\mathbf{j}-z\mathbf{k}\). The theorem is equivalent to the spectral theorem here.
Let \({\cal D}\) be the quaternions with the non-standard involution \(t+x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\mapsto t-x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\). The proof here is the same as in the case of \(\mathbb{C}\) equipped with the involution \(\mbox{id}_{\mathbb{C}}\). We only need to observe that:
* an analogue of the Jordan Normal Form exists, [16]
* every square quaternion matrix is similar to a complex-symmetric matrix,
* the polar decomposition generalises to this setting. This is presently proved in a MathOverflow post [20].
\(\Box\)
**Lemma 2**: _Non-singular matrices over the quaternions equipped with the non-standard involution \(t+x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\mapsto t-x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\) admit polar decompositions._
**Proof** This would be a corollary of the statement that every non-singular matrix \(M\) that is Hermitian with respect to the above involution admits a polynomial \(p\) with real coefficients such that \(p(M)^{2}=M\).
Let \(M\) be a matrix Hermitian with respect to this involution. Consider a representation of \(M\) as an \(\mathbb{R}\)-matrix \(\chi(M)\). We can use the standard technique for generalising analytic functions to matrices by way of Hermite interpolation. We would like though for the coefficients of the interpolating polynomial \(p\in\mathbb{C}[z]\) (for which \(p(M)^{2}=M\)) to be real numbers, otherwise we might encounter problems with non-commutativity. We can ensure this whenever \(\chi(M)\) has no negative real eigenvalues, by ensuring that for every congruence
\[p(z)\equiv\sqrt{z}\pmod{(z-\lambda)^{n}}\]
we have another congruence of the form
\[p(z)\equiv\overline{\sqrt{z}}\pmod{\left(z-\overline{\lambda}\right)^{n}}.\]
This system of congruences becomes contradictory when \(\chi(M)\) has a negative real eigenvalue.
In the event that \(M\) has a negative real eigenvalue, we may perturb \(M\) in such a way as to eliminate its real eigenvalues. We may construct a sequence of approximations \(M_{n}\) to \(M\), consider the sequence \(\sqrt{M_{n}}\), refine this to a convergent subsequence if need be (by way of Bolzano-Weierstrass) and then take the limit to obtain a square root of \(M\). \(\Box\)
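The interpolation step can be tried out numerically; below is a minimal sketch (ours, with all names our own) for the simplest case of a \(2\times 2\) real matrix whose eigenvalues form a non-real conjugate pair, so that the linear interpolant already has real coefficients:

```python
import numpy as np

# Choose the linear polynomial p with p(lam) = sqrt(lam) and
# p(conj(lam)) = conj(sqrt(lam)); the conjugate-pair congruences force
# the coefficients of p to be real, and p(M)^2 = M.
M = np.array([[0.0, -4.0], [1.0, 0.0]])        # eigenvalues +-2i
lam = np.linalg.eigvals(M)[0]
s = np.sqrt(lam)                                # principal square root
a = (s - np.conj(s)) / (lam - np.conj(lam))     # slope of the interpolant
b = s - a * lam                                 # intercept
a, b = a.real, b.real                           # imaginary parts vanish
P = a * M + b * np.eye(2)
assert np.allclose(P @ P, M)                    # a real square root of M
```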
A corollary of the Wedderburn-Malcev theorem is that every _local finite-dimensional \(\mathbb{R}\)-algebra_ admits a vector space decomposition \(A\oplus B\) where \(A\in\{\mathbb{R},\mathbb{C},\mathbb{H}\}\), and \(B\) consists only of nilpotent elements.
**Definition 2**: _Call an involution \((-)^{*}\) of a local finite-dimensional \(\mathbb{R}\)-algebra \({\cal A}\)**standard** if over the subalgebra \(A\) (which is a division algebra) of \({\cal A}=A\oplus B\), we have that \(z^{2}=-1\) implies \(z^{*}=-z\)._
Call any other involution _non-standard_. Notice that the only standard involution for \(\mathbb{C}\) is \(a+bi\mapsto a-bi\), while the only non-standard involution is \(a+bi\mapsto a+bi\). Both involutions for the algebra of dual numbers are standard.
**Lemma 3**: _Every unit element \(x\) over a local finite-dimensional \(*\)-algebra \({\cal A}\) over \(\mathbb{R}\) that carries a standard involution admits a square root and a polar decomposition._
**Proof** We only need to verify the existence of a square root. Going from the existence of a square root to a polar decomposition is a fairly standard argument.
Consider an \(\mathbb{R}\)-matrix \(M\) representing \(x^{*}x\). We perform Hermite interpolation to obtain a polynomial \(p\) with only real coefficients for which \(p(M)^{2}=M\). To ensure the coefficients of \(p\) are real, the interpolation problem should be set up so that the interpolation points are complex conjugates of each other. This is only possible to do if \(M\) has no negative eigenvalues. But \(M\) can't have a negative eigenvalue because \(x^{*}x\,\mbox{mod}\,J({\cal A})\) (where \(J({\cal A})\) is the Jacobson radical of \({\cal A}\)) is a positive definite element of \(M_{n}({\cal A}/J({\cal A}))\).
We see now that \(p(x^{*}x)^{2}=x^{*}x\). \(\Box\)
**Proposition 2**: _The above conjecture is true when the underlying algebra of \({\cal A}\) is \(n\times n\) matrices over a local ring \({\cal B}\), and the involution is standard._
**Proof** By the Krull-Schmidt theorem, there exists a decomposition:
\[H=\sum_{i=1}^{k}P_{i}HP_{i},\]
with
* \(P_{i}\in\mathcal{M}(\mathcal{A})\),
* \(P_{i}P_{j}=\delta_{ij}P_{i}\),
* \(P_{i}\neq 0\).
but not necessarily with \(P_{i}^{*}=P_{i}\). We seek to make this last identity true as well.
Choose some \(i\in\{1,2,\ldots,k\}\). Since \(\mathcal{A}\) is a local ring, we may choose a basis \(\{v_{1},v_{2},\ldots\}\) where each \(v_{j}\) belongs to the image of \(P_{i}\). We now show how to construct an improved basis \(\{v_{1}^{\prime},v_{2}^{\prime},\ldots\}\) of \(\operatorname{im}(P_{i})\) that is orthonormal.
We choose \(v_{1}^{\prime}\) to equal \(v_{1}\left(\sqrt{v_{1}^{*}v_{1}}\right)^{-1}\). This definition should make sense as long as \(v_{1}^{*}v_{1}\) is a unit, because we know that every unit has a square root. Assume for the sake of contradiction that it isn't a unit. Then quotienting by the Jacobson radical, and using the property of _standard_ involutions, gives that each component of \(v_{1}\) is a non-unit. But then the module spanned by \(v_{1}\) is a projective module which isn't free, which contradicts Kaplansky's theorem. So we conclude that \(v_{1}^{*}v_{1}\) is indeed a unit and the definition of \(v_{1}^{\prime}\) makes sense.
We now must choose a value of \(v_{2}^{\prime}\) to take the place of \(v_{2}\). To this end, consider the linear map \(f:\operatorname{im}(P_{i})\to\operatorname{im}(P_{i}),\ x\mapsto v_{1}^{\prime}{v_{1}^{\prime}}^{*}x\). Observe that \(f(f(x))=f(x)\) for all \(x\). From this, observe that for every vector \(x\), we have that \(x=f(x)+(x-f(x))\), where \(x-f(x)\) is in the kernel of \(f\). Assume that we have an \(x\in\operatorname{im}(f)\cap\ker(f)\). Then we have \(x=f(y)\) for some \(y\), and then \(x=f(y)=f(f(y))=f(x)=0\), so \(x=0\). Therefore we have the decomposition \(\operatorname{im}(P_{i})=\ker(f)\oplus\operatorname{im}(f)\). Since \(\mathcal{A}\) is local, \(\ker(f)\) is a free module, whose every element is orthogonal to \(v_{1}^{\prime}\). Continuing in the obvious way produces an orthonormal basis for \(\operatorname{im}(P_{i})\).
Putting these orthonormal bases for \(\operatorname{im}(P_{i})\) for each \(i\) together gives that \(H\) is similar to a self-adjoint matrix.
We now use the polar decomposition trick to make \(H\)_unitarily similar_ to a self-adjoint matrix. \(\Box\)
## 5 Suggestion for programme
We propose a programme, which we think might be useful in applications:
* Prove the conjecture above.
* Classify the complexity of the corresponding decompositions or canonical forms. It's common to use the three-way label _domestic, tame_ and _wild_. [3, 2]
* Investigate the use of polymorphism in programming languages to write the same numerical algorithm for multiple decompositions. This has some resemblance to the well-known possibility of using polymorphism in dynamic programming algorithms [19]. We might limit the numerical algorithms to all those decompositions of low enough complexity. Are many numerical algorithms simply the QR algorithm [6] in disguise, written in a polymorphic way?
|
2301.08622 | The Effect of the Peculiar Motions of the Lens, Source and the Observer
on the Gravitational Lensing Time Delay | An intervening galaxy acts as a gravitational lens and produces multiple
images of a single source such as a remote galaxy. Galaxies have peculiar
speeds in addition to the bulk motion arising due to the expansion of the
universe. There is a difference in light arrival times between lensed images.
We calculate more realistic time delays between lensed images when galaxy
peculiar motions, that is the motion of the Lens, the Source and the Observer
are taken into consideration neglecting the gravitomagnetic effects. | Gihan Weerasekara, Thulsi Wickramasinghe, Chandana Jayaratne | 2023-01-20T15:04:47Z | http://arxiv.org/abs/2301.08622v1 | The Effect of the Peculiar Motions of the Lens, Source and the Observer on the Gravitational Lensing Time Delay
###### Abstract
An intervening galaxy acts as a gravitational lens and produces multiple images of a single source such as a remote galaxy. Galaxies have peculiar speeds in addition to the bulk motion arising due to the expansion of the universe. There is a difference in light arrival times between lensed images. We calculate more realistic time delays between lensed images when galaxy peculiar motions, that is the motion of the Lens, the Source and the Observer are taken into consideration neglecting the gravitomagnetic effects.
keywords: gravitational lensing: strong - galaxies: peculiar
## 1 Introduction
A remote galaxy S at redshift \(z_{s}\) (shown in Figure 1) is lensed by an intervening galaxy L at redshift \(z_{d}\). A light ray from S bends by an angle \(\alpha\) before arriving at the observer O. The image I of S forms at an angle \(\theta\) while S itself lies at \(\beta\). The distances \(D_{d}\), \(D_{s}\) and \(D_{ds}\) shown are the angular diameter distances. Walsh (1979), Chen (1995)
From the theory of lensing, we can derive the angular positions \(\theta_{1}\) and \(\theta_{2}\) of the two lensed images formed due to a single _point_ lens. There is a delay \(\Delta\tau\) of light arrival times from these two images. This delay is arising due to both geometrical path difference and the fact that two light rays are traveling in two different potential wells on either side of the lens. The total time delay is given by, Schneider (1992), Bradt (2008)
\[\Delta\tau=\frac{D_{f}}{c}(1+z_{d})\left[\frac{1}{2}\left(\theta_{1}^{2}-\theta_{2}^{2}\right)+|\theta_{1}\theta_{2}|\,\ln\left|\frac{\theta_{1}}{\theta_{2}}\right|\right] \tag{1}\]
where,
\[D_{f}=\frac{D_{d}D_{s}}{D_{ds}} \tag{2}\]
We calculate analytically a more realistic time delay between the two images when the peculiar speeds of the lens, the source and the observer are considered. These peculiar speeds are random speeds with respect to the cosmic microwave background radiation, i.e. the Hubble flow.
Since a point-mass lens is a highly idealized and less practical model for a real lensing system, in the next part of the paper we consider the more practical Singular Isothermal Sphere (SIS) lensing model to calculate the time delay difference when the peculiar speeds of the objects are considered.
Figure 1: Gravitational Lensing Diagram. The peculiar speed \(v\) of the lens L is measured with respect to a freely falling observer with the Hubble flow at the location of the lens. The angle \(\epsilon\) is measured from the optic axis OL.
## 2 Theory
The angular diameter distance D of a source _having no peculiar motion_ at a red shift \(z\) is given by, Weinberg (1972), Hobson (2006)
\[D(z,\Omega_{\Lambda,0})=\frac{c}{H_{0}}\frac{1}{1+z}\int\limits_{\frac{1}{1+z}}^ {1}\frac{dx}{\sqrt{x^{4}\,\Omega_{\Lambda,0}+x\,\Omega_{m,0}+\Omega_{r,0}}} \tag{3}\]
where \(\Omega_{i,0}\) is the density parameter of the substance \(i\) of the cosmic fluid measured at the present time \(t_{0}\). We assume a flat universe (\(k=0\)) for which Perlmutter (1999),
\[\Omega_{m,0}+\Omega_{r,0}+\Omega_{\Lambda,0}=1 \tag{4}\]
The red shift \(z_{ds}\) of S as measured by L is given by,
\[1+z_{s}=(1+z_{d})(1+z_{ds}) \tag{5}\]
Thus, from the equations (3), (4) and (5), neglecting \(\Omega_{r,0}\) and eliminating \(\Omega_{m,0}\) and expressing everything with the dark energy, we can derive the value of \(D_{ds}\), the angular diameter distance of the source as measured by an observer on the lens as,
\[\begin{split} D_{ds}(z_{d},z_{s},\Omega_{\Lambda,0})=\\ \frac{c}{H_{0}}\frac{1}{\sqrt{\Omega_{\Lambda,0}}}\frac{1+z_{d}} {1+z_{s}}\int\limits_{\frac{1+z_{d}}{1+z_{s}}}^{1}\frac{dx}{\sqrt{x^{4}+x\left( \frac{1}{\Omega_{\Lambda,0}}-1\right)\left(1+z_{d}\right)^{3}}}\end{split} \tag{6}\]
By evaluating the integral analytically, the value of \(D_{ds}\) can be written as
\[D_{ds}(z_{d},z_{s},\Omega_{\Lambda,0})=\frac{c}{H_{0}}\frac{1}{1+z_{s}}\left[ \Psi\left(z_{s},\Omega_{\Lambda,0}\right)-\Psi\left(z_{d},\Omega_{\Lambda,0} \right)\right] \tag{7}\]
where in terms of hypergeometric function \({}_{2}F_{1}\)
\[\Psi\left(z,\Omega_{\Lambda,0}\right)=\frac{1+z}{\sqrt{\Omega_{\Lambda,0}}} \,_{2}F_{1}\left(\frac{1}{3},\frac{1}{2};\frac{4}{3};\left(1-\frac{1}{\Omega _{\Lambda,0}}\right)(1+z)^{3}\right) \tag{8}\]
In the theory of lensing, the source S, lens L, and the observer O in Fig. 1 are all freely falling with the smooth expansion of the universe; that is, _experiencing no peculiar motions_. The angular diameter distances \(D_{s}\), \(D_{d}\) and \(D_{ds}\) are then measured between these objects which are freely falling with the Hubble flow. Thus, the redshifts entering Eq (8) should be associated with the freely falling objects.
However, all galaxies are subjected to peculiar or random motions, for an example in the scenario given here the Source S, the Lens L and the Observer O are having peculiar motions. Thus, the redshift of the lens we measure includes this peculiar motion. Therefore, the redshifts entering Eq (7), which should be the redshifts of freely falling objects, must be corrected for random peculiar motions. For this, consider initially the random motion of L neglecting the random motions of S and O. This is similar to OS axis being fixed and L having a peculiar motion with respect to this axis. An observer freely falling with the Hubble flow at the location of L will see a Doppler shift of L arising due to the random (peculiar) speed \(\nu\). In addition to this shift, we have the cosmological redshift of that freely falling observer arising due to the bulk expanding motion of the universe. Thus, the redshift z of the freely falling observer, from special theory of relativity, becomes (see Figure. 1)
\[1+z=\frac{\sqrt{1-\beta^{2}}}{1-\beta\cos\epsilon}\left(1+z^{observed}\right) \tag{9}\]
where \(\nu=\beta c\) is the peculiar speed of the object as seen by the freely falling observer and \(\epsilon\) is the angle between the peculiar velocity vector and the line-of-sight to L (see Fig. 1). It is this redshift \(z\) (Eq. 9) that should enter in (7) for the angular diameter distance calculation. If \(\epsilon=0\), L is approaching a freely falling observer and if \(\epsilon=\pi\) it is receding. Inserting (9) in (8) and expanding to first order in \(\beta\) we get,
\[\begin{split}&\Psi\left(z,\Omega_{\Lambda,0}\right)\sim\frac{1+z^{ observed}}{\sqrt{\Omega_{\Lambda,0}}}\times\\ &\,_{2}F_{1}\left[1+\left\{1+\frac{3}{8}\left(1-\frac{1}{\Omega _{\Lambda,0}}\right)\left(1+z^{observed}\right)^{3}\right\}\beta\cos\epsilon \right]\end{split} \tag{10}\]
where the hypergeometric function is the one appearing in (8) with \(z=z^{observed}\). Now that we have an expression to account for the peculiar motion of L, we can employ the same in our code to calculate the time delay taking all the peculiar motions into consideration, that is, including the peculiar motions of S, L and O. While doing so, we find that the other higher order terms are very small and the time delay is _linear_ to first order in \(\beta\). Then the form of the observed time delay becomes,
\[\Delta\tau\approx\Delta\tau_{0}\left(1+\kappa\;\beta\cos\epsilon\right) \tag{11}\]
where \(\Delta\tau_{0}\) is when the peculiar motions are neglected.
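Eq. (9) itself is easy to exercise numerically; the following sketch (ours) shows the sign of the correction for an approaching, a receding, and a transversely moving lens:

```python
import numpy as np

# Hubble-flow redshift recovered from the observed one (Eq. 9), given the
# peculiar speed beta = v/c and the angle epsilon to the line of sight.
def hubble_flow_z(z_obs, beta, eps):
    return np.sqrt(1.0 - beta**2) / (1.0 - beta * np.cos(eps)) * (1.0 + z_obs) - 1.0

print(hubble_flow_z(0.42, 0.005, 0.0))       # eps = 0 (approaching): z > z_obs
print(hubble_flow_z(0.42, 0.005, np.pi))     # eps = pi (receding):  z < z_obs
print(hubble_flow_z(0.42, 0.005, np.pi / 2)) # transverse: only a second-order shift
```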
As we now have an equation for the gravitational time delay difference when the peculiar speeds are considered for a point mass lens model, let us now proceed to the Singular Isothermal Sphere lensing model and derive the time delay difference equation for that.
According to the theory of lensing the time delay difference for a SIS model is given by the equation, Schneider (1992)
\[c\Delta\tau=\left[4\pi\left(\frac{\sigma_{v}}{c}\right)^{2}\right]^{2}\frac{D _{d}D_{ds}}{D_{s}}\left(1+z_{d}\right)2y \tag{12}\]
further by making use of the following equations,
\[y=\frac{\eta}{\eta_{0}} \tag{13}\]
\[\xi_{0}=4\pi\left(\frac{\sigma_{v}}{c}\right)^{2}\frac{D_{d}D_{ds}}{D_{s}} \tag{14}\]
we can arrive at the following equation that gives us the required time delay.
\[\Delta\tau=\frac{4\pi}{c}\left(\frac{\sigma_{v}}{c}\right)^{2}D_{d}(1+z_{d})2\beta \tag{15}\]
we make a realistic assumption for \(\beta\) by making use of the point-mass lens model:
\[\beta=\theta_{1}+\theta_{2} \tag{16}\]
In this equation when we consider the peculiar speeds of the objects, we have to use \(z=z^{observed}\) in accordance with (9) similar to the calculation we have carried out with the point mass lens.
## 3 Results and Discussion
The example we have used is the lensing system illustrated in the Figure 2. Koopmans (1998) This lens is referred to as B1600+434 and it has the following characteristics.
\[\begin{array}{llll}\mbox{Optical time delay}&=51\pm 2\mbox{ days}&&\\ z_{s}&=1.59&\theta_{1}&=+1.14^{\prime\prime}\\ z_{d}&=0.42&\theta_{2}&=-0.25^{\prime\prime}\end{array}\]
Using the given set of angular distances and angles, and under the _non-realistic_ assumption that the lens is a point mass, we can calculate a theoretical lensing time delay of 73.92 days for the WMAP cosmological parameters. When we compare the theoretical and the observed time delays, it is clear that they do not match. We believe that the discrepancy arises from the point-mass assumption for the lens and from the fact that we have not taken peculiar speeds into account. However, we would like to illustrate the effect of the peculiar motions on the time delay assuming initially a point-mass lens here.
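The quoted figure can be checked with a short script (ours). The cosmological parameters below are assumed WMAP-like (\(H_{0}=70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}\), \(\Omega_{m,0}=0.27\), \(\Omega_{\Lambda,0}=0.73\), flat), since the precise values adopted in the paper are not restated here:

```python
import numpy as np
from scipy.integrate import quad

# Point-mass time delay of Eq. (1) for B1600+434, with assumed cosmology.
c = 2.998e8                      # m/s
Mpc = 3.086e22                   # m
H0 = 70e3 / Mpc                  # 1/s
Om, OL = 0.27, 0.73
arcsec = np.pi / (180 * 3600)

def D_C(z):                      # comoving distance in a flat universe
    E = lambda zp: np.sqrt(Om * (1 + zp) ** 3 + OL)
    return (c / H0) * quad(lambda zp: 1.0 / E(zp), 0, z)[0]

def D_ang(z1, z2):               # angular diameter distance between z1 < z2
    return (D_C(z2) - D_C(z1)) / (1 + z2)

zd, zs = 0.42, 1.59
t1, t2 = 1.14 * arcsec, -0.25 * arcsec
Df = D_ang(0, zd) * D_ang(0, zs) / D_ang(zd, zs)
dtau = (Df / c) * (1 + zd) * (0.5 * (t1**2 - t2**2)
                              + abs(t1 * t2) * np.log(abs(t1 / t2)))
print(dtau / 86400)   # ~74 days; close to the quoted 73.92, with the exact
                      # value depending on the adopted H0 and Omega_m
```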
We simulated 1000 scenarios with the above given particular set of lensing parameters (\(z_{s}=1.59\), \(z_{d}=0.42\), \(\theta_{1}=+1.14^{\prime\prime}\) and \(\theta_{2}=-0.25^{\prime\prime}\)). For each scenario the lens and the observer have random peculiar speeds in random directions with respect to the background radiation. In the simulations of Figures 3-5, the peculiar speeds are non-relativistic and they range from 0 to \(0.01c\).
The result obtained in Figure 5 is almost identical to the result obtained in Figure 3.
From these results it is clear that the gravitational lensing time delay is highly sensitive to the peculiar speeds of the lens. Another interesting result of the simulation is that the peculiar speeds of the observer and the source do not have a significant effect on the gravitational lensing time delay.
As we have figured out by now, the gravitational lensing time delay is mostly affected by the peculiar motions of the Lens. Thus we can neglect the peculiar motions of the Observer and the Source.
In the next simulation, given in Figure 6, we take a lensing system with only the lens moving, and we vary the speed and the direction of the lens separately. The lens in the simulation has speeds from 0 to \(0.005c\), and the direction ranges from 0 (the lens approaching the observer) to \(\pi\) (the lens receding from the observer). If \(\epsilon\) is \(\pi/2\) then the lens is moving in a transverse direction.
From Figure 6, we can identify that when the lens is moving towards the observer the gravitational lensing time delay increases, attaining larger values in direct proportion to the peculiar speed of the lens. That is, when the lens has a larger approaching peculiar speed, the gravitational lensing time delay is also larger.

In contrast, when the lens is receding from the observer the gravitational lensing time delay decreases. It can also be seen that as the receding peculiar speed becomes larger, the gravitational lensing time delay becomes smaller.

If the lens is moving in a transverse direction then there is no measurable effect on the gravitational lensing time delay, as the effect is of second order.
The lenses we have considered so far have small velocities. But if we consider lenses having relativistic speeds then the effect becomes more prominent; that is, the measurable gravitational lensing time delay becomes much larger. Results are illustrated in Figure 7, where the peculiar speeds of the lens are relativistic.
In the example we have taken, the lens B1600+434 has a measured optical time delay of 51 days and a theoretical time delay of 73.92 days, assuming a point-mass lens. From our results we can account for the difference in this time delay. That is, we can obtain this particular observed optical time delay difference if the lens has a relativistic peculiar speed in the range of \(0.05c\) to \(0.06c\) in a receding direction from us, provided that we model the lens as a point mass, which is _not exact_.
As we now have a clear idea on gravitational lensing time delays when the peculiar speeds of the objects are considered while using a point mass lensing model, let us now investigate the same effect when a more realistic Singular Isothermal Sphere lensing model is used for the calculations.
For this we also employ the same simulation with 1000 scenarios, where random peculiar speeds point in random directions. When using Eq. (15), the average velocity dispersion \(\sigma_{v}\) is taken as \(150\,\mathrm{km\,s^{-1}}\) Koopmans (1998). With this average velocity dispersion and the Singular Isothermal Sphere model we obtain a very interesting result for the no-peculiar-motion lensing time delay, which is 51.45 days. This value is almost identical to the observed lensing time delay of \(51\pm 2\) days.
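Again as a sketch (ours), the no-peculiar-motion SIS delay of Eq. (15) can be evaluated directly under the same assumed WMAP-like cosmology; with these parameters it lands near, though not exactly at, the quoted 51.45 days:

```python
import numpy as np
from scipy.integrate import quad

# SIS time delay of Eq. (15) for B1600+434, with assumed cosmology.
c, Mpc = 2.998e8, 3.086e22
H0, Om, OL = 70e3 / Mpc, 0.27, 0.73
arcsec = np.pi / (180 * 3600)
E = lambda z: np.sqrt(Om * (1 + z) ** 3 + OL)
zd = 0.42
Dd = (c / H0) * quad(lambda z: 1.0 / E(z), 0, zd)[0] / (1 + zd)  # lens distance
sigma_v = 150e3                                  # m/s, Koopmans (1998)
beta = (1.14 - 0.25) * arcsec                    # Eq. (16): beta = theta_1 + theta_2
dtau = (4 * np.pi / c) * (sigma_v / c) ** 2 * Dd * (1 + zd) * 2 * beta
print(dtau / 86400)                              # ~52 days with these assumptions
```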
Figure 5: Point-mass lens. The lens has peculiar speeds in the range 0 to \(0.01c\) in random directions. The source and the observer are stationary.

Figure 6: Point-mass lens. The lens has different peculiar speeds in different directions.

Figure 7: Point-mass lens. The lens has relativistic peculiar speeds.
The simulation for non-relativistic peculiar speeds is given in Figure 8, where the peculiar speeds range from 0 to \(0.01c\). It can further be noted that in this simulation the time delays range from 50.5 to 52.5 days, with a maximum delay difference of 1 day from the no-peculiar-motion instance. Therefore, even with non-relativistic peculiar speeds it is clear that we can have a measurable and significant time delay difference from the no-peculiar-motion instance when the peculiar speed of the lens is considered.

In the next simulation, given in Figure 9, we consider a relativistic peculiar speed distribution from 0 to \(0.05c\). It can be noted in this figure that when the lens has a relativistic peculiar speed distribution, the lensing time delays can range from 46 to 56 days, with a maximum delay difference of 5 days from the no-peculiar-motion instance. Therefore it is apparent from this simulation that a lens with a relativistic peculiar speed can produce a very significant gravitational lensing time difference from the no-peculiar-speed instance, while using the more realistic Singular Isothermal Sphere to model the lens.
## 4 Conclusions
From the above simulations we find that there is a significant, measurable time delay difference arising from the peculiar speeds of the lens, using both the non-realistic point-mass lens and the more realistic Singular Isothermal Sphere as the lensing model.
The important observation is that an approaching lens results in an increase of the time delay while a receding lens gives rise to a decrease in the delay.
We find that the time delay is not significantly affected by the source or observer peculiar motions.
We see from Figure 7. and Figure 9. that a relativistically moving lens in any direction can significantly affect the lensing time delays.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2301.07511 | A Survey on Han's Conjecture | In 1989, D. Happel pointed out for a possible connection between the global
dimension of a finite-dimensional algebra and its Hochschild cohomology: is it
true that the vanishing of Hochschild cohomology higher groups is sufficient to
deduce that the global dimension is finite? After the discovery of a
counterexample, Y. Han proposed, in 2006, to reformulate this question to
homology. In this survey, after introducing the concepts and results involved,
I present the efforts made until now towards the comprehension of Han's
conjecture; which includes: examples of algebras that have been proven to
satisfy it and extensions that preserve it. | Guilherme da Costa Cruz | 2023-01-16T22:32:52Z | http://arxiv.org/abs/2301.07511v2 | # A Survey on Han's Conjecture
###### Abstract
In 1989, D. Happel pointed out for a possible connection between the global dimension of a finite-dimensional algebra and its Hochschild cohomology: is it true that the vanishing of Hochschild cohomology higher groups is sufficient to deduce that the global dimension is finite? After the discovery of a counterexample, Y. Han proposed, in 2006, to reformulate this question to homology. In this survey, after introducing the concepts and results involved, I present the efforts made until now towards the comprehension of Han's conjecture; which includes: examples of algebras that have been proven to satisfy it and extensions that preserve it.
**Keywords:** Hochschild homology; global dimension; Han's conjecture; homology of associative algebras; finite-dimensional algebras.
**MSC2020:** 16E40, 16-02
## 1 Introduction
Cohomology of associative algebras was introduced by G. Hochschild in 1945 [10]; just after the same had been made for groups by S. Eilenberg and S. Mac Lane; and some years before cohomology of Lie algebras was brought in by C. Chevalley and S. Eilenberg. After some years, all of these theories were brought together with a unified approach in H. Cartan and S. Eilenberg's book "Homological Algebra", published in 1956 [11]. This could only be done with a good deal of abstraction - which was carried out in parallel to the development of Category Theory - and with the introduction of derived functors. In this manner, Hochschild's cohomology received a new definition through the functor 'Ext', and homology was defined dually using 'Tor'.
Another important notion introduced in the book was that of projective and global dimension for modules and rings. During the decade of the 1950s, a great deal of research was made in order to understand these concepts and how properties of rings could be understood through them. This led to some significant rewards: for instance, after works of M. Auslander, D. Buchsbaum and J.-P. Serre, many problems concerning regular rings - which play a fundamental role in Algebraic Geometry - could be solved. Also concerning homological dimensions, H. Bass published, in 1960, what is now probably the oldest unsolved problem in Homological Algebra: the finitistic dimension conjecture.
By the beginning of 1980's, P. Gabriel had already given a concrete framework for the study of finite-dimensional algebras: quivers (i.e. oriented graphs). He proved that every finite-dimensional algebra could be associated - without great loss to the study of its modules - to a quotient of some quiver algebra. Possibly pushed by these results, some interest has risen towards the computation of Hochschild (co)homology for these algebras. This was made clear in an influential paper by D. Happel [14], in which important previous examples of C. Cibils were also surveyed.
The focus of the present survey resides essentially in a observation made in Happel's article [14, 1.4]: if an algebra has finite global dimension, then it can be proved that its Hochschild cohomology vanishes for higher degrees; what about the converse? An answer to it was given only in 2005, when Buchsweitz et al. [1] published a counterexample. In the meantime, important research was made concerning also Hochschild homology: the vanishing of Hochschild homology was proved to characterize finitude of global dimension for commutative algebras; E. Skoldberg [21] gave computations for two important quotients of quiver algebras; and others also gave valuable contributions to the understanding of Cyclic Homology - which is intrinsically related to Hochschild's. Taking all this into consideration, and also after noting that the above counterexample is well behaved when considering its homology, Y. Han [14, 3.4] proposed to reformulate Happel's question to homology, i.e. he conjectured that an algebra has finite global dimension if, and only if, its Hochschild homology vanishes in higher degrees.
This is where the present survey begins.
Our main objective is to give a good account on the partial answers already given to Han's conjecture. While some of them can even be deduced from results prior to Han's statement, others were motivated
especially by it. This is presented in section 4. To do so, we firstly give a succinct presentation of the notions of global dimension and Hochschild (co)homology of algebras in the preliminaries section 2. Afterward, in section 3, we establish crucial results providing a proper motivation to the precise statement of Han's conjecture. These are done for arbitrary algebras over a perfect field, in a slight contrast with Han's paper, which is focused in quotient of path algebras. At the final section 5, we conclude the paper by making some comments on possible future steps for research. Throughout the paper, we also try to show some subtle aspects in which homology differs from cohomology - what makes Han's question indeed distinct from Happel's.
In this manner, I hope to provide a clear picture of this topic of research as it is today. This was not done having in mind the specialist solely, in such a way that the beginning graduate student should also feel encouraged to read it - and invited to the research on the subject. With this in mind, I did not refrain from including references when presenting either a concept that asks for a better introduction or an argument that requires basic results from rings, modules and homology. That said, an acquaintance with some concepts of the theory are desired, such as: simple and semisimple modules; projective and injective modules; complexes and exact sequences; categories and functors; path algebras.
**Notation and Terminology:** Throughout this paper, by an algebra we mean an unital associative algebra over a field. In order to aid the exposition, the reader may also assume that all algebras are noetherian. The word "two-sided" will be omitted when talking about two-sided noetherian or artinian algebras, or about two-sided ideals. In addition, the following notations will be used:
* \(k\) for an arbitrary field;
* \(k^{\text{alg}}\) for the algebraic closure of \(k\);
* \(A\) and \(B\) for \(k\)-algebras;
* \(J(A)\) for the Jacobson radical of \(A\);
* \(\otimes\) for the tensor product over \(k\), i.e. \(\otimes=\otimes_{k}\);
* \(A\)-Mod (resp. Mod-\(A\)) for the category of left (resp. right) \(A\)-modules
* \(A^{op}\) for the opposite algebra, i.e. \(A\) with multiplication in reverse order.
## 2 Homology of Associative Algebras
We begin by the defining the notion of global dimension. As we will see in the example below, one may see it intuitively as a measure on how far an algebra is from being semisimple. For a better understanding on the concept and how it can be used to derive properties of an algebra, I recommend [20, Sections 4.1-4.4].
**Definition 2.1**.: Given an \(A\)-module \(M\), its _projective dimension_\(\operatorname{pd}_{A}(M)\) is defined as the minimum \(n\in\mathbb{N}\) such that \(M\) has a projective resolution of lenght \(n\), i.e. an exact sequence
\[0\to P_{n}\to\ldots\to P_{0}\to M\to 0\]
where each \(P_{i}\) is a projective module. If such a finite resolution does not exist, we write \(\operatorname{pd}_{A}(M)=\infty\). The _global dimension_ of \(A\) is defined as
\[\operatorname{gldim}(A):=\sup\{\operatorname{pd}_{A}(M)\mid M\in A\text{-Mod}\}.\]
**Remark 2.2**.: For a more precise definition, it would be necessary to distinguish the left and right global dimensions of \(A\), given when we consider the supremum either over \(A\)-Mod or over Mod-\(A\). However, as shown by M. Auslander [15, Corollary 5], they both coincide when \(A\) is noetherian.
**Example 2.3**.:
1. An algebra \(A\) is semisimple if, and only if, every left (or right) \(A\)-module is projective (see [1, 2.8]), so \(A\) is semisimple precisely when \(\operatorname{gldim}(A)=0\).
2. An algebra \(A\) satisfying \(\operatorname{gldim}(A)\leqslant 1\) is called _hereditary_. Among the most important examples of these are the quiver algebras \(kQ\) (also known as path algebras). If the quiver \(Q\) does not have oriented cycles, then \(kQ\) is finite-dimensional and, in that case, it can be proved that the quotient algebra \(kQ/I\) has finite global dimension for any ideal \(I\) of \(kQ\), cf. [14, Corollary 6]. For an introduction to path algebras, I refer to [1, Chapter II].
3. Noetherian self-injective algebras (also known as quasi-Frobenius) have global dimension equal to zero or to infinity, see [20, Exercise 4.2.2]. This class of algebras contains every Frobenius algebra \(A\), i.e. finite-dimensional algebras satisfying \(A\cong\operatorname{Hom}_{k}(A,k)\) as \(A\)-modules, and every symmetric algebra, i.e. the ones satisfying \(A\cong\operatorname{Hom}_{k}(A,k)\) as \(A\)-bimodules. For a good account of these, I recommend [13, Chapter 6].
4. Given a finite-dimensional Lie algebra \(\mathfrak{g}\) over a field \(k\), the global dimension of its universal enveloping algebra \(U\mathfrak{g}\) satisfies \[\operatorname{gldim}(U\mathfrak{g})=\operatorname{pd}_{U\mathfrak{g}}(k)= \dim_{k}(\mathfrak{g}),\] cf. [20, Ex. 7.3.5, Application 7.7.4].
Now, we will give the definition of Hochschild (co)homology in terms of Ext and Tor functors. For that, the reader should be aware that an \(A\)-bimodule \(M\) may be considered, equivalently, as a left or right (\(A\otimes A^{op}\))-module by the following identities:
\[(a\otimes a^{\prime})\cdot m=ama^{\prime}=m\cdot(a^{\prime}\otimes a),\qquad a,a^{\prime}\in A,\ m\in M.\]
**Definition 2.4**.: [1, IX, §4] The _Hochschild homology groups_ (of degree \(n\)) of an algebra \(A\) with respect to an \(A\)-bimodule \(M\) are defined as
\[HH_{n}(A,M)=\operatorname{Tor}_{n}^{A\otimes A^{op}}(M,A),\ \ n\in\mathbb{N}\]
Its _Hochschild cohomology groups_ (of degree \(n\)) are given by
\[HH^{n}(A,M)=\operatorname{Ext}_{A\otimes A^{op}}^{n}(A,M),\ \ n\in\mathbb{N}\]
We shall use the notation \(HH_{n}(A)\) and \(HH^{n}(A)\) for the case \(M=A\).
One can note that each of the abelian groups \(HH_{n}(A,M)\) and \(HH^{n}(A,M)\) also has the structure of a \(k\)-vector space induced by \(A\) and \(M\).
A good introduction to this homological theory is given in [16]. For a more thorough study, I refer to [20, Chapter 9] and [11]. Here, we also try to give some intuition on how this (co)homology behaves. Firstly, we note that the degree-zero groups satisfy the following isomorphisms:
\[HH_{0}(A)\cong A/[A,A]\quad\text{and}\quad HH^{0}(A)=Z(A),\]
where \(Z(A)\) denotes the center of \(A\) and \([A,A]=\langle ab-ba\ |\ a,b\in A\rangle\) is the commutator subspace of \(A\). Therefore, zero degree (co)homology measures the commutativity of \(A\).
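For a small worked illustration of this principle (consistent with example 2.8 below), consider the matrix algebra \(M_{n}(k)\) with \(n\geqslant 2\). Its commutator subspace is the space of trace-zero matrices, so
\[HH_{0}(M_{n}(k))=M_{n}(k)/[M_{n}(k),M_{n}(k)]\cong k\ \text{(via the trace)},\qquad HH^{0}(M_{n}(k))=Z(M_{n}(k))=k\cdot I_{n}\cong k,\]
and both degree-zero groups collapse to \(k\), reflecting how far \(M_{n}(k)\) is from being commutative.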
**Example 2.5**.:
1. If \(A=k\), then for any \(k\)-bimodule \(M\) (i.e. a vector space) the (co)homology is trivial: \[HH^{n}(k,M)\cong HH_{n}(k,M)\cong\begin{cases}k,\ \ n=0\\ 0,\ \ n>0\end{cases}\]
2. (Truncated polynomial algebras, [16, 5.9]) Let \(A=k[x]/(p)\) for a polynomial \(p\); then the homology groups \(HH_{n}(A)\) are given by the homology of the complex \[\ldots\xrightarrow{p^{\prime}}A\xrightarrow{0}A\xrightarrow{p^{\prime}}A \xrightarrow{0}A\to 0,\] where \(p^{\prime}\cdot\) represents multiplication by the (formal) derivative \(p^{\prime}\). We also have \(HH_{n}(A)\cong HH^{n}(A)\), since \(A\) is symmetric [13, 3.15A, 16.55] (see item 5 of the proposition below).
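For a concrete computation with this complex, take \(p=x^{2}\) and assume \(\operatorname{char}(k)\neq 2\). Then \(p^{\prime}=2x\), and multiplication by \(2x\) on \(A=k[x]/(x^{2})\) has image \(xA\) and kernel \(xA\), so the complex yields
\[HH_{0}(A)\cong A,\qquad HH_{n}(A)\cong\begin{cases}A/2xA\cong k,&n\text{ odd}\\ \ker(2x\cdot)\cong k,&n\text{ even},\ n>0,\end{cases}\]
so the homology is non-zero in every degree. If instead \(\operatorname{char}(k)=2\), then \(p^{\prime}=0\), all differentials vanish and \(HH_{n}(A)\cong A\) for every \(n\).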
Now, we summarize some of the main properties of Hochschild (co)homology.
**Proposition 2.6**.: _Given algebras \(A\) and \(B\), we have for each \(n\in\mathbb{N}\) that:_
1. \(HH_{n}(A\times B)\cong HH_{n}(A)\oplus HH_{n}(B)\)__
2. _(Change of the ground field) Given a field extension_ \(\ell\supseteq k\)_, the_ \(\ell\)_-algebra_ \(A_{\ell}=A\otimes\ell\) _satisfies_ \(HH_{n}(A_{\ell})\cong HH_{n}(A)\otimes\ell\)_._
3. \(HH_{n}(A\otimes B)\cong\bigoplus_{i+j=n}HH_{i}(A)\otimes HH_{j}(B)\)__
4. _If_ \(A\) _and_ \(B\) _are Morita-equivalent (i.e._ \(A\)-Mod _is equivalent to_ \(B\)-Mod_), then_ \(HH_{n}(A)\cong HH_{n}(B)\)_._
5. _If_ \(A\) _is a finite-dimensional symmetric algebra_1_, then_ \(HH_{n}(A)\cong HH^{n}(A)\)_._ Footnote 1: Not to be confused with the symmetric algebra \(\operatorname{Sym}(V)\) given by a vector space \(V\), which is isomorphic to \(k[x_{1},\dots,x_{n}]\) if \(\dim_{k}(V)=n\). Even so, the proposition also happens to hold for these algebras, see [10, Exercise 9.1.3].
_Properties \(1\). to \(4\). are also valid for the cohomology groups \(HH^{n}(A)\), with the following additional hypothesis for property \(3\).: \(A\) or \(B\) needs to be finite-dimensional._
Proof.:
1. [10, Theorem 9.1.8]
2. This follows from the following isomorphisms: \[HH_{n}(A_{\ell},A\otimes\ell)\cong HH_{n}(A,A\otimes\ell)\cong HH_{n}(A,A) \otimes\ell,\] where [10, Theorem 9.1.7] was used for the first one, and the exactness of the functor \((-\otimes\ell)\) for the second, see [10, Ex. 2.4.2]. The same works for cohomology.
3. [10, Proposition 9.4.1] or [11, 4.2.5].
4. [10, Theorem 9.5.6] and [12, Theorem 2.11.1].
5. A symmetric algebra \(A\) is characterized by the property \(A\cong\operatorname{Hom}_{k}(A,k)\) as \(A\)-bimodules. Hence, \[HH^{n}(A)=\operatorname{Ext}_{A\otimes A^{\operatorname{op}}}^{n}(A,A)\cong \operatorname{Ext}_{A\otimes A^{\operatorname{op}}}^{n}(A,\operatorname{Hom}_{ k}(A,k)).\] Using [12, Proposition 2.8.5] and that \(k\) is \(k\)-injective, we deduce that \[HH^{n}(A)\cong\operatorname{Hom}_{k}(\operatorname{Tor}_{A\otimes A^{ \operatorname{op}}}^{n}(A,A),k)=\operatorname{Hom}_{k}(HH_{n}(A),k).\] So, the cohomology groups are the dual spaces of the homology ones. Therefore, they are isomorphic when \(A\) is finite-dimensional.
**Remark 2.7**.: It is worth mentioning that a generalization of item 4 was proved by D. Happel in the framework of finite-dimensional algebras [11, 4.2] - namely, that the cohomologies of \(A\) and \(B\) are isomorphic if \(B\) is "tiltable" to \(A\). This was shown in a more general setting (including any algebra over a field) by J. Rickard [13, Proposition 2.5], soon after he gave a deeper characterization of the tiltable property for arbitrary rings [13, Theorem 1.1]. In more detail, he proved that \(B\) is tiltable to \(A\) if, and only if, their derived categories are equivalent - and in that case we say that \(A\) and \(B\) are _derived equivalent_. Similarly, it can be proved that Hochschild homology is derived invariant, cf. [10, Theorem 2.2].
The following example shows how these properties may be valuable in order to calculate Hochschild (co)homology of an algebra.
**Example 2.8**.: Given a finite-dimensional semisimple algebra \(A\) over an algebraically closed field \(k\), we know by the Wedderburn-Artin theorem that
\[A\cong\bigoplus_{i=1}^{m}M_{n_{i}}(k)\]
for some \(n_{i},m\in\mathbb{N}\). So, using properties 1 and 4 and that \(M_{n}(k)\) is Morita-equivalent to \(k\), we may conclude that
\[HH_{n}(A)\cong HH^{n}(A)\cong\begin{cases}k^{m},n=0\\ 0,n>0\end{cases}.\]
Now, we give an important representative for the Morita-equivalence class of an algebra.
**Theorem 2.9**.: _Assume \(k\) is a perfect field (definition 2.13), then every finite-dimensional algebra is Morita-equivalent to an (admissible) quotient of a path algebra \(kQ/I\)._
Proof.: This follows from two facts:
* Every finite-dimensional algebra is Morita-equivalent to a basic algebra [1, 18.37].
* Every basic algebra is isomorphic to an admissible quotient of a path algebra.
Over an algebraically closed field, the second item is a well-known result of P. Gabriel, see [1, section II.3]. An outline for the proof over perfect fields can be found in [12, Corollary 4.1.11]. For a more detailed approach, see [1, Theorem 3.12], where the proofs are carried out by using the notion of species.
For this reason, when studying Hochschild (co)homology of finite-dimensional algebras, not much generality is lost if one considers just quotients of path algebras - and that is what many authors do (e.g. D. Happel and Y. Han).
Now, we mention two properties that are valid exclusively for homology.
**Proposition 2.10**.: _For each \(n\in\mathbb{N}\), we have that:_
1. \(HH_{n}(-)\colon\mathrm{Alg}_{k}\to\mathrm{Vect}_{k}\) _is a functor from the category of_ \(k\)_-algebras to the category of_ \(k\)_-vector spaces._
2. _Given algebras_ \(A\) _and_ \(B\) _and a_ \(A\)_-_\(B\)_-bimodule_ \(M\)_,_ \[HH_{n}(\begin{bmatrix}A&M\\ 0&B\end{bmatrix})\cong HH_{n}(A)\oplus HH_{n}(B).\]
Proof.: [10, 1.1.4] and [10, 1.2.15]
**Remark 2.11**.: The first property is not valid, for example, in zero degree cohomology: the center \(Z(-)\) is not a functor.
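As a quick application of item 2 of the proposition, take \(A=B=k\) and \(M=k\): the resulting algebra is the algebra \(T_{2}(k)\) of upper triangular \(2\times 2\) matrices, and
\[HH_{n}(T_{2}(k))\cong HH_{n}(k)\oplus HH_{n}(k)\cong\begin{cases}k^{2},&n=0\\ 0,&n>0.\end{cases}\]
This agrees with a direct check in degree zero: the commutator subspace of \(T_{2}(k)\) is spanned by \(e_{12}=[e_{11},e_{12}]\), so \(HH_{0}(T_{2}(k))\cong k^{2}\). It is also consistent with theorem 3.7 below, since \(T_{2}(k)\) is hereditary.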
Example 2.8 shows that the Hochschild cohomology of matrix algebras over \(k\) vanishes in all positive degrees; in fact, it vanishes for every bimodule of coefficients. In what follows, we provide some characterizations of the algebras satisfying this property.
**Theorem-definition 2.12**.: _We say that an algebra \(A\) is separable if it satisfies the following equivalent conditions:_
1. \(HH^{i}(A,M)=0\) _for every_ \(i>0\) _and every_ \(A\)_-bimodule_ \(M\)_._
2. \(A\otimes A^{op}\) _is semisimple._
3. \(A\) _is finite-dimensional and_ \(A\otimes\ell\) _is semisimple for every field extension_ \(\ell\supseteq k\)_._
4. \(A\) _is finite-dimensional and_ \(A\otimes k^{\mathrm{alg}}\) _is semisimple._
Comments on the proof.: The equivalence \(1\Leftrightarrow 3\) was already proved in G. Hochschild's 1945 paper [10, Theorem 4.1], showing how his cohomology can be a useful tool to understand properties of associative algebras. More modern proofs may be found in [10, IX: Theorems 7.9, 7.10] and in [20, Theorem 9.2.11].
As can be seen from characterization 3, every separable algebra is semisimple. So, one may ask when the converse holds. As we will show below, the answer is to consider perfect fields. This good behaviour is one of the main reasons why many of the results in the next section will be formulated over fields of this class, which is not a small one: it includes fields that are either finite, algebraically closed or of characteristic zero.
**Definition 2.13**.: A field \(k\) is said to be _perfect_ if every finite (or algebraic) extension of \(k\) is separable.
We recall that an algebraic extension \(\ell\supset k\) is _separable_ if, and only if, for every \(\alpha\in\ell\) the derivative of the minimal polynomial of \(\alpha\) over \(k\) is non-zero. This is consistent with the above notion of separable algebras: a finite extension \(\ell\supset k\) is separable if, and only if, \(\ell\) is a separable \(k\)-algebra [20, 9.2.8].
**Proposition 2.14**.: _A field \(k\) is perfect if, and only if, every finite-dimensional semisimple \(k\)-algebra is separable._
Proof.: A finite-dimensional algebra \(A\) is semisimple if, and only if, \(J(A)=0\). Furthermore, \(J(A\otimes\ell)=J(A)\otimes\ell\) for every separable algebraic extension \(\ell\supseteq k\), cf. [1, 5.17]. So, if \(k\) is a perfect field, we have that \(J(A\otimes k^{\mathrm{alg}})=0\) for every semisimple algebra \(A\). The converse follows immediately from the definition: if \(k\) is not perfect, then there exists a field \(\ell\) which is finite-dimensional over \(k\) and is not separable.
## 3 Statement of Han's conjecture
In this section, restricting ourselves to finite-dimensional algebras \(A\), we will show that, if \(\operatorname{gldim}(A)\) is finite, then the Hochschild homology of \(A\) is concentrated solely in degree zero. In this manner, we obtain a genuine motivation for the statement of Han's conjecture. Before that, we prove a more elementary result, which is also valid for cohomology.
In what follows, we will use the following standard notation:
**Definition 3.1**.: The _Hochschild homological_ (resp. _cohomological_) _dimension_ of an algebra \(A\) is defined as
\[\operatorname{hh.dim}(A) :=\sup\{n\in\mathbb{N}\mid HH_{n}(A)\neq 0\}\] \[\operatorname{hch.dim}(A) :=\sup\{n\in\mathbb{N}\mid HH^{n}(A)\neq 0\}.\]
If \(HH_{n}(A)=0\) (resp. \(HH^{n}(A)=0\)) for all \(n\), we set, by convention, \(\operatorname{hh.dim}(A)=0\) (resp. \(\operatorname{hch.dim}(A)=0\)).
In the following results, we will assume that \(A/J(A)\) is separable, which is always the case when \(k\) is a perfect field. Indeed, this follows from proposition 2.14 and the fact that \(A/J(A)\) is semisimple.
**Proposition 3.2**.: _If \(A\) is a finite-dimensional algebra such that \(A/J(A)\) is separable (e.g. \(k\) is a perfect field), then:_
1. \(\operatorname{\mathrm{gldim}}(A\otimes A^{op})=2\cdot\operatorname{\mathrm{ gldim}}(A)\)_._
2. \(\operatorname{\mathrm{gldim}}(A\otimes\ell)=\operatorname{\mathrm{gldim}}(A)\) _for every field extension_ \(\ell\supseteq k\)_._
Proof.: Using the notation \(\overline{A}=A/J(A)\), it follows from 2.12 that:
1. \(\overline{A}\otimes\overline{A^{op}}\) is semisimple;
2. \(\overline{A}\otimes\ell\) is semisimple for every field extension \(\ell\supseteq k\).
In this way, the proposition follows from a result of Auslander [1, Theorem 16].
From the definition of \(\operatorname{\mathrm{Ext}}\) and \(\operatorname{\mathrm{Tor}}\) functors, it is possible to conclude that
\[\operatorname{hh.dim}(A),\ \operatorname{hch.dim}(A)\leqslant \operatorname{pd}_{A\otimes A^{op}}(A)\leqslant\operatorname{gldim}(A\otimes A^{op}).\]
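Indeed, this is the standard argument: if \(P_{\bullet}\to A\) is a projective resolution of \(A\) as an \((A\otimes A^{op})\)-module of length \(d=\operatorname{pd}_{A\otimes A^{op}}(A)<\infty\), then the complexes computing these groups vanish beyond degree \(d\):
\[HH_{n}(A)=H_{n}(A\otimes_{A\otimes A^{op}}P_{\bullet})=0\quad\text{and}\quad HH^{n}(A)=H^{n}(\operatorname{Hom}_{A\otimes A^{op}}(P_{\bullet},A))=0\quad\text{for }n>d.\]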
In this manner, we obtain the following consequence from the first item2:
Footnote 2: We could also use the following result: \(\operatorname{\mathrm{pd}}_{A\otimes A^{op}}(A)=\operatorname{\mathrm{gldim}}(A)\), see [1, §4].
**Corollary 3.3**.: _Every finite-dimensional algebra \(A\) such that \(A/J(A)\) is separable (e.g. \(k\) is a perfect field) satisfies:_
\[\operatorname{\mathrm{gldim}}(A)<\infty\implies\operatorname{hh.dim}(A)< \infty,\ \operatorname{hch.dim}(A)<\infty.\]
**Remark 3.4**.: In the results above, the hypothesis on the field is really necessary: if \(k\) is not a perfect field, then it has a non-separable element \(\alpha\in k^{\mathrm{alg}}\setminus k\), so that its minimal polynomial \(m_{\alpha}\) has zero derivative. Therefore, \(k(\alpha)=k[x]/(m_{\alpha})\) is a finite-dimensional \(k\)-algebra with \(\operatorname{gldim}(k(\alpha))=0\) (since it is a field) whose Hochschild (co)homology is, by example 2.5, always non-zero:
\[HH^{n}(k(\alpha))\cong HH_{n}(k(\alpha))\cong k(\alpha)\]
for every \(n\geqslant 0\). For a concrete example, one can take \(k=\mathbb{F}_{p}(t)\) and \(\alpha=\sqrt[p]{t}\) for some prime \(p\), so that \(m_{\alpha}=x^{p}-t\).
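Spelling this out via item 2 of example 2.5: since \(m_{\alpha}^{\prime}=px^{p-1}=0\) in characteristic \(p\), every differential in the complex computing the homology of \(k(\alpha)=k[x]/(m_{\alpha})\) vanishes, so that
\[HH_{n}(k(\alpha))\cong\ker(0)/\operatorname{im}(0)\cong k(\alpha)\quad\text{for all }n\geqslant 0,\]
and dually for cohomology. In particular, \(\operatorname{hh.dim}(k(\alpha))=\operatorname{hch.dim}(k(\alpha))=\infty\), even though \(\operatorname{gldim}(k(\alpha))=0\).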
Now, we will see that we have a much stronger result for the homological dimension, which is essentially a consequence of the following result by B. Keller.
**Lemma 3.5**.: _[_1_, 2.5]_ _Suppose \(A\) is a finite-dimensional algebra such that \(\overline{A}=A/J(A)\) is a product of copies of \(k\) and \(\operatorname{\mathrm{Hom}}_{A}(S,S)\cong k\) for each simple \(A\)-module \(S\).3 If \(A\) has finite global dimension, then we have an isomorphism (induced by the surjection \(A\twoheadrightarrow\overline{A}\)) of the cyclic homology groups \(HC_{n}(\overline{A})\cong HC_{n}(A)\) for every \(n\geqslant 0\)._
Footnote 3: The second assumption can be proved to be superfluous, see [1, 4, 8, 7.7]
The reader not acquainted with Cyclic Homology should not be alarmed by its use in the formulation of the above. The cyclic homology groups have an intrinsic relation with Hochschild ones, given by the so-called Connes' long exact sequence:
\[\ldots\to HC_{n+1}(A)\to HC_{n-1}(A)\to HH_{n}(A)\to HC_{n}(A)\to HC_{n-2}(A)\to\ldots\]
For instance, since \(HC_{m}(A)=0\) for \(m<0\), the case \(n=0\) yields the isomorphism \(HC_{0}(A)\cong HH_{0}(A)\). Furthermore, as we note below, we could have replaced \(HC\) by \(HH\) in the statement of the lemma.
**Lemma 3.6**.: _Given an ideal \(I\) of an algebra \(A\), we have:_
\[HC_{n}(A/I)\cong HC_{n}(A)\text{ for all }n\geqslant 0\implies HH_{n}(A/I)\cong HH _{n}(A)\text{ for all }n\geqslant 0,\]
_if the isomorphisms in cyclic homology are induced from the natural surjection \(A\twoheadrightarrow A/I\)._
Proof.: This is exactly what has been shown in the proof of [10, Proposition 6].
From these results, we obtain the following synthesis:
**Theorem 3.7** (Keller-Han).: _Every finite-dimensional algebra \(A\) such that \(A/J(A)\) is separable (e.g. \(k\) is a perfect field) satisfies:_
\[\operatorname{\mathrm{gldim}}(A)<\infty\implies\operatorname{\mathrm{hh.dim}} (A)=0.\]
Proof.: We fix the notation \(A_{\ell}=A\otimes\ell\). By proposition 3.2 and item 2 of proposition 2.6, extension of scalars to \(\ell=k^{\mathrm{alg}}\) preserves both the finiteness of the global dimension and the vanishing of the Hochschild homology groups, so we may assume \(k\) to be algebraically closed. By theorem 2.9 and Morita invariance (item 4 of proposition 2.6, together with the fact that Morita equivalence preserves the global dimension), we may further assume that \(A\) is an admissible quotient of a path algebra, so that \(A/J(A)\) is a product of copies of \(k\). Keller's lemma 3.5 then provides isomorphisms \(HC_{n}(A/J(A))\cong HC_{n}(A)\) induced by the surjection \(A\twoheadrightarrow A/J(A)\), and lemma 3.6 upgrades them to \(HH_{n}(A/J(A))\cong HH_{n}(A)\) for all \(n\geqslant 0\). Since \(A/J(A)\) is semisimple, example 2.8 gives \(HH_{n}(A)=0\) for every \(n>0\), that is, \(\operatorname{hh.dim}(A)=0\).
## 4 (Partial) Answers to Han's Conjecture
### 4.1 Algebras satisfying Han's or Happel's property
We summarize in tables 1 and 2 below the classes of algebras which have been proven to satisfy, respectively, Han's and Happel's property. In what follows, some comments are made in order to help with two types of difficulties when reading them. The first is that many of the examples are not exactly well-known - some have been defined only in the reference paper - so we provide the definitions for some of these cases. Secondly, it may not be clear why Han's property follows from the theorems in the references, so some clarifications are given in this direction.
As we will note below, two of these classes (group algebras and trivial extensions) consist of symmetric algebras, so both properties are equivalent for them by proposition 2.6. With this in mind, even though they also satisfy Happel's property, we have recorded them only in Han's table.
**Group Algebras:** In this section, all groups are assumed to be finite. The fact that every group algebra
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Class of algebras** & **Assumption on the field** & **References** \\ \hline group algebras & - & [15, Thm I.1] \\ \hline quotients of acyclic quiver algebras & - & [16, Cor. 6], [17] \\ \hline commutative & - & [14] \\ \hline exterior algebras & - & [15, Theorem 2] \\ \hline monomial & - & [17, Theorem 3] \\ \hline quantum complete intersections & - & [18, Theorem 3.1] \\ \hline \(N\)-Koszul & \(\mathrm{char}(k)=0\) & [19, Theorem 4.5] \\ \hline homogeneous quotients of quiver algebras with loops & \(\mathrm{char}(k)=0\) & [19, Theorem 4.7] \\ \hline graded cellular & \(\mathrm{char}(k)=0\) & [19, Theorem 4.9] \\ \hline a generalization of quantum complete intersections & - & [14, Theorem I] \\ \hline local graded algebras with a certain relation & - & [14, Theorem II] \\ \hline quantum generalized Weyl algebras & \(\mathrm{char}(k)=0\) & [14, Theorem 3.9] \\ \hline trivial extensions of local algebras & algebraically closed & [19, Theorem 3.2] \\ \hline trivial extensions of self-injective algebras & algebraically closed & [19, Theorem 3.5] \\ \hline trivial extensions of graded algebras & algebraically closed, \(\mathrm{char}(k)=0\) & [19, Theorem 3.9] \\ \hline \end{tabular}
\end{table}
Table 1: Known examples of algebras satisfying Han’s property. With the exception of lines 10 and 12, all of them are assumed to be finite-dimensional. The list is organized in chronological order of the references.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Class of algebras** & **Reference** \\ \hline commutative & [19, Corollary] \\ \hline exterior algebras & [15, Theorem 3] \\ \hline truncated & [19, Theorem 3] \\ \hline some quantum complete intersections & [18, Theorem 3.3] \\ \hline quantum generalized Weyl algebras & [14, Theorems 1.1, 1.2, 3.3] \\ \hline \end{tabular}
\end{table}
Table 2: Examples of algebras satisfying Happel’s property. With the exception of the last class, all of them are assumed to be finite-dimensional over an arbitrary field. The list is organized in chronological order of the references.
satisfies Han's property was not explicitly found in the literature. However, the following proof, which was essentially communicated by Eduardo N. Marcos, can easily be deduced from somewhat well-known facts of Group (Co)Homology.
First of all, one should be aware that every group algebra is symmetric [16, 15], so that its Hochschild homology and cohomology are isomorphic. Another important aspect is that its global dimension has only two possible values: zero or infinity. By Maschke's theorem, we know that a group algebra \(kG\) has zero global dimension if, and only if, \(\mathrm{char}(k)\) does not divide the order of \(G\). In this manner, the assertion that \(kG\) satisfies Han's property is equivalent to the following:
**Theorem 4.1**.: _If \(\mathrm{char}(k)=p>0\) divides the order of a finite group \(G\), then \(\mathrm{hh.dim}(kG)=\infty\)._
Now, a result of Burghelea [15, Theorem I.1] shows that the homology of group algebras can be computed in terms of Group Homology. The same holds for cohomology, see [16, Theorem 2.11.2]. These results show, in particular, that the (co)homology groups of \(G\) with coefficients in \(k\), denoted by \(H_{n}(G,k)\) and \(H^{n}(G,k)\)4, are direct summands, respectively, of \(HH_{n}(kG)\) and \(HH^{n}(kG)\). Thus, the proof can be concluded by using a result of R. Swan [15]: it guarantees that, if \(\mathrm{char}(k)=p\) divides the order of \(G\), then \(H^{n}(G,k)\) is non-zero for infinitely many values of \(n>0\).
Footnote 4: In terms of Ext-Tor functors, they can be defined as \(H_{n}(G,k)=\mathrm{Tor}_{n}^{kG}(k,k)\) and \(H^{n}(G,k)=\mathrm{Ext}_{kG}^{n}(k,k)\)
One final comment about Swan's article should be made: although it focuses on cohomology with coefficients in \(\mathbb{Z}\), the author also remarks that his arguments remain valid for coefficients in \(\mathbb{F}_{p}\), and therefore for any field \(k\) of characteristic \(p\), since \(H^{n}(G,k)\cong H^{n}(G,\mathbb{F}_{p})\otimes_{\mathbb{F}_{p}}k\).
**Commutative algebras:** (In this topic, all algebras are assumed to be commutative.) As can be seen in the table, two references were provided for this case, because the result was proved independently by two groups of authors: Avramov & Vigué-Poirrier (1992) and the Buenos Aires Cyclic Homology Group (1994). One difference between their results is that the latter assumed the characteristic of the ground field to be zero, while the former did not. Another notable aspect is that these articles were published more than 10 years prior to the statement of Han's conjecture. So, we now provide a few comments on how precisely Han's property can be deduced from them.
Basically, the following theorem (which is not restricted to finite-dimensional algebras) was proved:
**Theorem 4.2**.: _A finitely generated commutative algebra \(A\) is smooth if, and only if, its Hochschild homological dimension is finite._
Now, we will outline why smoothness for finitely generated algebras implies finite global dimension - and even more for artinian algebras: the global dimension is then equal to zero. This, together with the theorem, proves Han's property for commutative finitely generated algebras - and, in particular, for finite-dimensional ones.
Smooth noetherian algebras are, in particular, regular, cf. [15, Cor. 9.3.13]. This implies that the global dimension coincides with the Krull dimension for these algebras, see [16, 5.94]. Now, one just needs to note that finitely generated algebras have finite Krull dimension - and artinian algebras have Krull dimension zero. Indeed, the Krull dimension of \(k[x_{1},\ldots,x_{n}]/I\) is no bigger than \(n\) for any ideal \(I\). Therefore, every smooth finitely generated algebra has finite global dimension, which is equal to zero when the algebra is finite-dimensional.
**Exterior algebras and quantum complete intersections:** These two examples share some properties: for instance, they are both Frobenius and local. Furthermore, the results in the references show that their Hochschild homological dimensions are both infinite. By corollary 3.3, this implies infinite global dimension (if the field is perfect). However, this conclusion can also be deduced (cf. example 2.3) from the more elementary fact that they are non-semisimple Frobenius algebras. Now, we define these algebras and provide some details in these directions.
Given a vector space \(V\) over \(k\) with basis \(\{e_{1},\ldots,e_{n}\}\), the \(i\)th graded component of its exterior algebra can be defined as the following quotient:
\[\Lambda^{i}(V):=\frac{V^{\otimes i}}{\langle v_{1}\otimes\ldots\otimes v_{i}-\mathrm{sgn}(\sigma)v_{\sigma(1)}\otimes\ldots\otimes v_{\sigma(i)}\mid\sigma\in S_{i},\ v_{1},\ldots,v_{i}\in V\rangle},\]
where \(S_{i}\) denotes the symmetric group. In this manner, we define the _exterior algebra of \(V\)_ to be the graded algebra \(\Lambda(V):=\oplus_{i=0}^{n}\Lambda^{i}(V)\), where the product of two elements is given by concatenation of tensor products. One can check that \(\dim_{k}(\Lambda^{i}(V))=\binom{n}{i}\) and, therefore, that \(\dim_{k}(\Lambda(V))=2^{n}\).
One can note that \(J=\oplus_{i=1}^{n}\Lambda^{i}(V)\) is the unique maximal ideal of \(\Lambda(V)\) - which coincides with its Jacobson radical - so that \(\Lambda(V)\) is a local algebra. Dually, we see that \(I_{0}=\Lambda^{n}(V)\) is the unique minimal ideal of \(\Lambda(V)\), since it is a one-dimensional ideal and, for every \(0\neq a\in\Lambda(V)\), there exists some \(b\in\Lambda(V)\)
such that \(0\neq ab\in\Lambda^{n}(V)\). In this manner, any linear functional \(\lambda\colon\Lambda(V)\to k\) such that \(\lambda(I_{0})\neq 0\) satisfies the following property: for every ideal \(I\neq 0\) of \(\Lambda(V)\), we have \(\ker(\lambda)\not\supseteq I\). The existence of such a \(\lambda\) is equivalent to saying that the exterior algebra is Frobenius, see [1, 3.15]. Using this - and the fact that, since \(J\neq 0\), it cannot be semisimple - we conclude that the global dimension of \(\Lambda(V)\) is indeed infinite.
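For a minimal illustration with \(n=2\), write a general element of \(\Lambda(V)\) in the basis \(\{1,e_{1},e_{2},e_{1}\wedge e_{2}\}\); the functional reading off the top-degree coefficient,
\[\lambda(\alpha+\beta_{1}e_{1}+\beta_{2}e_{2}+\gamma\,e_{1}\wedge e_{2})=\gamma,\]
satisfies \(\lambda(I_{0})\neq 0\), and since every non-zero ideal of \(\Lambda(V)\) contains the minimal ideal \(I_{0}\), no non-zero ideal is contained in \(\ker(\lambda)\).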
Now, it is possible to see that the same properties are satisfied by _quantum complete intersections_5, i.e. algebras of the form
Footnote 5: One motivation for this terminology is that these algebras are the “quantum version” of \(k[x,y]/(x^{a},y^{b})\), which are examples of complete intersections rings in the sense of Commutative Algebra. Here, the word “quantum” means that the algebra has a relation of quasi-commutativity. This meaning of “quantum” was brought to Algebra with the introduction of quantum groups during the ‘80s, see [10]. In some applications, the parameter \(q\) is interpreted as Planck’s constant.
\[A=\frac{k\langle x,y\rangle}{(x^{a},xy-qyx,y^{b})}\]
for some \(a,b\geqslant 2\) and \(0\neq q\in k\). As the ideal \(J=(x,y)\subset A\) can be seen to be the unique maximal ideal of \(A\), we conclude that \(A\) is local - and not semisimple. The fact that it is Frobenius may be retrieved from [1, p.509].
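To get a feeling for these algebras, note that the relations allow one to rewrite every monomial in the normal form \(y^{j}x^{i}\) with \(0\leqslant i<a\) and \(0\leqslant j<b\) (moving each \(x\) past each \(y\) only produces powers of \(q\)), and a standard computation shows that these monomials form a basis. Hence
\[\dim_{k}\frac{k\langle x,y\rangle}{(x^{a},\,xy-qyx,\,y^{b})}=ab,\qquad\text{e.g. the basis }\{1,\,x,\,y,\,yx\}\text{ for }a=b=2.\]
For \(q=-1\) and \(a=b=2\) (in characteristic different from \(2\)), one recovers the exterior algebra \(\Lambda(k^{2})\) discussed above.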
One of the most interesting aspects of these algebras is that the case \(a=2=b\) provided the first counterexample to Happel's property: in [1], these algebras were proven to satisfy \(\operatorname{hch.dim}(A)=2\) when \(q\) is not a root of unity. However, as proved by Y. Han [14, Proposition 5], their homology behaviour turned out to be non-pathological. They can thus be viewed as one of the main motivations for adapting Happel's question into the proposition of Han's conjecture. With this in mind, the article of Bergh & Erdmann [1] may be viewed as a generalization in two directions: on one hand, they showed that Han's property remains valid for arbitrary \(a\) and \(b\); on the other, that the cohomological dimension is still equal to \(2\) when (and precisely when) \(q\) is not a root of unity.
Two years later, a class of algebras generalizing quantum complete intersections was also proved to satisfy Han's property by showing that its Hochschild homological dimension is infinite. This class is composed of finitely generated algebras of the form
\[A=\frac{k\langle x_{1},\ldots,x_{n}\rangle}{(f_{1},\ldots,f_{p})},\,\text{ where }f_{1}\in k[x_{1}],\ f_{i}\in(x_{2},\ldots,x_{n})\text{ for }i\geqslant 2\]
and the algebra \(B=k[x_{1}]/(f_{1})\) is assumed to be not smooth. Note that quantum complete intersections are recovered by taking \(n=2\), \(p=3\) and \(f_{1}=x^{a}\), \(f_{2}=xy-qyx\), \(f_{3}=y^{b}\). The fact that \(k[x]/(x^{a})\) (\(a\geqslant 2\)) is not smooth can be deduced from example 2.5 and theorem 4.2, or by simply noting that its global dimension is infinite.
**The examples of Bergh and Madsen:** P. Bergh and D. Madsen published two papers, in 2009 and 2017, showing Han's property for some classes of finite-dimensional algebras. The first one [1] treats three classes of graded algebras. Their proof relies on a formula of K. Igusa - relating the Euler characteristic of relative cyclic homology to the graded Cartan determinant - which forces them to add the assumption that the characteristic of the ground field is zero. In the second article [1], they prove Han's property for trivial extensions of three different classes of algebras.
Concerning the 2009 paper, we must say that, here, a finite-dimensional \(k\)-algebra \(A\) being "graded" means that it has an \(\mathbb{N}\)-grading \(A=\oplus_{i\geqslant 0}A_{i}\) and its Jacobson radical satisfies \(J(A)=\oplus_{i\geqslant 1}A_{i}\). This is called by some authors a _semisimple \(\mathbb{N}\)-grading_, or a _non-trivial \(\mathbb{N}\)-grading_. Furthermore, the subalgebra \(A_{0}\cong A/J(A)\) is assumed to be a product of copies of \(k\).
**Example 4.3**.: If \(kQ\) is a path algebra, where \(Q=(Q_{0},Q_{1})\) is a quiver with a set of vertices \(Q_{0}\) and a set of arrows \(Q_{1}\), then it has a natural grading given by the length of the paths: \(kQ=\oplus_{i\geqslant 0}kQ_{i}\), where \(kQ_{i}\) is the vector subspace generated by the paths of length \(i\). We shall also write \(R_{Q}\) to denote the ideal generated by the arrows: \(R_{Q}=\oplus_{i\geqslant 1}kQ_{i}\).
1. If \(Q\) does not have oriented cycles (i.e. \(kQ\) is finite-dimensional), then, indeed, \(J(kQ)\) coincides with \(R_{Q}\) and \(kQ_{0}\) is the sum of \(|Q_{0}|\) copies of \(k\).
2. To obtain quotients with the same properties, we can take an _admissible_ ideal \(I\subset kQ\), i.e. such that \(R_{Q}^{m}\subseteq I\subseteq R_{Q}^{2}\) for some \(m\geqslant 2\). In this manner, \(A=kQ/I\) is finite-dimensional (even if \(Q\) has cycles) and \(J(A)=R_{Q}/I\), see [1, 2]. In order to preserve the grading of \(kQ\), we must also assume that \(I\) is _homogeneous_, i.e. its generators are linear combinations of paths of the same length. Thus, \(A=kQ/I\) has a semisimple \(\mathbb{N}\)-grading induced from \(kQ\) with \(A_{0}\cong A/J(A)\cong kQ/R_{Q}\cong k^{\oplus Q_{0}}\).
3. If \(A/J(A)\) is a product of copies of \(k\), then, by Wedderburn's Splitting Theorem, we have that \(A=A/J(A)\oplus J(A)\). In this manner, \(A_{0}=A/J(A)\), \(A_{1}=J(A)\) and \(A_{i}=0\) for \(i\geqslant 2\) provides us with a grading as required above if, and only if, \(A\) is a radical square-zero algebra (i.e. \(J(A)^{2}=0\)).
With this in mind, we will make some comments about two of the three classes considered in the article. For the first one, since the authors already present its definition, we solely mention that the notion of \(N\)-Koszul algebras (where \(N\geqslant 2\) is an integer) is a direct generalization of the characterization of Koszul algebras given in [1, Prop. 2.1.3]. The ordinary case is retrieved when \(N=2\).
The second class of examples is given by quotients \(A=kQ/I\) where \(I\) is an admissible homogeneous ideal and \(Q\) is a quiver with some loop (i.e. an arrow which starts and ends at the same vertex). The fact that such algebras always have infinite global dimension is an instance of what is known as the "no loops conjecture". In [11, 4.4, 4.5, 5.5], K. Igusa proved the conjecture for any admissible quotient of a quiver algebra and for every algebra over an algebraically closed field6. In this manner, Han's property is, again, proved by showing that the Hochschild homological dimension of these algebras is infinite.
Footnote 6: Taking into consideration Gabriel’s construction of the quiver of an algebra \(A\) (over an algebraically closed field), we say that it has a loop if \(\operatorname{Ext}_{A}^{1}(S,S)\neq 0\) for some simple module \(S\), see [1, 4.1.6] or [1, section II.3]. Noticeably, if \(A\) is a path algebra, this is equivalent to saying that its quiver contains a loop.
Restricting to local algebras, this gives us the following immediate consequence, which is much stronger than Han's property:
**Corollary 4.4**.: _Assume that \(\operatorname{char}(k)=0\) and \(A=kQ/I\) is local, where \(I\) is an admissible homogeneous ideal. If \(\operatorname{hh.dim}(A)\) is finite, then \(A\cong k\)._
Proof.: Since \(A\) is local, it follows that \(0\) and \(1\) are its only idempotents, cf. [1, 19.2]. Thus, \(Q\) has only one vertex. By the above, in order for \(\operatorname{hh.dim}(A)\) to be finite, \(Q\) cannot have loops. Therefore, \(Q\) has no arrows at all, and \(A\cong k\).
Now, let us focus our attention on Bergh and Madsen's second article. It is concerned with the _trivial extension_ of a finite-dimensional algebra \(A\) by its dual \(D(A):=\operatorname{Hom}_{k}(A,k)\), considered as an \(A\)-bimodule. This algebra is denoted by \(T(A)=A\ltimes D(A)\); its underlying vector space is \(A\oplus D(A)\) and its multiplication is given by
\[(a,f)\cdot(b,g)=(ab,ag+fb),\;\;a,b\in A,\,f,g\in D(A).\]
These algebras receive the word "trivial" in their name because they correspond to the zero element of the cohomology group \(HH^{2}(A,D(A))\), as can be seen in [11, p.312].
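Concretely, the multiplication rule above shows that \(D(A)\) sits inside \(T(A)\) as a square-zero ideal:
\[(0,f)\cdot(0,g)=(0\cdot 0,\;0\cdot g+f\cdot 0)=(0,0),\qquad f,g\in D(A),\]
so \(0\to D(A)\to T(A)\to A\to 0\) is a square-zero extension of \(A\) by \(D(A)\); such extensions are classified by \(HH^{2}(A,D(A))\), and \(T(A)\) is the one corresponding to the zero class, whence the name.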
One notable feature of these trivial extensions is that they are symmetric algebras [1, 16.62] with Jacobson radical given by \(J(A)\oplus D(A)\). Thus, they are non-semisimple self-injective algebras, so that \(\operatorname{gldim}(T(A))=\infty\) for every \(A\neq 0\). Therefore, once again it must be shown that their Hochschild homological dimensions are infinite.
Now, we sketch some ideas of the proof. The authors start by giving a presentation of \(T(A)\) as an admissible quotient of a path algebra - for which it was necessary to assume the field to be algebraically closed. Using a criterion proved in a joint work with Y. Han [1, Theorem 3.1] - namely, that \(kQ/I\) (\(I\) admissible) has infinite Hochschild homological dimension whenever it has a 2-truncated cycle - they established Han's property for \(T(A)\) when \(A\) is either local or self-injective. At the end of the article, the property for \(T(A)\) was proved when \(A\) is graded, by utilizing techniques from the 2009 article - in terms of the graded Cartan determinant.
**Weyl algebras:** In comparison to the examples above, this class is rather exceptional. The \(n\)th Weyl algebra \(A_{n}(k)\) (over a field \(k\)) is a certain infinite-dimensional noetherian noncommutative algebra. Its properties depend considerably on the chosen field: for example, in characteristic zero, \(A_{n}(k)\) is a simple domain, but this is no longer true over fields of positive characteristic. The global dimension can also measure this kind of difference (see [10] and [14]):
\[\operatorname{gldim}(A_{n}(k))=\begin{cases}n,\text{ if }\operatorname{char}(k)=0\\ 2n,\text{ if }\operatorname{char}(k)>0\end{cases}.\]
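For the reader's convenience, we recall the standard presentation of these algebras (the cited papers may use different conventions):
\[A_{1}(k)=\frac{k\langle x,\partial\rangle}{(\partial x-x\partial-1)},\qquad A_{n}(k)\cong A_{1}(k)^{\otimes n},\]
which, in characteristic zero, is the algebra of polynomial differential operators on \(k[x_{1},\ldots,x_{n}]\).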
In [15], assuming \(\operatorname{char}(k)=0\), the authors proved Han's and Happel's properties for the quantum case of a class that generalizes the first Weyl algebra \(A_{1}(k)\) - the ordinary non-quantum case having been treated ten years prior [17]. More explicitly, the Hochschild (co)homology of these algebras was computed, and a criterion determining when their global dimension is finite was given. Summing up, they proved that
\[\operatorname{gldim}(A)<\infty\iff\operatorname{hh.dim}(A)\leqslant 2\iff \operatorname{hch.dim}(A)\leqslant 2.\]
When this is the case, it was also shown that most of them satisfy \(\operatorname{gldim}(A)=2\).
### 4.2 Preservation of Han's property by Extensions
Recently, some authors have contributed to the understanding of Han's conjecture in a way distinct from the above. Having in mind, for example, a possible inductive step towards proving the conjecture, many efforts were directed at finding extensions of algebras that preserve Han's property, i.e. pairs of algebras \(B\subseteq A\) such that, if \(B\) satisfies Han's property, then \(A\) also satisfies it. We summarize these in table 3. With such results, one can construct from the previous examples many other algebras satisfying Han's property.
**Null-square algebras:** In [12], the authors analyse _null-square algebras_, which are constructed from two algebras \(A\) and \(B\), an \(A\)-\(B\)-bimodule \(N\) and a \(B\)-\(A\)-bimodule \(M\). They are of the form
\[\begin{bmatrix}A&N\\ M&B\end{bmatrix},\]
where the matrix multiplication is given by the bimodule structures of \(M\) and \(N\) and the convention \(mn=nm=0\) for all \(m\in M,n\in N\). In this way, the algebra above is an extension of \(A\times B\).
* If \(N=0\), then it is called a _corner algebra_.
* If \(M\) and \(N\) are projective bimodules, we call it a _null-square projective algebra_.
Provided the field is perfect, it was proved that, if \(A\) and \(B\) are finite-dimensional algebras satisfying Han's property, then extensions of both types above also satisfy Han's property. For the case of corner algebras, property 2.10 was used in order to reduce their homology to those of \(A\) and \(B\).
**Bounded and proj-bounded extensions:** An extension of algebras \(B\subseteq A\) is said to be _bounded_ if:
1. \(A/B\) has finite projective dimension as a \(B\)-bimodule;
2. \(A/B\) is a left or right projective \(B\)-module;
3. \(A/B\) is tensor-nilpotent over \(B\), i.e. \((A/B)^{\otimes_{B}n}=0\) for some \(n\).
After a series of papers [13, 14, 15], Cibils, Lanzilotta, Marcos and Solotar proved that if we have such an extension, then
\[B\text{ satisfies Han's property }\Longleftrightarrow\text{ $A$ satisfies Han's property}\]
(without the need to assume \(A\) or \(B\) finite-dimensional). In this manner, given an algebra, we may analyse it through an easier associated algebra, which can be chosen to be either smaller or bigger7. More recently, this was generalized to "strongly proj-bounded" extensions.
Footnote 7: It may seem strange to think that a bigger algebra may be simpler but, as we have already seen, trivial extensions of self-injective algebras are known to satisfy Han’s property even though we do not have an answer for self-injective algebras themselves. Unfortunately, trivial extensions are usually not bounded.
The authors also provide some criteria to recognize whether certain extensions satisfy the last two conditions of the definition, see [14, Theorems 5.16, 5.20]. Using them, some interesting examples could be given.
**Example 4.5**.: Suppose that \(A\) is an extension of \(B=kQ/I\) (\(I\) an admissible ideal) given by adding arrows to the quiver \(Q\) and some possible relations.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Type of Extension** & **Assumption over the field** & **Reference** \\ \hline corner algebras & perfect & [12, Theorem 2.21] \\ \hline E-triangular algebras & perfect & [12, Corollary 2.22] \\ \hline null-square projective algebras & perfect & [12, Theorem 4.8] \\ \hline bounded & - & [14, Theorem 4.6] \\ \hline strongly proj-bounded & - & [13, Corollary 6.17] \\ \hline \end{tabular}
\end{table}
Table 3: _Extensions of finite-dimensional algebras which preserve Han’s property. The list is organized in chronological order of the references._
1. The case when only arrows are added - and no new relations - was treated previously in [13] and can be seen as the motivating example for the development of bounded extensions. In this case, \(A\) is isomorphic to the tensor algebra (over \(B\)) \(T_{B}(N)\) for some projective \(B\)-bimodule \(N\). Hence, this extension satisfies a property stronger than the first two conditions in the definition: \(A/B\) is projective as a \(B\)-bimodule. The last condition is also satisfied when \(A\) is finite-dimensional.
2. [13, Example 6.2] Define \(B=kQ\) for the quiver \(Q\) below and take the extension \(A=k\tilde{Q}/J\), where \(\tilde{Q}\) is given by adding the arrow \(1\xrightarrow{a}2\) in \(Q\) and \(J=\langle da-cb\rangle\). It can be proved that this extension is bounded. Since \(\tilde{Q}\) does not have oriented cycles, one of the criteria cited above guarantees that \(A/B\) is tensor-nilpotent. The fact that \(A/B\) has finite projective dimension as a \((B\otimes B^{op})\)-module follows from proposition 3.2: \[\operatorname{gldim}(B\otimes B^{op})=2\cdot\operatorname{gldim}(B)=2.\]
In order to prove their result, the authors used a so-called Jacobi-Zariski long nearly exact sequence, which relates the Hochschild homology (of the algebras \(B\) and \(A\)) with the relative Hochschild homology (of \(A\) with respect to \(B\)). When \(B\subseteq A\) is bounded, this sequence turns out to be exact (in higher degrees). This permits one to conclude that \(HH_{n}(B)\) and \(HH_{n}(A)\) are isomorphic for large enough values of \(n\), see [13, p.52]. In this way, Relative Homology - a theory introduced by G. Hochschild in 1956 [14] but still little used for associative algebras - is utilized as a fundamental tool in the proofs. Actually, the very definition of strongly proj-bounded extensions - to which we now turn our attention - is made in relative homological terms.
**Definition 4.6**.: An extension \(B\subseteq A\) is _strongly proj-bounded_ if it satisfies items 1 and 2 from the definition of bounded extensions and, in addition:
1. there exists some \(p\in\mathbb{N}\) such that \((A/B)^{\otimes_{B}n}\) is a projective \(B\)-bimodule for all \(n>p\).
2. \(A\), seen as a \(A\)-bimodule, has finite \(B\)-relative projective dimension.
Both conditions above are satisfied if \(A/B\) is tensor-nilpotent, because \(0\) is projective and, as can be seen in [13, Proposition 2.3], there is a \(B\)-relative projective resolution of \(A\) whose length is smaller than \(m\) whenever \((A/B)^{\otimes_{B}m}=0\). So this is, indeed, a generalization of the notion of bounded extensions. In [12, section 4.2], examples of strongly proj-bounded extensions of finite-dimensional algebras which are not bounded are presented. Here, we restrict ourselves to a simpler one.
**Example 4.7**.: If \(B\) is separable (e.g. \(B=k\)) and \(A=B\times B\), then \(A/B=B\) is not tensor-nilpotent. However, since \(B\otimes B^{op}\) is semisimple, we have that \(A/B\) is projective as a \(B\)-bimodule. Using that \(B\)-relative projectivity is the same as ordinary projectivity when \(B\) is semisimple, we can conclude that \(B\subset A\) is strongly proj-bounded.
## 5 Frontiers of Han's conjecture
Having said much about the results already shown towards Han's conjecture, we conclude the article with a few comments on possible future steps.
As noted in section 4.1, many of the examples which were proved to satisfy Han's property are Frobenius: group algebras, exterior algebras, quantum complete intersections, trivial extensions. Therefore, this class of algebras in general seems to be an appealing option to be analysed next. For a more restrictive approach, one could start considering symmetric algebras; for a broader setting, self-injective algebras could be studied.
In another direction, it could be interesting to investigate upper bounds for the realm of algebras satisfying Han's property (the conjecture is stated only for finite-dimensional ones). For instance, there are many algebras of finite global dimension whose Hochschild homology is not concentrated in degree zero: one can take the Weyl algebras, which were considered above, or even polynomial algebras [15, Ex. 9.1.3]:
\[\operatorname{gldim}(k[x_{1},\ldots,x_{n}])=\operatorname{hh.dim}(k[x_{1}, \ldots,x_{n}])=n\]
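The computation above is an instance of the Hochschild-Kostant-Rosenberg theorem: for a smooth finitely generated commutative algebra, Hochschild homology is given by the modules of Kähler differential forms. In the polynomial case,
\[HH_{m}(k[x_{1},\ldots,x_{n}])\cong\Omega^{m}_{k[x_{1},\ldots,x_{n}]/k}\cong\Lambda^{m}\Big(\bigoplus_{i=1}^{n}k[x_{1},\ldots,x_{n}]\,dx_{i}\Big),\]
a free module of rank \(\binom{n}{m}\), which is non-zero exactly for \(0\leqslant m\leqslant n\).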
In this manner, finite global dimension implying zero Hochschild homological dimension seems to be a behaviour really restricted to finite-dimensional algebras. That said, Han's property (and its converse) is still valid for both examples above.
In [14], a counterexample to Han's property was given by considering pseudocompact algebras, i.e. topological algebras which are given by an inverse limit of finite-dimensional algebras (each considered with the discrete topology).
**Example 5.1**.: [14, Remark 6.18] Taking the quiver with infinite vertices below
\[Q:1\longleftarrow 2\longleftarrow 3\longleftarrow\cdots,\]
and the ideal \(I=R_{Q}^{2}\) generated by the paths of length two, the pseudocompact algebra \(A=k[[Q]]/I\) satisfies \(\operatorname{gldim}(A)=\infty\) and \(\operatorname{hh.dim}(A)=0\).
As can be seen, the example above is not finitely generated, and not even noetherian. So, this motivates the search for counterexamples to Han's property (if there are any) in the following classes generalizing finite-dimensional algebras: noetherian, finitely generated, and artinian.
## 6 Acknowledgments
I wish to thank my advisor Kostiantyn Iusenko for his suggestions and support, and for stimulating me and my colleagues to exchange ideas. I also express my special thanks to two of them, Roger R. Primolan and Matheus Schmidt. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
|
2308.10566 | Self-aligned hybrid nanocavities using atomically thin materials | Two-dimensional (2D) van der Waals layered materials with intriguing
properties are increasingly being adopted in hybrid photonics. The 2D materials
are often integrated with photonic structures including cavities to enhance
light-matter coupling, providing additional control and functionality. The 2D
materials, however, need to be precisely placed on the photonic cavities.
Furthermore, the transfer of 2D materials onto the cavities could degrade the
cavity quality $(Q)$ factor. Instead of using prefabricated PhC nanocavities,
we demonstrate a novel approach to form a hybrid nanocavity by partially
covering a PhC waveguide post-fabrication with a suitably-sized 2D material
flake. We successfully fabricated such hybrid nanocavity devices with hBN,
WSe$_2$ and MoTe$_2$ flakes on silicon PhC waveguides, obtaining $Q$ factors as
high as $4.0\times10^5$. Remarkably, even mono- and few-layer flakes can
provide sufficient local refractive index modulation to induce nanocavity
formation. Since the 2D material is spatially self-aligned to the nanocavity,
we have also managed to observe cavity PL enhancement in a MoTe$_2$ hybrid
cavity device, with a cavity Purcell enhancement factor of about 15. Our
results highlight the prospect of using such 2D materials-induced PhC
nanocavities to realize a wide range of photonic components for hybrid devices
and integrated photonic circuits. | C. F. Fong, D. Yamashita, N. Fang, S. Fujii, Y. -R. Chang, T. Taniguchi, K. Watanabe, Y. K. Kato | 2023-08-21T08:50:39Z | http://arxiv.org/abs/2308.10566v1 | # Self-aligned hybrid nanocavities using atomically thin materials
###### Abstract
Two-dimensional (2D) van der Waals layered materials with intriguing properties are increasingly being adopted in hybrid photonics. The 2D materials are often integrated with photonic structures including cavities to enhance light-matter coupling, providing additional control and functionality. The 2D materials, however, need to be precisely placed on the photonic cavities. Furthermore, the transfer of 2D materials onto the cavities could degrade the cavity quality (\(Q\)) factor. Instead of using prefabricated PhC nanocavities, we demonstrate a novel approach to form a hybrid nanocavity by partially covering a PhC waveguide post-fabrication with a suitably-sized 2D material flake. We successfully fabricated such hybrid nanocavity devices with hBN, WSe\({}_{2}\) and MoTe\({}_{2}\) flakes on silicon PhC waveguides, obtaining \(Q\) factors as high as \(4.0\times 10^{5}\). Remarkably, even mono- and few-layer flakes can provide sufficient local refractive index modulation to induce nanocavity formation. Since the 2D material is spatially self-aligned to the nanocavity, we have also managed to observe cavity PL enhancement in a MoTe\({}_{2}\) hybrid cavity device, with a cavity Purcell enhancement factor of about 15. Our results highlight the prospect of using such 2D material-induced PhC nanocavities to realize a wide range of photonic components for hybrid devices and integrated photonic circuits.
+
Footnote †: Corresponding author: [email protected]
## I Main
Two-dimensional (2D) van der Waals layered materials such as graphene, hexagonal boron nitride (hBN) and transition metal dichalcogenides (TMDC) are garnering significant attention for both fundamental science and device applications [1; 2; 3; 4]. In particular, semiconducting TMDCs have a direct bandgap at monolayer thickness [5; 6]. They exhibit bright optical emission which is governed by their exciton (Coulomb-bound electron-hole pair) responses even up to room temperature due to the large exciton binding energies of hundreds of meV [7]. Many 2D materials exhibit large optical nonlinearities [8], which could also be intricately linked to the valley polarization [9; 10]. The amenability to strain and defect engineering also makes 2D materials promising for the creation of emitters for quantum light sources [11; 3; 12]. 2D materials can also be stacked to achieve new functionalities or physical phenomena. For example, mono- and few-layer TMDCs are often used together with hBN as the insulating layer and with graphene for gate-tunable functionalities [13; 14]. In addition, stacks of 2D material layers could give rise to proximity effects [15; 16; 17], as well as moiré-related behaviour [18; 19; 20; 21].
The integration of two-dimensional (2D) materials with nanophotonic architectures [22; 23; 24; 25; 26] offers a promising avenue for tailoring the dielectric environment and local optical density of states that govern the interactions of 2D excitons with light. This strategy enables the manipulation of light-matter coupling, thereby facilitating the realization of practical hybrid devices such as lasers, nonlinear sources, quantum emitters and photodetectors. To achieve optimal device performance, it is imperative to couple 2D materials with cavities exhibiting a high quality factor over mode volume ratio (\(Q/V_{\text{mode}}\)). Planar air-hole photonic crystal (PhC) nanocavities [27; 28; 29] promise strong light confinement in an ultrasmall mode volume, rendering them highly attractive for the aforementioned hybrid device applications.
In previous reports, 2D materials such as TMDC mono- and few-layer flakes are usually transferred onto prefabricated nanocavities. The flake transfer requires precise alignment and placement relative to the cavity. Aside from absorption by the flake, the transfer of a TMDC flake changes the dielectric environment of the cavity and could induce other optical losses, which results in significant degradation of the cavity quality (\(Q\)) factor, limiting device performance [30; 31; 32; 33; 34]. This effect becomes even more severe and unpredictable with stacks of 2D materials with irregular shapes and thicknesses. Instead of being a detriment, the change in the dielectric environment caused by the 2D material can be utilized to enable alternative methods for cavity light-matter coupling.
In this work, we demonstrate the formation of hybrid PhC mode gap nanocavities by partially covering PhC waveguides post-fabrication with suitably-sized 2D material flakes. The presence of the flake modulates the local refractive index, causing a mode frequency mismatch, leading to optical confinement and thus cavity formation. The flake is spatially self-aligned to the cavity, facilitating optical coupling. We have successfully fabricated such nanocavities with various 2D materials including hexagonal boron nitride (hBN), tungsten diselenide (WSe\({}_{2}\)) and molybdenum ditelluride (MoTe\({}_{2}\)), achieving high \(Q\) factors of \(10^{4}\)-\(10^{5}\). Contrary to our initial expectations, we have discovered that even a monolayer flake could give rise to sufficient local refractive index modulation to form a hybrid nanocavity. In fact, according to simulation
results, the thinner the flake, the more moderate the refractive index modulation, and the higher the \(Q\) factor. In such hybrid systems, the monolayer represents the extreme limit of index modulation, promising high \(Q\) factors. We have further observed the self-aligned coupling of MoTe\({}_{2}\) photoluminescence (PL) in a hybrid nanocavity device, giving rise to Purcell-enhanced emission and lifetime reduction corresponding to a Purcell factor of about 15.
## II Design of Hybrid Nanocavity
For simulating our device structure, we employ the air-suspended W1 line defect PhC waveguide [35] made of silicon (refractive index, \(n_{Si}\) = 3.48), consisting of a triangular array of air holes with lattice period \(a\), with a 2D material flake covering a section of the waveguide (Fig. 1a). We typically consider a PhC waveguide with 48 and 14 air holes along the \(\Gamma\)-\(K\) and \(\Gamma\)-\(M\) directions, respectively, with the air hole radius, \(r\) = 0.28\(a\) and \(a\) = 340 nm. The PhC slab thickness is 200 nm. The photonic band structure of the transverse-electric-like (TE-like) modes of the PhC waveguide is shown in Fig. 1b. There are two guided modes, referred to as odd and even in accordance with the symmetry of the \(E_{y}\) field distribution about the \(x\)-axis.
When a 2D material flake is present on the PhC waveguide, the local effective refractive index increases, red-shifting the frequencies of the guided modes. The frequency mismatch between the regions with and without the 2D material flake gives rise to field confinement and thus the formation of a cavity. Each guided mode has a corresponding cavity mode, with mode resonances in the near-infrared (NIR) regime. The frequencies of the hybrid nanocavity modes are usually lower than the band edge (frequency at \(k_{x}\) = 0.5) of the corresponding guided modes. The even cavity modes exhibit much higher \(Q\) factors as the modes are within the mode gap -- the frequency range between the even guided mode edge and the lower edge of the photonic band gap (green region in Fig. 1b). In this work, we will mainly focus on the even cavity modes.
To investigate the cavity mode properties, we consider a rectangular WSe\({}_{2}\) flake (refractive index, \(n_{\text{WSe}_{2}}\) = 3.95 [36]) of 10 nm thickness with a lateral width of 14\(a\), partially covering the surface of the PhC waveguide. The flake is assumed to cover the PhC structure completely along the \(y\)-direction.
Figure 1: **2D material hybrid PhC mode gap nanocavity.****a** Schematic of a device structure showing cavity formation at the location of the 2D material. **b** Photonic bands of the PhC waveguide showing the odd and even guided modes as labelled, as well as the mode gap in green and the cavity mode within. The solid (dashed) lines represent the guided modes with (without) the 2D material. **c** Electric field E\({}_{y}\) profile of the cavity mode in the \(xy\)- and \(xz\)-planes. The green shaded region represents the 2D material flake. **d** Dependence of the cavity \(Q\) factor on the thickness of WSe\({}_{2}\). (Inset) Plot of the \(Q\) factor against the number of layers.
The simulated \(E_{y}\) field amplitude profile of the fundamental even cavity mode is shown in Fig. 1c. Due to the minimal thickness of the 2D material flake, most of the field is concentrated within the slab in the region covered by the WSe\({}_{2}\) flake.
The simulated fundamental cavity mode \(Q\) factor dependence on the thickness of the WSe\({}_{2}\) flake is summarized in Fig. 1d. At a thickness of 30 nm, the cavity \(Q\) is about \(10^{4}\). As the thickness of the WSe\({}_{2}\) flake decreases, the \(Q\) factor increases, reaching a theoretical value as high as \(10^{6}\) for few-layer flakes. Simulation results indicate that not only can a monolayer form a cavity, but it also promises an ultrahigh \(Q\) factor of the order of \(10^{7}\). The atomically thin nature of the flake causes a less abrupt change of the refractive index in space, leading to a "gentle" perturbation to the fields. This in turn corresponds to having fewer field components within the leaky region in momentum space (see Supplementary Fig. S1), indicating less loss via coupling to free space, and thus results in ultrahigh \(Q\) factors.
Other properties such as the mode profile remain similar over the range of thicknesses considered in our simulations. The cavity mode wavelength increases with flake thickness, while the cavity mode volume remains relatively constant, increasing only when the flake thickness decreases below 10 nm (Supplementary Fig. S1). These trends also apply to other 2D materials. Further details about other factors that affect the cavity \(Q\) factor, such as the flake lateral width and refractive index, are provided in Supplementary Fig. S2.
## III hBN-induced hybrid PhC nanocavity
Dielectric hBN -- which is often used to encapsulate TMDCs to prevent exposure to air -- is also a suitable 2D material to form hybrid nanocavities. In particular, high crystal quality hBN flakes with low defect densities promise high \(Q\) factors. Figure 2a shows an optical micrograph of a fabricated hBN-on-PhC waveguide device. The PhC slab thickness is 200 nm, and the waveguide consists of 48 and 14 air holes along the \(\Gamma\)-\(K\) and \(\Gamma\)-\(M\) directions, respectively. The lattice period and air hole radius are \(a\) = 360 nm and \(r\) = 0.27\(a\), respectively. For this PhC waveguide, semi-circular output couplers are included at each end of the waveguide to facilitate laser transmission measurements. The hBN flake is transferred onto the PhC waveguide post-fabrication (see Methods section for further details). The thickness of the hBN flake is estimated to be about 20 nm based on the optical contrast, and its lateral width measured along the waveguide is about 12\(a\).
By optically exciting the silicon PhC device above the bandgap, the weak emission from the silicon substrate can couple to the guided and/or cavity modes, manifesting as peaks in the PL spectra. For this particular sample, emission peaks appear near the frequency of the even guided mode edge (Fig. 2b). By performing 2D PL imaging, we confirm that the emission peaks only appear when exciting at the hBN flake along the waveguide, further indicating the formation of a cavity (Fig. 2c). The polarization properties of the peak are broadly consistent with expectations in accordance with the mode profile (see Supplementary Fig. S3). However, the actual linewidth of the peaks, and thus the \(Q\) factors, cannot be determined using the spectrometer due to its insufficient resolution.
We then carried out transmission measurements by exciting at the hBN flake and detecting the scattered light from the outcoupler to the right of the waveguide. A few features can be seen in the transmission spectrum (Fig. 2d): the guided mode edge at about 1330.00 nm, a relatively broad peak centered at 1330.34 nm, and a narrow peak at 1330.61 nm. The broad peak has a linewidth of 60.2 pm, corresponding to \(Q=2.2\times 10^{4}\), while the narrow peak has a linewidth of 3.54 pm, giving \(Q=3.8\times 10^{5}\). Given the relative positions and \(Q\) factors of the peaks, the narrow and broad peaks should correspond to the fundamental and first higher-order cavity modes, respectively.
The extracted \(Q\) factors from the resonant peaks of the transmission spectra are so-called loaded \(Q\) factors (\(Q_{\mathrm{l}}\)), which combine the intrinsic cavity \(Q\) (\(Q_{\mathrm{i}}\)) and the cavity-waveguide coupling \(Q\) (\(Q_{\mathrm{c}}\)) [37]: \(\frac{1}{Q_{\mathrm{l}}}=\frac{1}{Q_{\mathrm{i}}}+\frac{1}{Q_{\mathrm{c}}}\). For this hBN device,
Figure 2: **hBN-induced hybrid nanocavity.****a** Optical micrograph of an hBN hybrid nanocavity device. The scale bar represents 5 \(\upmu\)m. The “x” symbol marks the location of the PL measurement in (b). The dotted box outlines the area of the 2D imaging shown in (c). **b** PL spectrum showing signatures of cavity emission. **c** 2D image of the integrated PL intensity over the wavelength range of 1325–1336 nm showing localized cavity emission at the hBN flake (the flake is indicated by the darkened region). **d** Transmission spectra showing the cavity peak. (Inset) A closer look at the narrow peak fitted to a Lorentzian function (magenta line).
by changing the transmission measurement configuration to excite at the left output coupler and detecting light from the right output coupler, we are able to observe the resonant dip that corresponds to the absorption of light by the cavity, allowing us to calculate \(Q_{\mathrm{i}}\) of this device to be about 4.5\(\times\)10\({}^{5}\) (Supplementary Fig. S4).
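As a consistency check, the loaded-\(Q\) relation can be inverted to estimate the cavity-waveguide coupling \(Q_{\mathrm{c}}\) from the quoted values (the resulting \(Q_{\mathrm{c}}\) is an inference for illustration, not a measured quantity):

```python
lam0, fwhm = 1330.61, 3.54e-3        # resonance wavelength and linewidth (nm)
Q_l = lam0 / fwhm                    # loaded Q from the transmission peak, ~3.8e5
Q_i = 4.5e5                          # intrinsic Q from the resonant dip (Fig. S4)
Q_c = 1 / (1 / Q_l - 1 / Q_i)        # 1/Q_l = 1/Q_i + 1/Q_c, solved for Q_c
print(f"Q_l = {Q_l:.2g}, Q_c = {Q_c:.2g}")   # Q_l = 3.8e+05, Q_c = 2.3e+06
```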
Overall, our results show that the flake and the nanocavity are colocalized, essentially forming a self-aligned device. The enhanced Si PL emission also indicates cavity light-matter coupling.
## IV Hybrid PhC nanocavities using atomically thin WSe\({}_{2}\)
Compared to hBN, it is easier to obtain and identify mono- and few-layer flakes with TMDCs. Motivated by our simulation results, we conduct experiments using WSe\({}_{2}\) as it is relatively stable in air. We successfully fabricate seven hybrid PhC nanocavity devices using monolayer to trilayer flakes. The discussion here will focus on three exemplary devices with monolayer, bilayer and trilayer flakes (Fig. 3). Summaries of all device configurations and \(Q\) factors are presented in Fig. S10 and Supplementary Table 1.
The transmission spectrum for each device is shown in Fig. 3d-f. The cavity peaks appear at the expected wavelengths in accordance with the PhC waveguide lattice period. Due to the presence of a background transmission signal, its interference with the cavity peak can result in an asymmetric line shape. By fitting the peaks to either a Lorentzian (symmetric line shape) or a Fano resonance (asymmetric line shape), the loaded \(Q\) factors \(Q_{\mathrm{l}}\) are determined to be 8.6\(\times\)10\({}^{4}\), 1.0\(\times\)10\({}^{5}\) and 4.0\(\times\)10\({}^{5}\) for the monolayer, bilayer and trilayer devices, respectively. Only for the bilayer sample are we able to obtain the necessary transmission spectra with the cavity resonant dip and peak in the different measurement configurations to extract \(Q_{\mathrm{i}}\), which is about 3.7\(\times\)10\({}^{5}\). The cavity \(Q\) factors do not show an obvious increase with decreasing flake thickness, even after accounting for the flake lateral widths. Various loss mechanisms could affect the extracted \(Q\) factors; further descriptions of the possible mechanisms are provided in the discussion section. Our results show unambiguously that even a monolayer flake can give rise to sufficient local refractive index modulation to form a high-\(Q\) hybrid nanocavity despite being only one atom thick.
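For the peak fitting, the following is a minimal sketch of the two line shapes (the Fano parameterization is one common choice, and the synthetic trace and all numerical values are illustrative, not measured data):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(lam, A, lam0, fwhm, c0):
    """Symmetric line shape used for isolated cavity peaks."""
    return A * (fwhm / 2) ** 2 / ((lam - lam0) ** 2 + (fwhm / 2) ** 2) + c0

def fano(lam, A, lam0, fwhm, q, c0):
    """Asymmetric line shape from interference with background transmission."""
    eps = 2 * (lam - lam0) / fwhm
    return A * (q + eps) ** 2 / (1 + eps ** 2) + c0

# synthetic trace standing in for a measured spectrum (all values illustrative)
lam = np.linspace(1299.94, 1300.06, 1200)               # wavelength (nm)
rng = np.random.default_rng(0)
trace = fano(lam, 0.3, 1300.0, 3.25e-3, 2.0, 0.1)
trace += 0.01 * rng.normal(size=lam.size)

popt, _ = curve_fit(fano, lam, trace, p0=[0.2, 1300.0, 4e-3, 1.5, 0.1])
print(f"Q = {popt[1] / abs(popt[2]):.3g}")              # Q = lam0 / FWHM, ~4e5
```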
## V Cavity PL enhancement in self-aligned hybrid nanocavity with MoTe\({}_{2}\)
MoTe\({}_{2}\) is optically active in the NIR, making it an ideal TMDC to induce a nanocavity in the PhC and simultaneously couple its exciton emission to the cavity. Figure 4a shows an optical micrograph of a MoTe\({}_{2}\) hybrid nanocavity device. The flake is about 6-8 layers thick based on optical contrast and the PL peak position. The lateral width of the flake along the waveguide is about 6\(a\). Despite not fully covering the PhC along the vertical (\(y\)) direction, the flake managed to induce the formation of a cavity. Multiple peaks are visible in the transmission spectrum (Fig. 4b) with \(Q\) factors of the order of 10\({}^{3}\)-10\({}^{4}\). The lower \(Q\) factors are due to a combination of the larger flake refractive index, the small lateral width, as well as
Figure 3: **Mono- & few-layer WSe\({}_{2}\) hybrid nanocavity.****a-c** Optical micrographs of the mono-, bi- and trilayer WSe\({}_{2}\)-induced nanocavity devices. The waveguide consists of 96 and 14 air holes along the \(\Gamma-K\) and \(\Gamma-M\) directions, respectively, to facilitate ease of 2D material flake transfer. The brightness and contrast of the images have been adjusted to improve the visibility of the WSe\({}_{2}\) flakes. For the mono- and trilayer devices, the PhC waveguide has \(a\) = 348 nm, while the bilayer device has \(a\) = 356 nm. The lateral widths along the waveguide of the monolayer, bilayer and trilayer flakes are 13\(a\), 36.5\(a\) and 10\(a\), respectively. Some residue may remain on the sample from the transfer process in the monolayer device; however, there is no significant effect on the transmission spectra. **d-f** The corresponding cavity peak in the transmission spectra for each of the devices. The transmission measurements are performed by exciting the left output coupler and detecting at the right output coupler for the mono- and trilayer devices. For the bilayer device, the laser excitation is focused on the sample and the transmitted signal is detected at the right output coupler. The peak in (d) is fitted with a Lorentzian function, while the peaks in (e) and (f) are fitted with a Fano resonance. The scale bars in (a-c) represent 5 \(\mu\)m.
the absorption by the flake. Modelling a structure with properties similar to the actual device in FDTD and FEM simulations, we confirmed that the formation of a cavity that can support multiple modes is indeed possible (Supplementary Fig. S5). In fact, a local refractive index modulation of a small area adjacent to the waveguide is sufficient to induce strong optical confinement (Supplementary Fig. S6).
A broad background from the MoTe\({}_{2}\) exciton emission can be seen in the PL spectrum (Fig. 4c). Peaks close to 1200 nm correspond to the Fabry-Perot-like emission arising from the odd guided mode, while the narrow peaks at around 1310 nm correspond to the cavity modes. The bright intensity of these peaks indicates that the MoTe\({}_{2}\) emission is indeed coupled to and enhanced by the cavity. We further perform time-resolved PL measurements. A comparison of the decay curves of the PL from the cavity and bare MoTe\({}_{2}\) (positions 1 & 2, respectively, in Fig. 4a) is shown in Fig. 4d. From the decay curves, the bare MoTe\({}_{2}\) PL lifetime is extracted to be 46.1\(\pm\)0.3 ps, whereas the cavity emission lifetime is 3.0\(\pm\)0.6 ps. The cavity decay curve is largely similar to that of the instrument response function (IRF), suggesting that the actual lifetime is shorter than the extracted value. Nonetheless, the PL lifetimes indicate a Purcell enhancement factor, \(F_{\text{P}}\), of 15\(\pm\)3. Since the emitter linewidth is broader than the cavity linewidth, we employ the following equation to estimate the theoretical Purcell enhancement factor [38]: \(F_{\text{P}}=1+\frac{3}{4\pi^{2}}\left(\frac{\lambda_{\text{cav}}}{n}\right)^{3}\frac{Q_{\text{emitter}}}{V_{\text{mode}}}\frac{|E(r)|^{2}}{|E(r)|^{2}_{\text{max}}}\), where \(\lambda_{\text{cav}}\) is the cavity wavelength, \(n\) is the refractive index, \(Q_{\text{emitter}}\) is the emitter \(Q\) factor, \(V_{\text{mode}}\) is the mode volume and \(\frac{|E(r)|^{2}}{|E(r)|^{2}_{\text{max}}}\) is the ratio of the field intensity at the location of the emitter to the maximum field intensity of the mode. Since the MoTe\({}_{2}\) emitter is in air, \(n\) is taken to be 1. From FEM simulations of a device with a flake of similar shape and thickness, \(V_{\text{mode}}=0.04(\lambda_{\text{cav}}/n)^{3}\) and \(\frac{|E(r)|^{2}}{|E(r)|^{2}_{\text{max}}}=0.49\). Based on the linewidth of the MoTe\({}_{2}\) PL emission, \(Q_{\text{emitter}}=9\), resulting in a theoretical \(F_{\text{P}}\) of 9, close to the experimental \(F_{\text{P}}\). Our results unambiguously show that the spatial self-alignment of the flake to the nanocavity facilitates light-matter coupling.
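The estimate can be checked directly; a short sketch plugging in the values quoted above:

```python
import math

lam_cav = 1310e-9                      # cavity wavelength (m)
n = 1.0                                # emitter taken to be in air
Q_emitter = 9                          # from the MoTe2 PL linewidth
V_mode = 0.04 * (lam_cav / n) ** 3     # FEM mode volume
field_ratio = 0.49                     # |E(r)|^2 / |E|^2_max at the emitter

F_P = 1 + 3 / (4 * math.pi ** 2) * (lam_cav / n) ** 3 / V_mode \
    * Q_emitter * field_ratio
print(f"F_P = {F_P:.1f}")              # ~9.4, consistent with the measured 15 +/- 3
```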
## VI Discussion
We have successfully fabricated hybrid nanocavities using hBN, WSe\({}_{2}\) and MoTe\({}_{2}\) which exhibit a range of \(Q\) factor values. It is worth reiterating that the extracted \(Q\) factors are loaded \(Q\) factors \(Q_{\mathrm{l}}\), which are limited by either \(Q_{\mathrm{c}}\) or \(Q_{\mathrm{i}}\). The \(Q_{\mathrm{i}}\) is in turn determined only by the coupling losses to free space and comprises the theoretical (\(Q_{\mathrm{th}}\)) and experimental (\(Q_{\mathrm{ex}}\)) losses: \(\frac{1}{Q_{\mathrm{i}}}=\frac{1}{Q_{\mathrm{th}}}+\frac{1}{Q_{\mathrm{ex}}}\). The experimental \(Q\) factors of the devices are lower than the theoretical values due to fluctuations in the
Figure 4: **Purcell enhancement in MoTe\({}_{2}\) hybrid nanocavity.****a** Optical micrograph of a MoTe\({}_{2}\) nanocavity device. The scale bar indicates 5 \(\mu\)m. **b** Transmission spectra obtained by exciting at the left and detecting at the right output coupler, showing the cavity peaks. The green curves are the individual Lorentzian fits, and the magenta curve is the cumulative fit. **c** PL spectrum showing the emission from the MoTe\({}_{2}\) and cavity. (Inset) A zoom-in of the region with the cavity emission peak. **d** Time-resolved PL decay curves of the emission from the cavity and the bare MoTe\({}_{2}\) flake (positions 1 & 2 in (a), respectively) showing the reduced lifetime due to the cavity Purcell effect. The cavity and the bare MoTe\({}_{2}\) emission decay curves are fitted using single and double exponential reconvolution functions, respectively. The grey shaded region shows the IRF.
fabricated structure parameters, for example, non-uniformity of the air hole radii and lattice period, leading to scattering losses. The broken vertical symmetry of the device could also contribute to loss via the coupling of transverse electric and transverse magnetic modes [39; 40]. The irregular shape of the flake could also lead to losses as more of the flake edges overlap with the air holes [41]. Wrinkles, bubbles, and contaminants could also lead to air gaps between the 2D material flake and the PhC waveguide, affecting the cavity \(Q\). Nonetheless, the flake surfaces are mostly smooth and the interface between the flake and the PhC substrate is clean, especially for the WSe\({}_{2}\) devices which are fabricated using the anthracene-assisted transfer method [42]. As such, the surface condition of the flake is likely not the main factor that affects the \(Q\) factor. From the scanning electron micrograph of the fabricated PhC waveguide (Supplementary Fig. S7), the PhC air holes show imperfect roundness with slight ellipticity, which we believe to be the main cause limiting the experimental \(Q\) factors.
Despite these issues, high \(Q\) factors can be achieved in such 2D material-induced hybrid PhC nanocavities, as demonstrated by our devices. The high \(Q/V_{\text{mode}}\) of our hybrid nanocavity devices also enabled the observation of nonlinear effects such as optical bistability at low excitation powers (Supplementary note 2 and Fig. S8). By comparing the experimentally observed guided mode redshift caused by the 2D materials with the change in the slab refractive index of a bare PhC waveguide required to produce the same redshift in FDTD simulations, we estimate that a monolayer flake results in a 0.1-0.2% change in the local refractive index. This is consistent with previous reports which suggested that ultrahigh \(Q\) PhC mode gap cavities could be formed in PhC waveguides with a local refractive index modulation as small as 0.1% [35; 43]. Furthermore, the devices are stable, with the cavities showing no significant degradation even after being stored in ambient conditions for periods ranging from several months to more than a year (see Supplementary Fig. S9).
It is interesting to note that despite the 20 nm thickness of the hBN flake, the obtained experimental \(Q\) is an order of magnitude higher than the theoretical \(Q\) of a device with an equally thick WSe\({}_{2}\) flake (Fig. 1d). The refractive index of hBN is taken to be 2.2, smaller than that of the silicon PhC slab. In addition to modulating the local refractive index, the interface between the hBN flake and the PhC waveguide facilitates total internal reflection, resulting in good optical confinement and thus a high \(Q\) factor. In contrast to hBN, TMDCs have refractive indices comparable to or larger than that of silicon and thus do not facilitate total internal reflection at the flake/waveguide interface. Nonetheless, obtaining mono- and few-layer flakes of TMDCs is relatively easy. This advantage helps overcome the larger refractive indices of TMDCs and enables the creation of high-\(Q\) hybrid nanocavities.
In conclusion, we have demonstrated experimentally that a 2D material flake can induce a high-\(Q\) hybrid nanocavity in a PhC waveguide post-fabrication. The cavity can be formed at an arbitrary location along the waveguide and readily couples to the 2D material flake, forming a self-aligned device. It is worth emphasizing that even a monolayer flake is capable of forming a hybrid nanocavity, enabling the combination of the exceptional optical properties of monolayers with the flexibility of optical engineering offered by the PhC substrate. Our versatile approach to forming hybrid PhC nanocavities and self-aligned devices can be extended to encompass a wide variety of 2D materials, allowing the creation of devices with diverse functionalities. By transferring flakes of suitable 2D materials, passive PhC waveguides can be transformed post-fabrication into active devices such as lasers, modulators, and detectors, facilitating the development of hybrid integrated photonic circuits.
## VII Methods
_Numerical simulations._ The photonic band structure is calculated using MIT Photonic Bands (MPB) [44], considering a unit cell of the PhC waveguide with periodic boundary conditions in the \(x\)- and \(y\)-directions. The finite-difference time-domain (FDTD) simulations are performed using the open-source package MEEP [45] on a computing cluster. The grid resolution is set to \(a\)/24 or higher, depending on the thickness of the simulated 2D materials. With subpixel averaging, we are able to simulate 2D materials with thicknesses down to 5 nm and obtain reliable results. Numerical finite element method (FEM) simulations are carried out with COMSOL to simulate PhC structures with mono- to few-layer thick 2D materials. The simulated PhC waveguide consisting of 48 (14) air holes along the \(\Gamma-K\) (\(\Gamma-M\)) direction is sufficiently large such that any further increase in size does not change the \(Q\) factor, i.e., the \(Q\) factor is limited only by out-of-plane losses. The type of simulated 2D material is controlled by setting the value of the dielectric constant, obtained from Laturia _et al._[36]. We assume a monolayer effective thickness of 0.7 nm in our simulations. Silicon has minimal absorption in the NIR regime where the cavity modes lie. Unlike MoTe\({}_{2}\), hBN and WSe\({}_{2}\) are not expected to have strong absorption in the NIR, and thus we do not include absorption losses in the simulations. We also performed simulations using the isotropic or anisotropic dielectric constants and found no significant difference in the results.
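A minimal sketch of the corresponding MPB band calculation (the W1 supercell height, resolution, and number of bands are illustrative assumptions; `run_zeven` selects the TE-like, \(z\)-even parity):

```python
import math
import meep as mp
from meep import mpb

a = 1.0                          # lattice period (a = 340 nm)
r, h = 0.28, 200 / 340           # hole radius and slab thickness in units of a

lattice = mp.Lattice(size=mp.Vector3(1, 8 * math.sqrt(3), 4))   # W1 supercell
geometry = [mp.Block(size=mp.Vector3(mp.inf, mp.inf, h),
                     material=mp.Medium(index=3.48))]
for j in range(1, 8):            # triangular hole array; central row omitted (W1)
    x, y = 0.5 * (j % 2), j * math.sqrt(3) / 2
    for s in (+1, -1):
        geometry.append(mp.Cylinder(radius=r, height=h, material=mp.air,
                                    center=mp.Vector3(x, s * y)))

ms = mpb.ModeSolver(geometry_lattice=lattice,
                    geometry=geometry,
                    k_points=mp.interpolate(16, [mp.Vector3(0.3), mp.Vector3(0.5)]),
                    resolution=16,
                    num_bands=40)
ms.run_zeven()                   # TE-like (z-even) bands along Gamma-K
print(ms.all_freqs)              # band frequencies in units of c/a
```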
_Silicon PhC waveguide fabrication._ The PhC waveguides are fabricated on a silicon-on-insulator substrate with a 200-nm-thick top silicon layer and a 1-\(\mu\)m-thick buried oxide layer. The PhC pattern is first defined in a resist mask by electron beam lithography, and the pattern is then transferred onto the substrate via inductively coupled plasma etching using C\({}_{4}\)F\({}_{8}\) and SF\({}_{6}\) gases. Following resist removal, the buried oxide layer is etched away using a solution of 20% hydrofluoric acid to form air-suspended PhC waveguide structures.
_2D material dry transfer._ The hBN flakes (NIMS) are prepared on a polydimethylsiloxane (PDMS) sheet (Gelfilm by Gelpak) by mechanical exfoliation of bulk crystals. Suitable flakes are identified using an optical microscope and then transferred onto the target PhC waveguide using a homebuilt micromanipulator setup. MoTe\({}_{2}\) flakes (HQ Graphene) are prepared and transferred using the same method.
WSe\({}_{2}\) flakes (HQ Graphene) are prepared on commercially available 90-nm-thick SiO\({}_{2}\)/Si substrates via mechanical exfoliation to enable the identification of the layer number via optical contrast. The WSe\({}_{2}\) flakes are then placed on the PhC waveguides using the anthracene-assisted transfer process [42]. To grow the anthracene crystals, anthracene powder is heated to about 80\({}^{\circ}\)C. The sublimated anthracene vapor then recrystallizes on the bottom surface of a glass slide placed about 1 mm above the anthracene powder. The growth time is typically 10 h. A small PDMS sheet is then placed on a glass slide, followed by an anthracene crystal on the PDMS to form an anthracene/PDMS stamp. Next, this stamp is used to pick up the WSe\({}_{2}\) flake. The WSe\({}_{2}\) flake and the anthracene crystal are then transferred together onto the target PhC waveguide. Finally, the anthracene crystal is heated to about 80\({}^{\circ}\)C or left in ambient conditions to sublime, leaving behind clean flakes.
_Optical spectroscopy._ PL measurements are performed with a homebuilt confocal microscopy system. A Ti:sapphire laser (Spectra Physics 3900S) is used for excitation, usually at a wavelength of 760\(-\)780 nm. The excitation power is controlled using neutral density filters, while the polarization of the laser is adjusted using a half-wave plate to match the polarization of the cavity mode. The laser beam is focused on the samples using an objective lens (Olympus) of 50\(\times\) magnification with a numerical aperture (NA) of 0.65. The emission from the sample is collected with the same objective lens, directed to a spectrometer (Princeton Instruments Acton SP2300), dispersed by a 150 lines/mm grating, and then detected with a liquid nitrogen-cooled InGaAs detector (Princeton Instruments PyLoN IR).
For transmission measurements, a wavelength tunable continuous-wave laser (Santec TSL-550) is used. A steering mirror and a 4\(f\) system are used to displace the laser excitation spot while keeping the same detection spot. The light scattered from the sample is collected by the objective lens and coupled into an optical fiber to direct the signal to a photoreceiver (New Focus 2011).
For time-resolved measurements, a Ti:sapphire laser (Coherent MIRA) operating in pulsed mode is used for excitation. The excitation wavelength is set to 780 nm. The laser beam is focused on the sample with an objective lens (Olympus) of 100\(\times\) magnification and NA of 0.85. The PL emission is collected by the same objective lens and then filtered spectrally with a long-pass filter to direct light of a specific wavelength range into an optical fiber connected to a superconducting nanowire single photon detector (Quantum Design Eos). All measurements are carried out at room temperature. The samples are kept in a nitrogen gas environment in a bid to reduce the oxidation rate of the TMDC flakes.
###### Acknowledgements.
This work is supported in part by JSPS KAKENHI (JP22K14623, JP20H02558, JP20J00817, JP22K14624, JP22K14625, JP22F22350 and JP23H00262) and ARIM of MEXT (JPMXP1222UT1138), as well as the RIKEN Incentive Research Projects. K.W. and T.T. acknowledge support from the JSPS KAKENHI (Grant Numbers JP21H05233 and JP23H02052) and World Premier International Research Center Initiative (WPI), MEXT, Japan. C.F.F. is supported by the RIKEN SPDR fellowship. Y.-R.C. is supported by the JSPS Postdoctoral Fellowship. We acknowledge support by the RIKEN Information Systems Division for the use of the Supercomputer HOKUSAI BigWaterfall and SailingShip, as well as the use of the COMSOL license.
## Author contributions
C.F.F conceived the idea and performed the numerical simulations. C.F.F fabricated the PhC waveguide under the guidance of D.Y. C.F.F carried out the 2D materials transfer with assistance from N.F and Y.-R.C. C.F.F performed the optical measurements with some assistance from D.Y and S.F. K.W and T.T provided the bulk hBN crystals. C.F.F analyzed the data and wrote the manuscript. All authors contributed to the discussion of the results and the manuscript. Y.K.K supervised the project.
|
2301.04018 | Origin of heterogeneous stripping of lithium in liquid electrolytes | Lithium metal batteries suffer from low cycle life. During discharge, parts
of the lithium are not stripped reversibly and remain isolated from the current
collector. This isolated lithium is trapped in the insulating remaining
solid-electrolyte interphase (SEI) shell and contributes to the capacity loss.
However, a fundamental understanding of why isolated lithium forms and how it
can be mitigated is lacking. In this article, we perform a combined theoretical
and experimental study to understand isolated lithium formation during
stripping. We derive a thermodynamic consistent model of lithium dissolution
and find that the interaction between lithium and SEI leads to locally
preferred stripping and isolated lithium formation. Based on a cryogenic
transmission electron microscopy (TEM) setup, we reveal that these local
effects are particularly pronounced at kinks of lithium whiskers. We find that
lithium stripping can be heterogeneous both on a nanoscale and on a larger
scale. Cryo TEM observations confirm our theoretical prediction that isolated
lithium occurs less at higher stripping current densities. The origin of
isolated lithium lies in local effects, such as heterogeneous SEI, stress
fields, or the geometric shape of the deposits. We conclude that in order to
mitigate isolated lithium, a uniform lithium morphology during plating and a
homogeneous SEI is indispensable. | Martin Werres, Yaobin Xu, Hao Jia, Chongmin Wang, Wu Xu, Arnulf Latz, Birger Horstmann | 2023-01-10T15:06:14Z | http://arxiv.org/abs/2301.04018v1 | # Origin of heterogeneous stripping of lithium in liquid electrolytes
###### Abstract
Lithium metal batteries suffer from low cycle life. During discharge, parts of the lithium are not stripped reversibly and remain isolated from the current collector. This isolated lithium is trapped in the insulating remaining solid-electrolyte interphase (SEI) shell and contributes to the capacity loss. However, a fundamental understanding of why isolated lithium forms and how it can be mitigated is lacking. In this article, we perform a combined theoretical and experimental study to understand isolated lithium formation during stripping. We derive a thermodynamically consistent model of lithium dissolution and find that the interaction between lithium and SEI leads to locally preferred stripping and isolated lithium formation. Based on a cryogenic transmission electron microscopy (TEM) setup, we reveal that these local effects are particularly pronounced at kinks of lithium whiskers. We find that lithium stripping can be heterogeneous both on a nanoscale and on a larger scale. Cryo TEM observations confirm our theoretical prediction that isolated lithium occurs less at higher stripping current densities. The origin of isolated lithium lies in local effects, such as heterogeneous SEI, stress fields, or the geometric shape of the deposits. We conclude that in order to mitigate isolated lithium, a uniform lithium morphology during plating and a homogeneous SEI is indispensable.
Lithium metal, multi-scale model, SEI on lithium, isolated lithium, cryo-TEM, electrochemical dissolution
## Introduction
Lithium metal anodes paired with liquid electrolytes have regained much attention in the search for next-generation high-energy-density batteries [1, 2, 3, 4, 5, 6]. Despite early commercialization attempts until the late 1980s, safety concerns and low durability hinder the successful use of lithium metal anodes [7]. Recently, there has been much progress in monitoring the lithium metal structure during cycling, and there is consensus that controlling the metal surface during cycling is key to successfully deploying lithium metal anodes [8, 9, 10, 11]. The low durability originates from the continuous growth of solid-electrolyte interphase (SEI) [1] and the formation of isolated lithium, which is electronically disconnected from the current collector [12]. Both effects lead to capacity loss and are enhanced with increasingly irregular, non-planar, and high-surface-area structures of the lithium anode. Experiments have found that the non-planar structure arises from whiskers, growing with no apparent growth direction during charging [13, 14, 15, 16, 17]. Lithium whiskers, often with small diameters, entangle each other to form mossy lithium [14]. The growth process is not limited by electrolyte transport at battery-typical current densities [18, 19, 20, 21]. During discharging, portions of lithium remain isolated in the SEI shell [22, 23, 12]. This isolated lithium can accumulate at the electrode, increasing the cell resistance because the ion path becomes tortuous [24, 25, 26], or float in the electrolyte [14] and possibly react at high voltage [27] and elevated temperatures [28]. A fundamental understanding of these nano- and microscale effects would significantly contribute to developing mitigation strategies for these structural inefficiencies.
Isolated lithium was first observed by Yoshimatsu _et al._ Scanning electron microscopy (SEM) experiments showed that particle-like structures remain at the tip of lithium "needles" after stripping [22]. Li _et al._ revealed by comparing cryo transmission electron microscopy (TEM) to room-temperature (RT) TEM that RT TEM greatly interacts with the lithium structures, and that the needles observed by RT TEM are lithium whiskers [23]. Cryo TEM is particularly useful for resolving the atomic scales of lithium whiskers and the SEI [29, 30, 31, 12, 16, 13]. The dynamics of the dissolution process of a single lithium whisker were captured by Steiger _et al._ with optical microscopy techniques [13]. It was observed that during stripping, a droplet is first disconnected at the tip, and afterwards, the root of the whisker dissolves, while a thin line connects the droplet to the anode. The thin line is most likely the hollowed-out SEI shell, but due to optical microscopy, the small structures cannot be resolved, and the pictures are governed by diffraction. Steiger _et al._ discuss the possibility that the remaining droplet is an insoluble SEI particle [13]. However, optical microscopy cannot investigate this hypothesis which requires spectroscopic analysis and high-resolution images of the residual particle.
Recently, new experimental findings were supported by theoretical works which tried to understand phenomena associated with isolated lithium formation [32, 30]. Li _et al._ observed in cryo TEM experiments that SEI nanostructure can induce notches where lithium whiskers are pinched off and isolated lithium forms. Li _et al._ tried to understand this by simulating lithium dissolution with locally enhanced ionic conductivity. A large enhancement factor of more than 1000 is needed to simulate notches. This large enhancement factor is in contradiction to experimental estimates of ionic conductivities of SEI compounds, which tend to vary only one order of magnitude [33, 34]. Tewari _et al._ found that an increasing discharge current density leads to less isolated lithium formation [32]. Tewari _et al._ tried to understand this with an atomistic kinetic Monte Carlo model, where lithium self-diffusion at the solid interface, lithium dissolution, and ionic diffusion in the electrolyte were incorporated, while neglecting effects of the SEI or lithium electromigration in the electrolyte [32]. This model reproduces the finding that an increasing discharge current density leads to less isolated lithium. However, their simulation lattice of 150x100 lattice sites, i.e., atoms, is smaller than the typical whisker diameter and the observed
thickness of dead lithium structures of approximately 100 nm, equivalent to \(\sim\)285 atoms with the lattice constant of 351 pm [16, 23, 35]. In this model, the formation of isolated lithium is a purely stochastic process, so the model cannot predict the systematic formation of isolated lithium. Thus, the model is not complete and further theoretical investigation of isolated lithium is necessary.
We investigate the lithium stripping process in a combined experimental and theoretical approach. We focus on the dissolution dynamics of lithium during the stripping of lithium metal anodes and the origin of isolated lithium. We aim to answer the following key questions: (1) Why does isolated lithium form? (2) How do the electrochemical conditions influence the formation of isolated lithium? (3) How can we mitigate the formation of isolated lithium?
On the theoretical side, we developed a generalized phase-field model for lithium stripping underneath a rigid SEI. The model comprises the interaction of lithium with the SEI and the influence of geometrical effects on the dissolution rate. Both contributions are known to influence the reaction rate [36, 37, 38, 39, 40]. With this, we study the stripping behavior of a single lithium whisker. The model, described in detail in the Methods section, reproduces the literature observations. On the experimental side, we conducted cryo TEM to investigate lithium at different stripping stages and electron energy loss spectroscopy (EELS) to get insights into the chemical species of the whisker and the covering SEI.
## 2 Results and Discussion
In order to understand the heterogeneous stripping of lithium whiskers, a thorough understanding of the structure and chemical composition of the whiskers is necessary. Thus, we first present experimental results of the chemical composition of the whisker and the covering SEI shell. The composition of the whisker is under debate, and the results will further be used to validate assumptions for our model. Second, the model is applied to predict the dissolution dynamics of a single straight lithium whisker. The results are compared to the observations of Steiger _et al._[13]. In the cryo TEM setup, the dynamics of the dissolution process cannot be captured. However, it is a powerful tool for resolving the structures. Third, we show the micro- and nanoscale observations of lithium after different stages of stripping with cryo TEM. A particular focus lies on kinked regions of the whiskers. Here, we compare the observed structures to model results of a 3D extension of the presented model. Finally, we extend our model to locally different SEI compositions and compare the results of our model to the notches observed by Li _et al._ and their model [23].
### Chemical composition of the whisker and the SEI
In the literature, the chemical composition of lithium whiskers, particularly of the tip, is still under debate [12, 13, 29, 41, 42, 43]. Steiger _et al._ suggested that the whisker tip consists of an insoluble particle [13].
For copper (Cu) whiskers occurring during electroplating, a similar phenomenon was experimentally verified [44]. There, a dirt particle induced much higher plating current densities at the tip, which led to whisker growth. However, lithium whiskers grow from the root, and different ideas for growth mechanisms are discussed
Figure 1: High-angle annular dark field (HAADF) scanning transmission electron microscopy image of the whisker and its tip with the corresponding electron energy loss spectroscopy (EELS) elemental mapping for a plating current density of 1 Am\({}^{-2}\). Red color corresponds to carbon, green to oxygen, blue to fluorine, and yellow to lithium.
in literature [14, 18, 19, 45, 46, 47].
In order to clarify the chemical composition of the whisker tip, we perform EELS elemental mapping measurements of the whisker tip. The whiskers are formed by electrochemically depositing lithium on a Cu grid. As electrolyte, we use 1.2 M LiPF\({}_{6}\) in ethylene carbonate (EC)-ethyl methyl carbonate (EMC) (3:7 by wt) with 5 wt % vinylene carbonate (VC) additive. As shown in Fig. 1, we find that the whisker tip has the same chemical composition as the root of the whisker. It can be seen that the whisker contains oxygen (O)- and carbon (C)-rich molecules and few fluorine (F)-containing molecules. We associate this with the SEI-forming molecules. The intensity is higher at the shell where only SEI is imaged. In the core, the lithium intensity is higher, suggesting that the core is elemental lithium. The thickness of the shell, where C, O, and F are higher in intensity, is roughly 20 nm. This matches our measurement of the SEI in the cryo TEM images, as shown in Fig. SI 4. Additionally, we see no strong dependence of the chemical composition of the SEI on the plating current density, as shown in Fig. SI 2. Thus, the whisker tip is elemental lithium regardless of how the whisker was formed. There are no striking irregularities in the intensity distribution of the chemical compounds. We conclude that the SEI is homogeneous.
### Droplet formation during whisker dissolution
With our model, we can simulate the galvanostatic dissolution of a single straight lithium whisker, taking the interaction between lithium and SEI into account. The SEI adheres to the lithium and lithium is stripped underneath the SEI. First, the adhesive bond breaks, influencing the local chemical potential, described by Equation 8, which in turn influences the local reaction rate, described by Equation 6. We apply our model to simulate the dissolution of a whisker at a low current density and compare our results with the recorded whisker dissolution dynamics of Steiger _et al._[13] This experiment is ideal for our first comparison because the whisker is straight with no kinks and cylindrically symmetric.
As the exchange current density determines the dynamics for a given geometry and applied current, we state the applied current density \(J\) relative to the exchange current density \(J_{0}\). As discussed in the Methods section in more detail, the exchange current density depends on the used electrolyte and the thickness of the SEI, and is estimated to be \(J_{0}=100\) A/m\({}^{2}\) based on experimental estimations [37, 48]. We define our low current density scenario by \(J=0.01J_{0}\). In this case, the simulation results are shown in
Figure 2: Comparison of simulated and observed whisker dissolution and droplet formation. (A) Snapshots of the simulation of the whisker dissolution at low current density \(J=0.01J_{0}\). The color represents the local chemical potential at the given time and location on the whisker surface. (B) Dissolution of a single lithium whisker as recorded in the experiment by Steiger et al. [13] Reprinted with the permission of Elsevier.
Fig. 2 (A). Depicted is the shape of the lithium whisker at different points in time. The three-dimensional curve describes the surface of the lithium metal at a given time and the surface color describes the local chemical potential of lithium at the surface.
The simulation predicts the nucleation of an instability just below the tip in the early stripping process, depicted in Fig. 2 (A)b). This point on the whisker surface is unique: there, the surface is concave, as opposed to everywhere else. The binding to the SEI is represented by a negative effective interfacial tension. Usually, convex surfaces dissolve preferentially, but here, with the binding to the SEI, concave surfaces are preferred. The instability can be understood as follows: When the dissolution begins, the detachment of the SEI exposes new surface area of lithium. In general, the exposed surface is minimized during the dissolution process in order to minimize the surface energy. If the dissolution is slow, lithium is stripped at preferential places, leaving most of the lithium surface attached to the SEI. This leads to lithium being electronically disconnected from the current collector at the tip of the whisker, see Fig. 2 (A)c). After this, at the tip, the remaining lithium forms a sphere, while the root of the whisker dissolves without any new instabilities, see Fig. 2 (A)d)-e). The part of the whisker which is still connected to the current collector then dissolves entirely.
The predicted dissolution behavior agrees nicely with the experimentally observed stripping of a lithium whisker by Steiger _et al._[13], see Fig. 2 (B). It was observed that below the tip, the whisker is thinned, and the tip gets disconnected from the root, as shown in Fig. 2 (B)b). Then, the root of the whisker dissolves completely, as shown in Fig. 2 (B)c)-e).
Note that for the experiment by Steiger _et al._, a stripping current density of approximately \(2\,\mu\)Acm\({}^{-2}\) was applied, estimated by dividing the total stripping current by the substrate area. The time scale of around \(500\,\)s, over which the dissolution of the single whisker was observed, hints that the local dissolution current density of the whisker deviates from the reported global \(2\,\mu\)Acm\({}^{-2}\). By approximating the whisker as a cylindrical object with a length of \(10\,\mu\)m and a diameter of \(200\,\)nm, we estimate the dissolution current density of the whisker to be in the order of
\[J=\frac{I}{\bar{A}_{\rm whisker}}=\frac{F\cdot V_{\rm whisker}/(V_{\rm M}\cdot t)}{\bar{A}_{\rm whisker}}\approx 100\,\mu\mathrm{A\,cm^{-2}}=0.01J_{0}, \tag{1}\]
with the Faraday constant \(F\), the whisker volume \(V_{\rm whisker}\), the molar volume of lithium \(V_{\rm M}\), and the average whisker surface area during dissolution \(\bar{A}_{\rm whisker}\). Our estimation deviates from the reported average current density by two orders of magnitude. With our assumption of \(J_{0}=100\,\)A/m\({}^{2}\), this estimation of the dissolution current density fits perfectly to our simulations with \(J=0.01J_{0}\).
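A sketch of this back-of-the-envelope estimate (the whisker dimensions and the \(\sim 500\,\)s dissolution time are the values read off the experiment above; approximating \(\bar{A}_{\rm whisker}\) by the full lateral area is our simplification):

```python
import math

F = 96485.0                     # C/mol, Faraday constant
V_M = 13.02e-6                  # m^3/mol, molar volume of lithium
L, radius = 10e-6, 100e-9       # assumed whisker length and radius (m)
t = 500.0                       # observed dissolution time (s)

V_whisker = math.pi * radius ** 2 * L
I = F * V_whisker / (V_M * t)               # stripping current (A)
A_lateral = 2 * math.pi * radius * L        # lateral surface area (m^2)
J = I / A_lateral                           # ~0.7 A/m^2
print(f"J = {J * 100:.0f} uA/cm^2")         # order 100 uA/cm^2, i.e. 0.01 J0
```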
Further, one can observe that in Fig. 2 (B) between a) and b) (in \(240\,\)s), and between b) and c) (\(45\,\)s), a comparable amount of lithium is stripped in different time intervals. We interpret this to imply that an onset time exists before the dissolution of the whisker starts. Our observations, discussed in the section below, support this interpretation. The underlying cause of the onset time is not understood. We suggest that heterogeneous current distribution or heterogeneous kinetic
Figure 3: Snapshots of the simulation of the whisker dissolution at high current density \(J=J_{0}\). The color represents the local chemical potential at the given time and location on the whisker surface.
barriers due to fluctuating surface properties of the individual whiskers can induce the onset time. The latter is well described in the context of our model. We predict locally varying dissolution currents depending on the local chemical potential, which in turn depends on the SEI properties and the whisker surface properties. Considering multiple whiskers with varying radii and heterogeneous SEI coverage, our local dissolution currents can lead to whiskers dissolving one at a time. We do not have an onset time in our simulation, as we consider only a single whisker. Thus, we predict the disconnection of the isolated lithium droplet after just a few seconds, as shown in Fig. 2 (A)c).
The discussed instability below the tip vanishes for higher stripping current densities, as depicted in Fig. 3. For this simulation, we choose the stripping current density to be \(J=J_{0}\). In this scenario, the local variations of interfacial tension become irrelevant and the local stripping current density variations are negligible [49, 50, 51, 52]. Therefore, the lithium-SEI bond breaking occurs homogeneously, see Fig. 3 b) and c). During stripping, the whisker is thinned until it is dissolved, as depicted in Fig. 3 d)-e). The remaining isolated lithium is hardly visible, Fig. 3 e). This agrees with the experimental observations of Tewari _et al._ of less isolated lithium formation at higher stripping current densities [32].
To validate the simulation results, we perform an additional experiment at a higher stripping current density of \(10\,\mathrm{Am}^{-2}\), depicted in the Supporting Information (SI). After discharge, the whiskers are stripped completely and only a hollowed-out SEI shell remains, see Fig. SI 3. This agrees with our theoretical predictions.
### Micro- and nanoscale observation of the stripping heterogeneity
In order to understand the stripping behavior of lithium, one has to understand the dynamics at different length scales simultaneously, which is a very challenging task. At the centimeter scale, non-uniform distribution of current density is observed [53]. In our cryo TEM setup, we can probe heterogeneous stripping on length scales ranging from several hundred microns down to a few nanometers. In order to investigate irregularities of lithium stripping on the length scale of \(100\,\mathrm{\mu m}\), we take images of the copper grid after plating and after stripping. To investigate smaller length scales, we focus on small, interesting parts of the lithium whiskers after stripping.
In the case of stripping at a low current density of \(0.1\,\mathrm{Am}^{-2}\), we observe non-uniform lithium dissolution on the microscale, as shown in Fig. 4. In the experiment, we stripped away about half of the plated lithium. In contrast to uniform stripping of the lithium whiskers, we observe areas where the lithium seems to be almost completely dissolved, while in other areas, it seems that dissolution has not started at all. From this, we conclude that larger-scale heterogeneity plays a critical role in the preferential dissolution of lithium. Local stress fields or locally different SEI compositions can cause this heterogeneity and lead to locally different overpotential and thus enhance or retard localized stripping.
As we want to investigate the lithium whiskers during and after stripping, we focus on the areas with low remaining whisker density. There, the dissolution is mostly complete, and we can investigate if isolated lithium forms. We find that preferential stripping occurs mostly at kinks. In Fig. 5 (A), we show a typical image of the observed structure at kinks. It can be seen that
Figure 4: Cryo TEM image of the copper grid a) after 100 minutes plating at \(1\,\mathrm{Am}^{-2}\) and b) after 500 minutes stripping at \(0.1\,\mathrm{Am}^{-2}\). After non-uniform stripping, there are areas with high and low remaining whisker densities.
the preferential dissolution at the kinks leads to a separation point where one part of the whisker is electronically disconnected from the other part and thus forms isolated lithium. The connection remains only through the SEI, which appears different from the SEI covering the rest of the whisker. This can be caused by two effects: first, mechanical deformation of the native SEI covering the whisker, and second, new reactions of exposed lithium with the electrolyte. In order to understand the cryo TEM image, we performed EELS elemental mapping to analyze the SEI composition, see Fig. 5 (B). We identify an O-rich SEI with small amounts of C and very little F. This suggests the formation of Li\({}_{2}\)O nano-particles at the kink.
Kink regions have the distinct feature of different surface curvatures on the inside and the outside of the kink. Following our line of argument presented above, this geometry effect should lead to different stripping rates in the kink region at low current densities. We extend the whisker model to three dimensions to study whether our predictions match the experimental observations. For this, we trace points on the lithium surface and calculate the surface curvature, using differential geometry, as the eigenvalues of the shape operator (sketched below). Fig. 5 (C) shows the amount of stripped lithium in the vicinity of a kink in the early stages of stripping for a low stripping current density. It can be seen that stripping occurs preferentially on the inside of the kink, where the kink surface was initially concave. The blue color indicates that a large amount of lithium is stripped away, while the yellow color indicates that almost no lithium is stripped. In the blue region on the inside of the kink, the lithium-SEI bond is broken first. With further stripping, the preferred dissolution at the kink leads to a pinch-off of the upper whisker part and isolated lithium. After the pinch-off and the breakdown of the SEI shell, the isolated lithium part is fragile, can rotate, and potentially mechanically disconnect from the root of the whisker. Our predictions agree with our cryo TEM observations and explain why kinks are prone to isolated lithium formation. This explains why mossy lithium is particularly problematic and why stripping behavior is better when the whiskers are straight and aligned [54].
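For concreteness, a minimal sketch of this curvature computation (a generic differential-geometry routine, not our simulation code; the cylinder test case and its inputs are illustrative):

```python
import numpy as np

def principal_curvatures(X_u, X_v, X_uu, X_uv, X_vv):
    """Principal curvatures at one surface point: eigenvalues of the shape
    operator S = I^-1 II, built from the first (I) and second (II)
    fundamental forms; inputs are partial derivatives of X(u, v)."""
    n = np.cross(X_u, X_v)
    n = n / np.linalg.norm(n)                  # unit surface normal
    I = np.array([[X_u @ X_u, X_u @ X_v],
                  [X_u @ X_v, X_v @ X_v]])
    II = np.array([[X_uu @ n, X_uv @ n],
                   [X_uv @ n, X_vv @ n]])
    return np.linalg.eigvals(np.linalg.solve(I, II)).real

# sanity check: cylinder of radius R, X = (R cos u, R sin u, v), at u = 0
R = 100e-9
k = principal_curvatures(np.array([0.0, R, 0.0]),    # X_u
                         np.array([0.0, 0.0, 1.0]),  # X_v
                         np.array([-R, 0.0, 0.0]),   # X_uu
                         np.zeros(3), np.zeros(3))   # X_uv, X_vv
print(k)   # one curvature of magnitude 1/R, the other 0 (sign set by normal)
```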
Note that our model considers a local chemical potential due to the lithium-SEI interaction through an adhesive bond. Additionally, the SEI can put pressure on lithium during whisker growth when lithium is plated underneath the
Figure 5: Stripping of kinked region. (A) Cryo TEM image of a whisker kink region after incomplete stripping at \(0.1\,\mathrm{Am}^{-2}\). (B) HAADF STEM image of the whisker kink region after incomplete stripping at \(0.1\,\mathrm{Am}^{-2}\) and the corresponding electron energy loss spectroscopy elemental mapping. Red color corresponds to carbon, green to oxygen, blue to fluorine, and yellow to lithium. (C) Snapshot of the simulation of whisker dissolution at \(J=0.01J_{0}\) with focus on the whisker kink region at \(t=3\,\mathrm{s}\). The color represents the amount of lithium stripped at the given time point compared to the original whisker surface.
SEI. Differences in the local stress distribution also cause differences in the local chemical potential and thus lead to different local stripping behavior. In kinks, the rotational symmetry of the geometry is broken. We thus anticipate that local stress fields can also play a role in the preferred stripping behavior of kinks. Investigating this effect would require extending our model with a model for the local stress distribution. This is possible because of the generality of our framework but outside the scope of this work.
### Notches
Li _et al._ found that notches can occur during stripping, as depicted in Fig. 6 A-C.[23] Notches are seemingly random spots in the whisker where a part of the whisker is completely pinched off, leaving isolated lithium disconnected from the current collector. Li _et al._ suspect that notches emerge at spots where the covering SEI has a higher ionic conductivity, possibly due to a locally different SEI composition. We adopt this idea and translate the locally enhanced ionic conductivity to a locally enhanced exchange current density with enhancement factor \(S\). The exchange current density strongly depends on the SEI composition and thickness.[37, 48] We explore what our model predicts by introducing a 90 nm long spot where the exchange current density is increased by \(S=2\). The simulation results are depicted in Fig. 6 D. The results look remarkably similar to the observation of Li _et al._ The enhanced exchange current density is equivalent to the enhanced ionic conductivity, but our model introduces an additional mechanism to the process. Due to the locally accelerated dissolution, the bond between lithium and SEI is broken faster. Thus, the reaction is even more enhanced, and notches can develop.
In order to understand the influence of SEI heterogeneity on the formation of notches, we conduct a parameter study to find the minimum factor \(S_{\rm min}\) for notches to develop. We find that \(S_{\rm min}\) is larger for higher stripping currents, see Fig. 7. We note that our model predicts trends, not quantitative values, as the exact value of \(S_{\rm min}\) depends on many factors, e.g., the whisker thickness or the surface area of the more ionically conductive SEI patch. The value range of \(S_{\rm min}\sim 2-5\) for notches to occur seems more realistic than an enhanced ionic conductivity factor of 1000, as reported in the studies of Li _et al._[30] We therefore conclude that the lithium-SEI interaction is important for the occurrence of notches.
We propose the following interpretation of our results: 1. Our model confirms the idea from Li _et al._ that a locally enhanced ionic conductivity of the SEI can lead to notches. 2. At smaller stripping current densities, notches develop more easily and are thus more likely to occur. As discussed above, local variations play a lesser role at higher stripping current densities.[49, 50, 51, 52] Thus, higher stripping current densities can mitigate the emergence of notches and thereby lead to less isolated lithium.
In our experiments, we do not observe notches, which we attribute to a more homogeneous SEI structure and composition.
## 4 Conclusion
We investigated the stripping behavior of lithium at low stripping current densities and the origins of its heterogeneity in a combined theoretical and experimental approach. On the theoretical side, we developed a model for lithium whisker dissolution. Lithium whiskers occur in the early stages of electroplating and lead to mossy lithium. Our model comprises the interaction between lithium and SEI which is crucial to describe the experimentally observed phenomena. We predicted the occurrence of isolated lithium emerging at geometrically distinct spots below the tip or at kinks at low stripping current densities. The dissolution dynamics can describe the experimental observation by Steiger _et al.[13]_ and the formation of a lithium droplet at the whisker tip. The model also reproduces the notches that can lead to isolated lithium, reported by Li _et al.[23]_ On the experimental side, we plated lithium on a
Cu TEM grid and investigated the emerging structures with cryo TEM, as this preserves the native state of the specimen [16, 23, 29, 30, 41, 55]. We observed that kink regions are very prone to isolated lithium formation. With EELS elemental mapping, we observed the SEI composition and found that the SEI composition changes where isolated lithium is formed. We observe that lithium stripping is non-uniform not only on a nanoscale of \(\sim 100\,\mathrm{nm}\) but also on a microscale of \(\sim 100\,\mathrm{\mu m}\). We thus conclude that defects play a critical role in the dissolution of lithium and that local effects, such as stress fields or local overpotential, can retard or facilitate lithium stripping. From our study, we conclude that in order to prevent non-uniform stripping, morphological inefficiencies like whiskers or locally different SEI should be mitigated. Otherwise, isolated lithium and low Coulombic efficiency are anticipated, especially at low discharge current densities, where the observed non-uniform stripping phenomena are more prominent than at higher stripping current densities.
## 3 Methods
### Model
We model the dissolution of a single lithium whisker without kinks, covered by an SEI layer of uniform thickness, during galvanostatic stripping as a reaction-limited process. Lithium whiskers are thought to be formed in a stress relaxation mechanism [14, 45, 56]. We assume that all stresses are relaxed by the start of stripping and that the dissolution can be described solely by electrochemical reactions. We assume that the diffusion of lithium in the electrolyte is sufficiently fast and that the concentration of Li\({}^{+}\) at the surface is constant, which is valid for stripping current densities below the onset of diffusion limitations (see SI Eq. A1[57]):
\[j\ll j_{\mathrm{diff}}\approx 300\,\mathrm{A}\,\mathrm{m}^{-2}. \tag{2}\]
Figure 6: Comparison of notches as observed in experiments and predicted by simulation. A-C: Notches as observed by Li _et al._[23] Places with locally different SEI nanostructure have higher ionic conductivity, and isolated lithium forms. Reproduced with permission by Elsevier [30]. D: Snapshot of a simulation of Li whisker with enhanced exchange current density \(S=1.5\) at a current density of \(J=0.1J_{0}\). A notch forms at the place with enhanced SEI properties.
Figure 7: Phase diagram of the stability of whiskers against notches as a function of the applied current density and the enhancement factor \(S\) of the locally more conductive SEI. The red dots represent the minimum enhancement factor \(S_{\mathrm{min}}\) for notches to develop before the lithium-SEI bond is broken at the rest of the whisker. The red area indicates where notches are anticipated and the green area where notches are mitigated.
As the initial geometry, we assume a cylinder-like shape with radius \(R=100\,\)nm and a spherical tip, as described in SI Eq. A2. This resembles the structures observed in experiments, see Fig. 8. This ansatz allows us to use cylindrical symmetry. We describe the surface of the whisker by
\[(\xi,\phi)\mapsto(r(\xi),z(\xi),\phi) \tag{3}\]
with the surface marker \(\xi\). Here, \((r_{0}(\xi),z_{0}(\xi))\) is the initial whisker surface and the position of the SEI, which we assume to be rigid, i.e., it does not change during dissolution. In reality, the SEI shell collapses due to a negative pressure beneath the SEI surface. This is not important for our simulation of the whisker dissolution, as this happens after the lithium-SEI bond is broken.
Unlike canonical phase-field models, where the solid/liquid phases are captured by a phase parameter and a diffuse interface,[58, 59, 60, 61, 62, 63, 64, 65] we assume a sharp interface and only track the surface of the whisker.[49, 50] The dynamics of the whisker dissolution can then be described by the propagation of the surface points \(\xi\), which move perpendicular to the surface:
\[\frac{\partial r(\xi)}{\partial t} =\frac{\dot{z}}{\sqrt{\dot{r}^{2}+\dot{z}^{2}}}\cdot\frac{V_{\rm M }}{F}\cdot J(\xi)\, \tag{4}\] \[\frac{\partial z(\xi)}{\partial t} =\frac{-\dot{r}}{\sqrt{\dot{r}^{2}+\dot{z}^{2}}}\cdot\frac{V_{\rm M }}{F}\cdot J(\xi). \tag{5}\]
Here, \(\dot{r}=\partial r/\partial\xi\), \(\dot{z}=\partial z/\partial\xi\), \(V_{\rm M}=13.02\times 10^{-6}\,\)m\({}^{3}\)/mol is the molar volume of lithium, \(F=96\,485\,\)C/mol is the Faraday constant and \(J(\xi)\) is the electrical current density, given by the Butler-Volmer expression
\[J(\xi)=J_{0}\left[e^{\frac{-F\Delta\Phi}{2RT}}-e^{\frac{\mu(\xi)}{RT}}e^{\frac{F\Delta\Phi}{2RT}}\right], \tag{6}\]
with the effective exchange current density \(J_{0}\), the ideal gas constant \(R=8.314\,\)J/(mol K), the room temperature \(T=298.15\,\)K, the potential step \(\Delta\Phi=\Phi-\Phi_{0}\) relative to the lithium metal, and the chemical potential \(\mu(\xi)\) of lithium at the whisker surface.[49, 50] Note that, with the simplified formula for Marcus-Hush-Chidsey kinetics by Bazant and co-workers,[66] equation 6 can be modified to better describe the behavior at high overpotentials,[38] see SI Eq. A5. The effective exchange current density depends on the electrolyte used and the thickness of the SEI. As this quantity is hard to measure, we assume \(J_{0}=100\,\)A/m\({}^{2}\), which is the reported order of magnitude for the exchange current density.[37] To avoid a dependence of our results on the value of \(J_{0}\), we later give the initial stripping current density relative to the effective exchange current density. The chemical potential \(\mu\) determines the non-equilibrium thermodynamics and follows from the Gibbs free energy
\[\begin{split} G&=\int g\mathrm{d}z=\int\sigma \mathrm{d}A\\ &=\int\sigma(d,\alpha)2\pi r\sqrt{1+r^{\prime 2}}\mathrm{d}z \end{split} \tag{7}\]
which is based on the interfacial tension \(\sigma(d,\alpha)\), with \(r^{\prime}=\mathrm{d}r/\mathrm{d}z\). From equation 7 we get an expression for the free energy density \(g\) which we can use to calculate the chemical potential
Figure 8: Comparison of the whisker geometry as assumed in the model and as seen in experiments. (A) Upper part of the whisker as described by our model assumptions. The body has a cylinder shape with a spherical tip that has a slightly bigger radius than the body. (B) Cryo TEM image of the upper part of a lithium whisker.[23] The tip is rounded, slightly spherical-like, and appears to be slightly bigger in radius than the whisker body. Reprinted with permission by The American Association for the Advancement of Science.
via a variational derivative
\[\begin{split}\mu&=\frac{\delta G[n]}{\delta n}=\frac{V_{ \mathrm{M}}}{2\pi r}\frac{\delta G[r]}{\delta r}\\ &=\frac{V_{\mathrm{M}}}{2\pi r}\left(\frac{\partial g}{\partial r} -\frac{\mathrm{d}}{\mathrm{d}z}\frac{\partial g}{\partial\frac{\partial r}{ \partial z}}\right)\end{split} \tag{8}\]
where \(n(z)=\pi r^{2}/V_{\mathrm{M}}\) is the number of lithium atoms per \(z\)-interval.
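To make the time stepping concrete, the following minimal Python sketch (our illustration, not the authors' code) combines the Butler-Volmer current of Eq. 6 with the marker propagation of Eqs. 4-5; the array `mu` would come from a discretization of Eq. 8, and in the galvanostatic setting \(\Delta\Phi\) would additionally be adjusted at each step so that the surface-integrated current matches the applied current.

```
import numpy as np

V_M, F = 13.02e-6, 96485.0   # molar volume of Li (m^3/mol), Faraday constant (C/mol)
R_GAS, T = 8.314, 298.15     # gas constant (J/(mol K)), temperature (K)

def butler_volmer(d_phi, mu, J0=100.0):
    """Local current density J(xi) of Eq. 6, in A/m^2."""
    return J0 * (np.exp(-F * d_phi / (2 * R_GAS * T))
                 - np.exp(mu / (R_GAS * T)) * np.exp(F * d_phi / (2 * R_GAS * T)))

def step_surface(r, z, J, dt):
    """One explicit Euler step of Eqs. 4-5 for the surface markers (r(xi), z(xi))."""
    dr, dz = np.gradient(r), np.gradient(z)   # derivatives with respect to xi
    norm = np.sqrt(dr**2 + dz**2)
    r_new = r + dt * (dz / norm) * (V_M / F) * J
    z_new = z + dt * (-dr / norm) * (V_M / F) * J
    return r_new, z_new
```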
In order to get an expression for the chemical potential \(\mu\), we model the Gibbs energy density \(g\). In our model, the change of \(g\) is due to the change of surface tension \(\sigma\). At the beginning of the experiment, the lithium surface is parallel to the rigid SEI surface, and the lithium is bonded to the SEI. In order to strip a lithium atom from underneath the SEI, the work of adhesion has to be done. At the end of the experiment, the lithium and the SEI are decoupled. We model this with the function \(\sigma_{\parallel}(d)\), where \(d\) is the distance between lithium and the SEI. The distance dependence mimics the behavior of typical molecular-binding potentials. In our continuum approach, we further need to account for the case that lithium can be orthogonal to the SEI, e.g., when a gap is formed in the lithium whisker. In this situation, there is no binding between lithium and the SEI. We model this with the function \(\sigma_{\perp}=\sigma_{\mathrm{Li}}\), where \(\sigma_{\mathrm{Li}}=0.5\,\mathrm{J}/\mathrm{m}^{2}\) is the surface free energy of lithium [67]. We, therefore, combine both parts with an angle-dependent function \(f(\alpha)\), where \(\alpha\) is the angle between the SEI and the normal of the whisker surface:
\[\sigma(d,\alpha)=\sigma_{\parallel}(d)f(\alpha)+\sigma_{\perp}(1-f(\alpha)), \tag{9}\]
with \(\sigma_{\parallel}(d)=\sigma_{\mathrm{Li}}+E_{\mathrm{A}}(d)\) including the surface free energy of lithium \(\sigma_{\mathrm{Li}}\) and the work of adhesion \(E_{\mathrm{A}}\) due to the binding to the SEI. By \(\sigma_{\parallel}(0)=-\sigma_{\mathrm{Li}}\), we assume that the bond strength from lithium to the SEI is of the same order of magnitude as the lithium-lithium cohesive bond. For our continuum approach, we smear out the bond breaking over the distance \(a=3\,\mathrm{nm}\) so that \(\sigma_{\parallel}(d\leq-a)=\sigma_{\mathrm{Li}}\). The function \(\sigma_{\parallel}(d)\) is depicted in Fig. 9. The angle dependence is chosen such that in the perpendicular case \(\sigma(d,\alpha=90^{\circ})=\sigma_{\perp}=\sigma_{\mathrm{Li}}\), i.e. \(f(\alpha=90^{\circ})=0\). For numerical stability, we choose \(f(\alpha\leq 45^{\circ})=\cos(2\alpha)\) and \(f(\alpha>45^{\circ})=0\). Further details are presented in the SI.
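The structure of Eq. 9 can be sketched as follows; note that the exact shape of \(\sigma_{\parallel}(d)\) between \(d=0\) and \(d=-a\) is defined in the SI, so the linear ramp below is only an assumed placeholder reproducing the stated boundary values \(\sigma_{\parallel}(0)=-\sigma_{\mathrm{Li}}\) and \(\sigma_{\parallel}(d\leq-a)=\sigma_{\mathrm{Li}}\).

```
import numpy as np

SIGMA_LI = 0.5   # surface free energy of lithium, J/m^2
A = 3e-9         # smearing length of the bond breaking, m

def sigma_parallel(d):
    # placeholder ramp: -SIGMA_LI at d = 0 (bonded), +SIGMA_LI for d <= -A (detached)
    s = np.clip(-d / A, 0.0, 1.0)
    return -SIGMA_LI + 2.0 * SIGMA_LI * s

def f_angle(alpha):
    # angle weighting: cos(2*alpha) for alpha <= 45 deg, else 0 (numerical choice)
    return np.where(alpha <= np.pi / 4, np.cos(2.0 * alpha), 0.0)

def sigma(d, alpha):
    f = f_angle(alpha)
    return sigma_parallel(d) * f + SIGMA_LI * (1.0 - f)   # Eq. 9
```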
With these definitions, we can evaluate Eq. 8 and calculate the chemical potential. The full derivation is presented in the SI, with the final result being Eq. 15. Initially, for \(d=0\), the adhesion to the SEI leads to a negative chemical potential of the lithium surface, as \(\sigma_{\parallel}\) is negative. This implies that concave surfaces have a higher chemical potential, see Eqs. 15 & 10. The bond breaking leads to a steadily decreasing chemical potential until \(d\approx-1\,\mathrm{nm}\); after this, the chemical potential rises again until the lithium is detached from the SEI. During the bond breaking there is a point at which \(\sigma_{\parallel}\) switches its sign and concave surfaces acquire a lower chemical potential. When the bond is broken, we recover from Eq. 8 the same expression for the chemical potential as is obtained by inserting the well-known Young-Laplace equation into the Gibbs-Duhem relation:
\[\mu=V_{\mathrm{M}}\sigma_{\mathrm{Li}}\left(\frac{1}{R_{1}}+\frac{1}{R_{2}} \right). \tag{10}\]
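As an illustrative order-of-magnitude check (our numbers): for the intact cylindrical whisker body, \(R_{1}=R=100\,\mathrm{nm}\) and \(R_{2}\to\infty\), Eq. 10 gives \(\mu=V_{\mathrm{M}}\sigma_{\mathrm{Li}}/R\approx 65\,\mathrm{J/mol}\), corresponding to a curvature-induced shift of only \(\mu/F\approx 0.7\,\mathrm{mV}\) in the exponent of Eq. 6, suggesting that curvature alone contributes weakly compared to the bond-breaking term and the applied overpotential.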
We only model the dynamics of the lithium, not of the SEI. If, however, an instability occurs leading to droplet formation, we assume that the SEI shell breaks, as the elastic
Figure 9: Change of effective interfacial energy \(\sigma_{\parallel}\) as a function of distance \(d\) to the SEI modelled by Eq. 9.
deformation can only account for a few percent of volume change.[68, 69] Then, electrolyte can flood underneath the SEI shell and break the lithium-SEI bond. We assume this process to occur instantaneously after droplet formation. We further assume that lithium is disconnected from the current collector at the point where the whisker thickness falls below the threshold \(r(\xi)<0.05R\).
When the SEI breaks, the lithium ions no longer have to diffuse through the complicated SEI, and thus the effective exchange current density becomes larger. This is suggested by experiments in which the exchange current density was measured. Shi _et al._ used a microelectrode setup and cyclic voltammetry with a large sweeping rate of \(200\,\mathrm{mV\,s^{-1}}\) and claimed to measure the exchange current density of lithium metal deposition without SEI.[37] In 1M LiTFSI 1,3-dioxolane/1,2-dimethoxyethane electrolyte they measured \(J_{0}=1230\,\mathrm{A\,m^{-2}}\). For the same electrolyte, Chen _et al._ measured \(J_{0}=4.1\,\mathrm{A\,m^{-2}}\) using lithium-lithium symmetric cells and a sweeping rate of \(0.5\,\mathrm{mV\,s^{-1}}\).[48] The SEI covering the lithium explains the discrepancy between these values. To the best of our knowledge, there is no systematic study on how the SEI thickness influences the effective exchange current density. Lithium whiskers are typically covered by a very thin SEI of about \(20\,\mathrm{nm}\).[16] Therefore, we assume a relatively large exchange current density of \(J_{0}=100\,\mathrm{A\,m^{-2}}\) while the lithium is adhered to the SEI and an exchange current density of \(J_{0}=1000\,\mathrm{A\,m^{-2}}\) after the SEI breaks.
## Experimental Setup
### Electrochemistry
We assembled a CR2032 coin cell with a Cu TEM grid on a Cu foil as the working electrode, lithium metal as the counter and reference electrode, and a polyethylene separator in an argon (Ar)-filled glovebox. The diameter of the lithium metal was \(1.56\,\mathrm{cm}\), and the diameter of the Cu foil was around \(1.8\,\mathrm{cm}\). The electrolyte was \(1.2\) M LiPF6 in ethylene carbonate (EC)/ethyl methyl carbonate (EMC) (3:7 by weight) with 5 wt % vinylene carbonate (VC). Lithium metal was deposited onto the working electrode by applying a current density of \(1\,\mathrm{A\,m^{-2}}\) for 100 minutes (using an Arbin BT-2000). After deposition, lithium was stripped by applying a current density of \(-0.1\,\mathrm{A\,m^{-2}}\) for 500 minutes.
### Transfer to the cryo-TEM
After cycling, the coin cell was disassembled in the Ar-filled glovebox. The TEM grid was taken off the Cu foil and slightly rinsed with EMC to remove trace electrolyte. After rinsing, the TEM grid was placed in a sealed bag filled with Ar. Immediately after taking the sealed bag out of the Ar-filled glovebox, it was plunged into a bath of liquid nitrogen until the lithium metal reached a very low temperature (around 100 K). Then, we quickly took the Cu TEM grid with the electrochemically deposited lithium out of the sealed bag and loaded it onto a precooled Gatan cryo-holder (Elsa, Gatan, USA) using a cryo-transfer station to ensure that the entire process occurred in a cryogenic environment. This preserves the specimen in its native state.
### Cryo-TEM characterization of the lithium deposits after cycling
A \(300\,\mathrm{kV}\) FEI Titan monochromated (scanning) transmission electron microscope ((S)TEM) equipped with a probe aberration corrector was used to acquire the TEM, selected area electron diffraction (SAED), energy dispersive spectroscopy (EDS), and EELS data. The samples were imaged at low temperature (\(100\,\mathrm{K}\)) under low-dose conditions (\(\sim 1\,\mathrm{e/\AA^{2}\,s}\) for low-magnification imaging, \(\sim 100\,\mathrm{e/\AA^{2}\,s}\) for high-resolution TEM imaging) to prevent beam-induced damage and artifacts. EDS elemental mapping was collected by scanning the same region multiple times at a dwell time of \(1-10\)\(\mu\)s (depending on the image size), and the dose rate was around \(0.363-1.98\)\(\mathrm{e/\AA^{2}\,s}\) depending on magnification. The binning and smoothing functions of the Aztec software
(Oxford Instruments) were used to enhance the contrast of the EDS data. Spectroscopy experiments were performed on a Gatan GIF-Quantum spectrometer. The EELS collection semi-angle during the spectroscopy experiments was \(\sim 45\,\mathrm{mrad}\). The EELS spectral dispersion was \(0.05\,\mathrm{eV/channel}\) with vertical binning at 130 in dual-EELS mode. The probe beam current was around \(25\,\mathrm{pA}\), and the pixel dwell time was \(0.001-0.5\) s. The electron dose applied during acquisition of the EELS spectra was \(0.8-40\)\(\mathrm{e/\AA}^{2}\). These electron dose rates are typically used in a cryogenic environment and do not introduce obvious damage or artifacts after acquiring images, diffraction patterns, EDS, and EELS spectra [16, 23, 29, 30, 41, 55].
**Acknowledgement** The work performed at Pacific Northwest National Laboratory (PNNL) was supported by the Assistant Secretary for Energy Efficiency and Renewable Energy, Office of Vehicle Technologies of the U.S. Department of Energy (DOE) under the Advanced Battery Materials Research (BMR) Program and the US-Germany Cooperation on Energy Storage with Contract Nos. DE-LC-000L072 (for C.W.) and DE-AC05-76RL01830 (for W.X.). The microscopic and spectroscopic characterizations were conducted in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), a national scientific user facility sponsored by DOE's Office of Biological and Environmental Research and located at PNNL. PNNL is operated by Battelle for the DOE under Contract DE-AC05-76RL01830. M.W., A.L., and B.H. gratefully acknowledge financial support within the Lillint project (03XP0225A). The support of the bwHPC initiative through the use of the JUSTUS HPC facility at Ulm University is acknowledged. B.H. and A.L. acknowledge Dominik Kramer for fruitful discussions.
## Author's contribution
M.W.: conceptualization, methodology, software, validation, formal analysis, data curation, writing - original draft, visualization; Y.X.: conceptualization, validation, investigation, data curation, writing - review & editing; H.J.: conceptualization, validation, investigation, data curation, writing - review & editing; C.W.: conceptualization, resources, writing - review & editing, supervision, project administration, funding acquisition; W.X.: conceptualization, resources, writing - review & editing, supervision, project administration, funding acquisition; A.L.: conceptualization, resources, writing - review & editing, supervision, project administration, funding acquisition; B.H.: conceptualization, methodology, resources, writing - review & editing, supervision, project administration, funding acquisition.
|
2303.12003 | Artificial muses: Generative Artificial Intelligence Chatbots Have Risen
to Human-Level Creativity | A widespread view is that Artificial Intelligence cannot be creative. We
tested this assumption by comparing human-generated ideas with those generated
by six Generative Artificial Intelligence (GAI) chatbots: $alpa.\!ai$,
$Copy.\!ai$, ChatGPT (versions 3 and 4), $Studio.\!ai$, and YouChat. Humans and
a specifically trained AI independently assessed the quality and quantity of
ideas. We found no qualitative difference between AI and human-generated
creativity, although there are differences in how ideas are generated.
Interestingly, 9.4 percent of humans were more creative than the most creative
GAI, GPT-4. Our findings suggest that GAIs are valuable assistants in the
creative process. Continued research and development of GAI in creative tasks
is crucial to fully understand this technology's potential benefits and
drawbacks in shaping the future of creativity. Finally, we discuss the question
of whether GAIs are capable of being truly creative. | Jennifer Haase, Paul H. P. Hanel | 2023-03-21T16:35:01Z | http://arxiv.org/abs/2303.12003v1 | #### Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity
###### Abstract
A widespread view is that Artificial Intelligence cannot be creative. We tested this assumption by comparing human-generated ideas with those generated by six Generative Artificial Intelligence (GAI) chatbots: alpa.ai, Copy.ai, ChatGPT (versions 3 and 4), Studio.ai, and YouChat. Humans and a specifically trained AI independently assessed the quality and quantity of ideas. We found no qualitative difference between AI and human-generated creativity, although there are differences in how ideas are generated. Interestingly, 9.4% of humans were more creative than the most creative GAI, GPT-4. Our findings suggest that GAIs are valuable assistants in the creative process. Continued research and development of GAI in creative tasks is crucial to fully understand this technology's potential benefits and drawbacks in shaping the future of creativity. Finally, we discuss the question of whether GAIs are capable of being "truly" creative.
Creativity, originality, AI, Generative Artificial Intelligence
## 1 Main
Artificial Intelligence has proven to be better than humans in many areas, such as chess or Go\({}^{1}\). Some people believe creativity is one of the 'last resorts' in which humans remain better than AI\({}^{1,2}\). However, developers of recent generative artificial intelligence (GAI) systems have argued that their software is also creative. We put this claim to the test by examining whether humans are (still) more creative than six GAIs, letting both humans and AI be the judge.
Footnote 1: _Acknowledgements._ We thank Saba Abdul Wahid Mahmood, Diane Adebayo, Carnila I. Bottger Garcia-Godos, Francelene James, Henrik Kirchmann, and Margaret L. Ludwig for help with rating the responses to the creativity test. |
2305.11684 | Self-Reinforcement Attention Mechanism For Tabular Learning | Apart from the high accuracy of machine learning models, what interests many
researchers in real-life problems (e.g., fraud detection, credit scoring) is to
find hidden patterns in data; particularly when dealing with their challenging
imbalanced characteristics. Interpretability is also a key requirement that
needs to accompany the used machine learning model. In this concern, often,
intrinsically interpretable models are preferred to complex ones, which are in
most cases black-box models. Also, linear models are used in some high-risk
fields to handle tabular data, even if performance must be sacrificed. In this
paper, we introduce Self-Reinforcement Attention (SRA), a novel attention
mechanism that provides a relevance of features as a weight vector which is
used to learn an intelligible representation. This weight is then used to
reinforce or reduce some components of the raw input through element-wise
vector multiplication. Our results on synthetic and real-world imbalanced data
show that our proposed SRA block is effective in end-to-end combination with
baseline models. | Kodjo Mawuena Amekoe, Mohamed Djallel Dilmi, Hanene Azzag, Mustapha Lebbah, Zaineb Chelly Dagdia, Gregoire Jaffre | 2023-05-19T14:06:36Z | http://arxiv.org/abs/2305.11684v1 | # Self-Reinforcement Attention Mechanism For Tabular Learning
###### Abstract
Apart from the high accuracy of machine learning models, what interests many researchers in real-life problems (e.g., fraud detection, credit scoring) is to find hidden patterns in data; particularly when dealing with their challenging imbalanced characteristics. Interpretability is also a key requirement that needs to accompany the used machine learning model. In this concern, often, intrinsically interpretable models are preferred to complex ones, which are in most cases black-box models. Also, linear models are used in some high-risk fields to handle tabular data, even if performance must be sacrificed. In this paper, we introduce Self-Reinforcement Attention (SRA), a novel attention mechanism that provides a relevance of features as a weight vector which is used to learn an intelligible representation. This weight is then used to reinforce or reduce some components of the raw input through element-wise vector multiplication. Our results on synthetic and real-world imbalanced data show that our proposed SRA block is effective in end-to-end combination with baseline models.
Keywords:Attention Interpretability classification.
## 1 Introduction
While deep learning models continue to provide impressive performance for computer vision and Natural Language Processing (NLP), their practical use in some real-life problems (e.g., credit scoring) remains limited due to legislation, a key requirement of which is the interpretability of the models used (e.g., GDPR, Article 22, in Europe). Since the promising results of the transformer architecture on machine translation tasks [23], many efforts have been made to improve model accuracy for tabular modeling using the attention mechanism [20, 14, 10, 9]. The principal motivation behind our work is to push this effort forward by proposing an interpretable attention model for tabular learning. To achieve this goal, we found it necessary to develop a representation learning block or layer that (i) preserves the initial feature space (i.e., \(\mathds{R}^{p}\longrightarrow\mathds{R}^{p}\)), and (ii) avoids as much as possible the extra steps (e.g., residual connections, LayerNorm) that make the overall architecture less interpretable.
In this paper, we present a new attention-based representation learning block for tabular
data, called Self-Reinforcement Attention (SRA). We summarize our contributions as follows:
* SRA is a pre-processing block that allows learning an intelligible representation by weighting each raw feature with a positive alignment (i.e., an attention score). The obtained representation, called the "reinforced vector", is then passed to an aggregation model that provides the decision boundary. It builds on previous work on epigenetics algorithms [8].
* SRA provides a feature score that facilitates interpretation. This score is then used as a coefficient to amplify or reduce certain components according to the context of the observation (here, the spatial information), allowing us to: (i) take possible interactions into account without the need to artificially add features or interaction terms; (ii) identify important features for the final prediction.
* Our experiments on synthetic and benchmark imbalanced datasets show a promising performance while preserving intrinsic interpretability of the resulting architecture.
The rest of the paper is organized as follows: Section 2 presents a brief discussion of state-of-the-art works. Section 3 describes the SRA block and its architecture. The experimental setup, the discussion of the obtained results and the limitations are presented in Section 4. Finally, Section 5 concludes the paper and highlights some perspectives.
## 2 Related work
**Classical models.** For many tabular or heterogeneous data classification problems, tree-based ensemble models are still widely used and preferred, as they achieve high performance without being greedy in terms of computational resources. Random Forest [4] is one of the best-known models; it combines the predictions of several independent trees through majority voting or averaging. Tree-based models are also used in a sequential manner (Gradient Boosting Machine, or GBM), where each new tree is built to reduce the errors of the previous ones. XGBoost [6] and LightGBM [13] are two fast implementations of GBM that strengthen regularization in the boosting mechanism and are considered the state of the art for many problems and competitions. As for linear models, these are commonly used in settings where model interpretability is strongly required. For these models, linear relations between features and target are expected; otherwise, continuous variables are discretized to account for non-linearity, or additional interaction terms are added as features.
**Deep learning models and feature attribution.** Many deep learning models were designed to provide local explanations for their predictions. Among these, we mention Neural Additive Models (NAM) [1], a neural implementation of classic Generalized Additive Models (GAMs). In addition to the marginal feature contribution, NAM provides shape functions whose visualization can help understand the global contribution of a given feature. NODE-G\({}^{2}\)AM [5] is an improvement of NAM, built on top of the NODE architecture [18], that takes pairwise feature interactions into account. Among the drawbacks of these deep learning architectures is that
they apply one sub-network per feature (or pair of features), which can be very resource-consuming, especially for high-dimensional problems, and the handling of higher-order interactions is not guaranteed. In TabNet [3], a multiplicative sparse mask vector is used for instance-wise feature selection, and the masks are aggregated across many stages to compute feature importances. A matrix multiplication trick is used to reconstruct the initial feature dimension, which is lost in the passage through the feature transformer module. Contrary to TabNet, our proposed SRA model preserves the initial feature dimension and uses the dot product to compute its attention weights.
**Attention-based models for tabular data classification.** Compared to classical models, deep learning models have several advantages in various settings, mainly (i) in continuous learning, especially in the presence of concept drift (e.g., using transfer learning or fine-tuning), (ii) in multimodal problems (e.g., encoding tabular information together with text, images, etc.), and (iii) in multitask settings [1]. These reasons have motivated many researchers to improve the traditional fully connected MultiLayer Perceptron (MLP) by imitating tree models [18] or by using the attention mechanism [20, 14, 10, 9]. One common feature of these attention-based architectures is that each feature is treated as a token and embedded as in [23]. This choice, although favoring the expressiveness of the resulting architecture, leads to a significant increase in the number of learnable parameters and complicates the explanation of local predictions, especially when the number of transformer layers or stages exceeds 1. The SRA-based models proposed in our work use fewer parameters than [20, 14, 10, 9]. They are also accurate in comparison to state-of-the-art models while being self-explainable, in the sense that they provide an intrinsic explanation of their predictions.
## 3 Self-Reinforcement Attention (SRA)
### SRA Model
The challenge in most supervised tabular learning problems using attention mechanism [20, 14, 10, 9] is to estimate the output \(\hat{y}=f_{\theta}(\mathbf{x})\) given the feature vector \(\mathbf{x}=(x_{1},...,x_{p})\in\mathbb{R}^{p}\). The parametric model \(f_{\theta}\) is learned using the training data \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) with \(y_{i}\in\{0,1\}\) for binary classification or \(y_{i}\in\mathbb{R}\) for regression tasks. Our proposed SRA model \(f_{\theta}\) (Fig 1b) contains a SRA block (Fig 1a) which is a novel attention mechanism layer denoted as a function \(a(.)\). Given the raw input \(\mathbf{x}\), the SRA block produces an attention vector \(\mathbf{a}=(a_{1},...,a_{i},...a_{p})\). Thereafter the attention vector is used to learn an intelligible representation \(\mathbf{o}=(o_{1},...,o_{i},...,o_{p})\) as follows:
\[\mathbf{o}=\mathbf{a}\odot\mathbf{x} \tag{1}\]
where \(\odot\) is the element-wise multiplication.
If we instantiate our supervised aggregation model (Figure 1b) as a linear transformation, then the SRA model can be formalized as follows:
\[\begin{split} g(\hat{y})&=\boldsymbol{\beta}\cdot \mathbf{o}\\ &=\beta_{1}o_{1}+...+\beta_{i}o_{i}+...+\beta_{p}o_{p}\\ &=\beta_{1}a_{1}x_{1}+...+\beta_{i}a_{i}x_{i}+...+\beta_{p}a_{p} x_{p}\end{split} \tag{2}\]
\(\beta_{i}a_{i}x_{i}\) represents the contribution (the prediction importance) of the feature \(x_{i}\) to the output, \(\mathbf{\beta}=(\beta_{1},\beta_{2},...,\beta_{p})\) are the linear regression coefficients, and \(a_{i}\) is interpreted as the amplification (or correction) that the feature \(x_{i}\) receives from other features or itself due to interactions. We call this instantiation of the SRA model SRALinear. \(g\) denotes the link function (e.g., typically \(g(\mu)=\log(\frac{\mu}{1-\mu})\) for binary classification and \(g=Identity\) for regression tasks).
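As a small illustration (ours, not from the paper), the prediction importances of Eq. 2 can be computed directly from the fitted coefficients and attention weights:

```
import numpy as np

def prediction_importance(beta, a, x):
    """Per-feature contributions beta_i * a_i * x_i of Eq. 2 (arrays of length p)."""
    contrib = beta * a * x
    order = np.argsort(-np.abs(contrib))   # features ranked by |contribution|
    return contrib, order
```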
### SRA block
Given the input vector \(\mathbf{x}=(x_{1},...x_{i},...,x_{p})\in\mathbb{R}^{p}\), the SRA block encodes it into \(p\) keys in \(K=[\mathbf{k}_{1},\mathbf{k}_{2},...,\mathbf{k}_{i},...,\mathbf{k}_{p}]^{T}\) with \(\mathbf{k}_{i}=(k_{i}^{1},...,k_{i}^{d_{k}})\in\mathbb{R}^{d_{k}}\) using the key encoder, and queries in \(Q=[\mathbf{q}_{1},\mathbf{q}_{2},...,\mathbf{q}_{i},...,\mathbf{q}_{p}]^{T}\) with \(\mathbf{q}_{i}=(q_{i}^{1},...,q_{i}^{d_{k}})\in\mathbb{R}^{d_{k}}\) using the query encoder (see Figure 1a and the pseudocode provided in Algorithm 1). The matrices of queries (\(Q\)) and keys (\(K\)) are generated by two separate fully connected feed-forward networks (\(FFN\)s), namely \(QueryEncoder\) and \(KeyEncoder\).
The \(KeyEncoder\) (resp. \(QueryEncoder\)) directly produces \(p\) keys (resp. queries) using a single \(FFN\), instead of \(p\) independent \(FFN\)s per feature as in [20, 10]. This embedding should be particularly useful for heterogeneous (tabular) data, especially in the presence of strong feature interactions, and at the same time alleviates the need for several attention blocks (layers) or extra processing steps that could affect the interpretability of the attention coefficients. Furthermore, with a Sigmoid activation function, all elements \(k_{i}^{j}\) of \(K\) (resp. \(q_{i}^{j}\) of \(Q\)) are scalars bounded in \([0,1]\).
The keys in \(K\) are compared to the queries in \(Q\) component by component, allowing us to quantify the alignment of different transformations of the same input when calculating the
Fig. 1: SRA architecture.
attention weights \(\mathbf{a}=(a_{1},...,a_{i},...,a_{p})\) as follows:
\[a_{i}=\frac{\mathbf{q}_{i}\cdot\mathbf{k}_{i}}{d_{k}}\quad\text{for}\quad i\in 1, \cdots,p \tag{3}\]
We further use the scaling by \(d_{k}\) in order to reduce the magnitude of the dot-product and to get dimension-free attention coefficients \(a_{i}\in[0,1]\).
We propose this attention estimation to produce a concise explanation of the decision process. Indeed, considering the potential internal conflict between the input components (due to the interactions), the attention weights vector \(a\) may enhance or reduce some components (of the input vector) at strategic and specific positions.
```
# b is the batch size, p the number of features
def forward(self, x):
    Q = self.QueryEncoder(x)   # Q is (b, p, d_k)
    K = self.KeyEncoder(x)     # K is (b, p, d_k)
    QK = Q * K * self.scale    # scale = 1/d_k, QK is (b, p, d_k)
    a = QK.sum(axis=-1)        # a is (b, p)
    return a
```
**Algorithm 1**PyTorch-style forward pass pseudocode of the SRA Block
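For completeness, the following is a self-contained PyTorch sketch of the full SRALinear model. This is our reconstruction from Sections 3 and 4.1 (encoder widths \(d_{1}=p\times(d_{k}/4)\) and \(d_{2}=p\times(d_{k}/2)\), Sigmoid-bounded keys and queries), not the authors' released code; the ReLU hidden activations and \(d_{k}=8\) are illustrative choices.

```
import torch
import torch.nn as nn

class SRABlock(nn.Module):
    def __init__(self, p, d_k=8):
        super().__init__()
        d1, d2 = p * (d_k // 4), p * (d_k // 2)          # widths from Section 4.1
        def make_encoder():
            return nn.Sequential(
                nn.Linear(p, d1), nn.ReLU(),
                nn.Linear(d1, d2), nn.ReLU(),
                nn.Linear(d2, p * d_k), nn.Sigmoid(),    # entries bounded in [0, 1]
            )
        self.QueryEncoder, self.KeyEncoder = make_encoder(), make_encoder()
        self.p, self.d_k = p, d_k

    def forward(self, x):                                # x: (b, p)
        b = x.shape[0]
        Q = self.QueryEncoder(x).view(b, self.p, self.d_k)
        K = self.KeyEncoder(x).view(b, self.p, self.d_k)
        return (Q * K).sum(dim=-1) / self.d_k            # a: (b, p), a_i in [0, 1]

class SRALinear(nn.Module):
    def __init__(self, p, d_k=8):
        super().__init__()
        self.sra = SRABlock(p, d_k)
        self.linear = nn.Linear(p, 1)                    # aggregation model (Eq. 2)

    def forward(self, x):
        a = self.sra(x)                                  # attention weights (Eq. 3)
        o = a * x                                        # reinforced vector (Eq. 1)
        return self.linear(o).squeeze(-1)                # logit for binary tasks
```

With this instantiation, `torch.sigmoid(model(x))` yields class probabilities, and the contribution of feature \(i\) to a prediction can be read off as \(\beta_{i}a_{i}x_{i}\), as in Eq. 2.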
## 4 Experiments
### Experimental setup
Our motivation when building the SRA is to combine interpretability and performance in a single model with a focus on imbalanced classification tasks. Typically, the interpretability of models is assessed separately from their performance, which can make it challenging to gauge the effectiveness of our SRA solution. Nonetheless, we believe it is appropriate to measure the value of our SRA solution by comparing it to both comprehensible and fully complex benchmarks using the following criteria:
* **Intelligibility**: Are the representations learned by SRA understandable? Are the explanations provided by SRA models faithful?
* **Effectiveness**: Are SRA-based models accurate compared to state-of-the-art models?
**Datasets.** As we focus particularly on finance as an application domain, we considered three UCI datasets (Default of Credit Card Clients, Bank Marketing, and Adult Income) and four Kaggle datasets (Credit Card Fraud, Bank Churn Modelling, Blastchar, and Telco Churn) and the Heloc Fico dataset for our experiments. All of these datasets are used for binary classification tasks, and the number of features (both numerical and categorical) ranges from 10 to 63. The percentage of positive class instances varies between 0.17% and 48% (see Table 1 for further details). Unless otherwise specified, all categorical inputs are one-hot encoded, and numerical inputs are scaled using the mean and standard deviation to accelerate the convergence of the algorithms.
#### 4.1.2 Model setup.
* Choice of the query and key encoders: we use the same architecture for both, a fully connected neural network with two hidden layers of dimensions \(\{d_{1},d_{2}\}\), where \(d_{1}=p\times(d_{k}/4)\), \(d_{2}=p\times(d_{k}/2)\), and \(d_{k}\geq 4\).
* Regularization: to increase the generalization power, we use regularization in the SRA block. Specifically, we use dropout [21] in both the key and query encoders during training. We also use weight decay (\(L_{2}\) penalization) to encourage smoothness in the key and query embeddings.
#### 4.1.3 Evaluation measures.
We evaluate the models using stratified 5-fold cross-validation (80% for training) and report the mean and standard deviation of the Area Under the ROC Curve (AUROC) on the validation set. For highly imbalanced datasets (e.g., the Credit Card Fraud dataset), we instead optimize and report the Average Precision, i.e., the area under the Precision-Recall curve (AUCPR). In fact, AUCPR gives a more informative picture of an algorithm's performance than AUROC in highly imbalanced data settings, and algorithms that optimize AUCPR are guaranteed to optimize AUROC [7].
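As an illustration of this protocol, a minimal sketch (assuming scikit-learn-style estimators; `model_fn` is a placeholder factory, not from the paper):

```
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import average_precision_score, roc_auc_score

def cross_validate(model_fn, X, y, use_aucpr=False, seed=0):
    """Stratified 5-fold CV; AUCPR for highly imbalanced data, AUROC otherwise."""
    scores = []
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    for train_idx, valid_idx in cv.split(X, y):
        model = model_fn().fit(X[train_idx], y[train_idx])
        proba = model.predict_proba(X[valid_idx])[:, 1]
        metric = average_precision_score if use_aucpr else roc_auc_score
        scores.append(metric(y[valid_idx], proba))
    return float(np.mean(scores)), float(np.std(scores))
```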
### Intelligibility of SRA
One interesting property of an SRA-based model is that it provides interpretable information about its behavior. In this section, we explore some of these interpretable aspects through visualizations and the ability to identify relevant features. We focus in this section on its combination with the linear model. The SRALinear model (Equation 2) has two interesting properties:
1. Each feature \(x_{i}\) appears in the equation as in a classical linear model and \(\beta_{i}a_{i}x_{i}\) is its contribution to the output.
2. Faithfulness: the attention coefficients are clearly correlated with the model's outputs. This is a desirable property for considering attention as an explanation of predictions [12].
#### 4.2.1 How the raw data is reinforced using the SRA block.
To illustrate how the raw data is reinforced in practice, we use 2D toy datasets with the objective of facilitating the visualization. We consider first the following function:
\[F_{1}(x)=5x_{1}-5x_{2}\mathbbm{1}_{x_{1}>0}\quad\text{and}\quad y=\mathbbm{1}_{p>0.5}\quad\text{with}\quad p=\frac{1}{1+e^{-F_{1}(x)}} \tag{4}\]
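A minimal sketch of generating this toy dataset (the 2D input sampling range is our assumption for visualization; note that \(y=\mathbbm{1}_{p>0.5}\) is equivalent to \(y=\mathbbm{1}_{F_{1}(x)>0}\)):

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(5000, 2))            # assumed 2D input range
F1 = 5 * x[:, 0] - 5 * x[:, 1] * (x[:, 0] > 0)    # Eq. 4
p = 1.0 / (1.0 + np.exp(-F1))
y = (p > 0.5).astype(int)                         # equals (F1 > 0)
```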
Table 1: Benchmark datasets

| Dataset | # Datapoints | # Features | # Categorical features | Positive class (%) |
|---|---|---|---|---|
| Bank Churn | 10000 | 10 | 2 | 20.37 |
| Credit Default | 30000 | 23 | 3 | 22.16 |
| Bank Marketing | 45211 | 16 | 9 | 11.70 |
| Adult Income | 30162 | 14 | 8 | 24.89 |
| Credit Card Fraud | 284807 | 29 | 0 | 0.17 |
| Blastchar | 7043 | 19 | 16 | 26.54 |
| Telco Churn | 66469 | 63 | 0 | 20.92 |
| Heloc Fico | 10459 | 23 | 0 | 47.81 |
As simple as it may seem, this function cannot be directly modeled with a linear model due to the term \(\mathbbm{1}_{x_{1}>0}\), which forces \(x_{2}\) to have no effect on the output when \(x_{1}<0\). Using the reinforced version of the raw inputs helps to alleviate this problem, as shown in Fig. 2. Fig. 2a shows the original data distribution, with the yellow color indicating the class of interest. In Fig. 2b, we show the representation learned by multiplying the raw inputs with the SRA coefficients. The green color represents a possible decision boundary separating the two classes. Through the multiplication, values of \(x_{2}\) are significantly reduced (e.g., to 0) where needed (i.e., \(o_{2}\sim 0\) when \(x_{1}<0\), \(x_{2}<0\)), which makes the classes easy to separate with the downstream linear model.
We included another synthetic dataset, the 2D chainLink [22], as depicted in Fig. 3. By applying the SRA coefficients to this dataset, we obtain a new data representation that enables easy separation of the classes, as shown in Fig. 3b. Even without knowledge of the true data-generating process, it is apparent that all the purple observations have been moved strategically so that a simple rule, \(o_{2}>0\), can effectively isolate nearly all the yellow observations of interest. For a more detailed depiction of the reinforced vectors, please refer to the supplementary materials provided in Section A.1.
**Relevant feature discovery.** In practice, users often need to know which variables contributed to a high output score. For classical state-of-the-art models like XGBoost, post-hoc explanation tools such as TreeSHAP and LIME are often used to provide individual prediction explanations. However, these tools can introduce their own biases [15, 2]. In this investigation, we aim to assess SRALinear's ability to identify crucial features in comparison to that of Logistic Regression, a self-explanatory model, and XGBoost coupled with TreeSHAP [17]. As TreeSHAP calculates the exact Shapley value and is computationally efficient, it is particularly well-suited for tree-based models. For this purpose, we generate two synthetic datasets with 5 features \(\mathbf{x}=(x_{1},x_{2},x_{3},x_{4},x_{5})\), of size 30000 and 60000 respectively, with features drawn from a standard Gaussian distribution (zero mean, unit variance), as follows:
\[y=(5x_{1}-5x_{2})\mathbbm{1}_{x_{5}\leq 0}+(5x_{3}-5x_{4})\mathbbm{1}_{x_{5}>0} \tag{5}\]
\[y=1\quad\text{if}\quad(x_{1}+2.5)^{2}+x_{2}^{2}<1\quad\text{or}\quad(x_{1}-2.5)^{2}+(x_{2}-1.5)^{2}<1,\quad\text{and}\quad 0\quad\text{otherwise} \tag{6}\]
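For reference, both datasets can be generated with a few lines (a sketch following the text: five \(\mathcal{N}(0,1)\) features, 30000 and 60000 samples):

```
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1 (Eq. 5): regression target whose active features switch with x5
X1 = rng.standard_normal((30000, 5))
y1 = np.where(X1[:, 4] <= 0,
              5 * X1[:, 0] - 5 * X1[:, 1],
              5 * X1[:, 2] - 5 * X1[:, 3])

# Synthetic 2 (Eq. 6): binary label from two disjoint discs in (x1, x2)
X2 = rng.standard_normal((60000, 5))
y2 = (((X2[:, 0] + 2.5) ** 2 + X2[:, 1] ** 2 < 1)
      | ((X2[:, 0] - 2.5) ** 2 + (X2[:, 1] - 1.5) ** 2 < 1)).astype(int)
```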
The example called _Synthetic 1_ (Equation 5) is borrowed from [2]. It is interesting for this work because it highlights interactions between features. The goal is to design a model that can achieve perfect accuracy by using only the features \(x_{1}\) and \(x_{2}\) or, alternatively, depending on the sign of \(x_{5}\), only the features \(x_{3}\) and \(x_{4}\). To evaluate the model's performance, we compute the True Positive Rate (TPR) using a test set consisting of 20% of the data points, with the remaining 80% used for training. We restrict our analysis to those data points with \(x_{5}\leq 0\), which comprise 3750 instances. Specifically, we assess the ability of SRALinear to identify the two most important features among \((x_{1},x_{2},x_{3},x_{4})\). The example that we call _Synthetic 2_ (Equation 6) is, in contrast, friendly to attribution tools, as its features are independent (although there is a non-linearity). Only \(x_{1}\) and \(x_{2}\) are relevant to predict class 1. We consider all data points from class 1 in the test set (695 data points) and try to find the two most important features among \((x_{1},x_{2},x_{3},x_{4},x_{5})\).
As shown in Table 2, SRALinear is able to accurately detect the most relevant features with a high True Positive Rate (TPR) of over 99%. As expected, TreeSHAP (combined with XGBoost) is able to accurately detect the two most relevant features for _Synthetic 2_, but struggles with the Synthetic 1 dataset, achieving a TPR of approximately 75%. Knowing that XGBoost has a perfect performance on this dataset (\(R^{2}>99\%\)), we argue that the incorrect attributions are due to the interpretability tool,
Table 2: Relevant-feature discovery capacity. The True Positive Rate (TPR, %) is used as the metric. \(R^{2}\) (the higher the better) evaluates the test performance on _Synthetic 1_, and AUCROC (%) is used for the _Synthetic 2_ dataset.

| Dataset | Model | TPR | Test performance |
|---|---|---|---|
| _Synthetic 1_ | Linear Regression | 51.28 | 50.00 |
| | XGBoost+TreeSHAP | 75.47 | 99.21 |
| | SRALinear | 99.77 | 99.67 |
| _Synthetic 2_ | Logistic Regression (LR) | 66.83 | 73.77 |
| | XGBoost+TreeSHAP | 98.63 | 99.99 |
| | SRALinear | 99.86 | 99.72 |
which fails to identify the features that are important in XGBoost's decisions. For brevity, we encourage interested readers to refer to [2, 15, 11] for more details on attribution methods and variable interactions. Regarding the linear models (Linear Regression and Logistic Regression), although highly explainable, they detect important variables only with moderate accuracy. This is due to the bias incurred when using linear models to handle non-linear data (\(R^{2}=50\) %, AUCROC = 74%). From these two synthetic examples, we see that there are two possible biases when using feature attribution: (i) the first is due to underfitting (e.g., using linear models to fit complex data); (ii) the second is due to the post-hoc interpretability tools used to explain full-complexity models. In the context mentioned above, the SRALinear model appears to be a good compromise between faithful feature attribution and accuracy.
**Limitations of SRA-based explanations.** The SRA model, as proposed, should not be used directly as a global feature selector, but rather after identifying all relevant variables. This is because the feature importance measure provided by SRALinear is 'the local prediction importance' and not 'the local feature importance' (cf. Equation 2). Although these two terms are usually used interchangeably in the literature on feature attribution methods, there are some nuances [16]. Specifically, a feature that is important to a prediction is automatically relevant, but the inverse is not always true, especially when there are interactions. Regarding the SRALinear model, an illustrative example is the _Synthetic 1_ dataset (Equation 5). For this dataset, a perfect SRALinear model will always give zero prediction importance to the feature \(x_{5}\), as it cannot be used as a main effect (although it can be used in the attention vector to reduce the contribution of other features). Thus, based solely on the prediction importance, one may be tempted to delete the feature \(x_{5}\) to create a model with fewer variables. However, with further analysis (e.g., visualizing or computing the gradient of \(\beta_{i}a_{i}x_{i}\) vs. \(x_{5}\)), we notice that \(x_{5}\) must be kept. As shown in Fig. 4, one important piece of information about \(x_{5}\) needs to be known: its sign. When \(x_{5}<0\) (resp. \(x_{5}>0\)), the prediction contribution (or importance) of \(x_{3}\) is close to 0 (resp. the prediction contribution of \(x_{1}\) is close to 0). A similar visualization would lead to the same finding for the contributions of \(x_{2}\) and \(x_{4}\), indicating that \(x_{5}\) is indeed relevant to the model. Dropping it would result in a drastic reduction in SRALinear's performance, as the model would then behave like simple linear regression.
Figure 4: _synthetic 1_: relevance analysis of the feature \(x_{5}\)
### The effectiveness of the SRA block.
In this section, we discuss the effectiveness of the SRA block by comparing the accuracy achieved by the SRALinear model (Equation 2) on benchmark datasets relative to baseline models (interpretable and non-interpretable counterparts).
#### 4.3.1 Baseline models.
We compare quantitatively the performance of the proposed SRA models (Equation 2) with the following baselines:
* Logistic Regression (LR): It is a highly interpretable model obtained by simple linear combination of features followed by Sigmoid activation for binary classification problems.
* MultiLayer Perceptron (MLP): a full-complexity architecture that can model non-linear effects and interactions, making it not directly interpretable by humans. We consider an MLP with two hidden layers of dimensions \(\{4\times p,2\times p\}\), as in [10], where \(p\) is the input feature dimension.
* TabNet [3]: it is a deep learning model that provides local explanation of its predictions without imposing a limit on the order of interactions between the variables in contrast to [1, 5].
* XGBoost [6]: Despite the need for feature attribution tools such as TreeSHAP [17] and LIME [19] to explain its local predictions, XGBoost remains a favorite and leading state-of-the-art model for several real-life use cases and tabular learning competitions. It is selected for comparison with the intention of measuring the performance that may be lost by preferring an intrinsically interpretable model. It is also to be noted that we do not compare directly to some attention-based models, such as [20, 14, 10, 9] as they are more motivated by performance than interpretability and XGBoost can give an idea of the upper bound that these models can reach in most cases.
Table 3: Accuracy of the SRALinear model. Mean and standard deviation of AUC (%), reported from stratified 5-fold cross-validation. Bold highlights the best performance among self-explainable models (LR, TabNet, SRALinear); italic marks the overall best-performing model.

| Dataset | LR | TabNet | SRALinear | MLP | XGBoost |
|---|---|---|---|---|---|
| BankChurn | 76.93 (1.56) | **86.99** (0.79) | 86.98 (0.46) | _87.08_ (0.73) | 86.82 (0.79) |
| CreditDefault | 72.53 (0.49) | **77.85** (1.03) | 77.55 (0.56) | 78.24 (0.78) | _78.56_ (0.69) |
| BankMarketing | 90.79 (0.49) | 92.74 (0.70) | **93.33** (0.50) | 93.44 (0.41) | _93.82_ (0.38) |
| AdultIncome | 90.50 (0.41) | 90.46 (0.52) | **91.07** (0.42) | 91.45 (0.38) | _92.63_ (0.37) |
| CreditCardFraud | 77.08 (2.59) | 81.09 (3.92) | **86.58** (2.81) | 85.69 (2.53) | 86.54 (2.19) |
| Blastchar | 84.54 (1.48) | 83.53 (1.45) | **84.63** (1.51) | 84.63 (1.52) | _84.89_ (1.21) |
| TelcoChurn | 88.95 (0.29) | 90.45 (0.33) | **90.52** (0.31) | 90.54 (0.28) | _91.13_ (0.37) |
| HelocFico | 78.26 (0.52) | 79.39 (0.57) | **79.43** (0.41) | 79.50 (0.46) | _79.75_ (0.74) |
**Evaluation of the accuracy.** As shown in Table 3, SRALinear achieved the best performance among self-explainable models (TabNet, LR) in 6/8 cases. Furthermore, its performance is often close (for 6/8 benchmark datasets) to that of the overall best-performing model, XGBoost. These results confirm the effectiveness of the SRA block, particularly when observing the difference in performance between Logistic Regression (LR) and SRALinear, which ranges from +0.09 AUC for the Adult Income dataset to +10.05 AUC for the Bank Churn dataset. We recall that the LR model is the architecture that results from removing the SRA block, i.e., setting all attention weights to 1 (cf. Fig. 1).
## 5 Conclusion
We presented Self-Reinforcement Attention (SRA), a novel attention mechanism for tabular learning: a deep learning-based representation learning block that produces a reinforced version of the raw input through element-wise multiplication. We demonstrated the effectiveness and the benefits of SRA on both synthetic and benchmark imbalanced classification datasets. We also showed that SRA models are intelligible, in the sense that they provide an intrinsic attribution for each feature, which can further be used to understand the global model behavior. Our experimental results confirm the proposed model as a promising solution for self-explainable models in tabular learning settings, without the need to 'sacrifice accuracy'. Overall, we recommend that interested users check, as much as possible, the agreement of the SRA-based explanations with their domain knowledge, since these explanations are not causal. The SRA block, as proposed, can be further enriched, especially to deal with complex tasks. In this regard, we are currently working on how to use several heads and layers, similar to what is often done in attention-based architectures. Studying empirically the local stability of SRA explanations is also an important direction for future research, as is incorporating domain knowledge in the training phase (e.g., using monotonicity constraints with respect to some features).
|
2310.15654 | A Survey on Detection of LLMs-Generated Content | The burgeoning capabilities of advanced large language models (LLMs) such as
ChatGPT have led to an increase in synthetic content generation with
implications across a variety of sectors, including media, cybersecurity,
public discourse, and education. As such, the ability to detect LLMs-generated
content has become of paramount importance. We aim to provide a detailed
overview of existing detection strategies and benchmarks, scrutinizing their
differences and identifying key challenges and prospects in the field,
advocating for more adaptable and robust models to enhance detection accuracy.
We also posit the necessity for a multi-faceted approach to defend against
various attacks to counter the rapidly advancing capabilities of LLMs. To the
best of our knowledge, this work is the first comprehensive survey on the
detection in the era of LLMs. We hope it will provide a broad understanding of
the current landscape of LLMs-generated content detection, offering a guiding
reference for researchers and practitioners striving to uphold the integrity of
digital information in an era increasingly dominated by synthetic content. The
relevant papers are summarized and will be consistently updated at
https://github.com/Xianjun-Yang/Awesome_papers_on_LLMs_detection.git. | Xianjun Yang, Liangming Pan, Xuandong Zhao, Haifeng Chen, Linda Petzold, William Yang Wang, Wei Cheng | 2023-10-24T09:10:26Z | http://arxiv.org/abs/2310.15654v1 | # A Survey on Detection of LLMs-Generated Content
###### Abstract
The burgeoning capabilities of advanced large language models (LLMs) such as ChatGPT have led to an increase in synthetic content generation with implications across a variety of sectors, including media, cybersecurity, public discourse, and education. As such, the ability to detect LLMs-generated content has become of paramount importance. We aim to provide a detailed overview of existing detection strategies and benchmarks, scrutinizing their differences and identifying key challenges and prospects in the field, advocating for more adaptable and robust models to enhance detection accuracy. We also posit the necessity for a multi-faceted approach to defend against various attacks to counter the rapidly advancing capabilities of LLMs. To the best of our knowledge, this work is the first comprehensive survey on the detection in the era of LLMs. We hope it will provide a broad understanding of the current landscape of LLMs-generated content detection, offering a guiding reference for researchers and practitioners striving to uphold the integrity of digital information in an era increasingly dominated by synthetic content. The relevant papers are summarized and will be consistently updated at [https://github.com/Xianjun-Yang/Awesome_papers_on_LLMs_detection.git](https://github.com/Xianjun-Yang/Awesome_papers_on_LLMs_detection.git).
## 1 Introduction
With the rapid development of powerful AI tools, the risks of LLMs-generated content have raised considerable concerns, such as the spread of misinformation Bian et al. (2023); Hanley and Durumeric (2023); Pan et al. (2023), fake news Oshikawa et al. (2018); Zellers et al. (2019); Dugan et al. (2022), gender bias Sun et al. (2019), misuse in education Perkins et al. (2023); Vasilatos et al. (2023), and social harm Kumar et al. (2023); Yang et al. (2023).
In Figure 1, we show some topics regarding the threats posed by AI-written text to education, social media, elections, etc. We also find, from Google search trends, that concerns about AI-written text have increased significantly since the release of the latest powerful Large Language Models (LLMs) such as ChatGPT Schulman et al. (2022) and GPT-4 OpenAI (2023). With the fast advancement of model size, data scale, and AI-human alignment Brown et al. (2020); Ouyang et al. (2022), humans are already unable to directly distinguish between LLM- and human-written text. Since then, the field of LLMs has made remarkable advancements, leading to substantial improvements in the generation of content across various NLP tasks Qin et al. (2023); Yang et al. (2023). Alongside these advancements, there has been a proliferation of detection algorithms aimed at identifying LLMs-generated content. However, there remains a dearth of comprehensive surveys encompassing the latest methodologies, benchmarks, and attacks on LLMs-based detection systems.
Actually, AI has revolutionized various modalities, encompassing image generation models like DALLE Ramesh et al. (2021) and Imagen Saharia et al. (2022), text generation models like ChatGPT and Bard, audio processing models such as MMS Pratap et al. (2023), video generation models Singer et al. (2022), and code generation models Chen et al. (2021). In the past, there have been
Figure 1: We list the four most representative scenarios where the detection of LLMs-generated content is of paramount importance.
efforts to develop watermarking techniques for images [23, 24] and successful attacks against such techniques [10]. Additionally, research has been conducted on the detection of online ChatBots [12, 13]. Specifically, this survey concentrates on the detection of text and code generated by LLMs, as well as the attacks.
The current most powerful commercial LLMs, e.g., Anthropic's Claude, OpenAI's ChatGPT, and GPT-4, usually adopt the decoder-only transformer [12] architecture. Those models have tens to hundreds of billions of parameters, are trained on a large collection of text, and are further tuned to align with human preferences. During inference, text generation typically relies on top-\(k\) sampling [15] or nucleus sampling [16], possibly in conjunction with beam search. Concurrently, growing interest has been shown in detectors, such as the commercial tool GPTZero [13] or OpenAI's own detector [17], since humans can be easily fooled by improvements in decoding methods [18]. However, the misuse of detectors has also raised protests from students over unfair judgments of their homework and essays [1, 15], and popular detectors perform poorly on code detection [13].
Earlier work on text detection dates back to feature engineering [1]. For example, GLTR [1] assumes that generated words come from the head of the output distribution of small LMs such as BERT [14] or GPT-2 [1]. Recently, there has been an increasing focus on detecting ChatGPT-generated text [13, 12, 14] to mitigate ChatGPT misuse or abuse [15]. In particular, there have recently been calls for regulation\({}^{1}\) of the use of powerful AI such as ChatGPT [1, 14].
Footnote 1: [https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html](https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html)
Therefore, we firmly believe that the timing is ideal for a comprehensive survey on the detection of LLMs-generated content. It would serve to invite further exploration of detection approaches, offer valuable insights into the strengths and weaknesses of previous research, and highlight potential challenges and opportunities for the research community to address. Our paper is organized as follows: we first briefly describe the problem formulation, including the task definition, metrics, and datasets in Section 2. In Section 3, we classify detection by their working mechanism and scope of application. In section 4, we summarize the three popular detection methods: training-based, zero-shot and watermarking. We also investigate various attacks in Section 5 since defending against attacks is of increasing importance and point out some challenges in Section 6. Finally, in Section 7 we provide additional insights into this topic on potential future directions, as well as the conclusion in Section 8.
## 2 Problem formulation
### Overview
We refer to any textual output from LLMs following specific inputs as LLMs-Generated Content. It can be broadly classified into natural language, like news, essays, reviews, and reports, or programming languages, like Python, C++, and Java code. Current research usually aims at the detection of content of moderate length on specific topics. It is meaningless to classify a short factual sentence like _EMNLP started in 1996_, or a trivial code snippet like _def hello_world(): print('Hello World')_, as human- or AI-written.
Formally, consider an LLM denoted as \(LLM\), which generates a candidate text \(S\) of length \(|S|\) based on an input prompt. Let \(f()\) represent a potential detector we aim to use for classification, assigning \(f(S)\) to \(0\) or \(1\), where \(0\) and \(1\) signify human or machine, respectively. The \(LLM\) can be unknown (black-box), fully known (white-box), or partially known (known model name with unknown model parameters) to the detector. In practice, we are usually given a candidate corpus \(C\) comprising both human- and LLMs-generated content on which to test \(f()\).
Apart from the standard definition, machine-generated content can undergo additional modifications in practical scenarios, including rephrasing by humans or by other AI models. It is also possible that the candidate text is a mix of human- and machine-written text; for example, the first several sentences may be written by humans and the remaining parts by machines, or vice versa. When a text undergoes revisions, the community often perceives it as paraphrasing and treats it as either machine- or human-generated text, depending on the extent of these modifications and the intent behind them. However, it is important to highlight
light that if a substantial majority of the text is authored by humans, or if humans have extensively revised machine-generated text, it becomes challenging to maintain the assertion that the text is purely machine-generated. Hence, in this survey, we adhere to the traditional definition by considering machine-generated content as text that has not undergone significant modifications, and we consistently classify such text as machine-generated.
### Metrics
Previous studies Mitchell et al. (2023); Sadasivan et al. (2023) predominantly used the Area Under the Receiver Operating Characteristic (AUROC) score to gauge the effectiveness of detection algorithms. As detection is a binary classification problem, AUROC summarizes results across thresholds, and the F1 score is also informative. Krishna et al. (2023); Yang et al. (2023) suggest that AUROC may not consistently provide a precise evaluation, particularly as the score nears the optimal limit of 1.0: two detectors with an identical AUROC of \(0.99\) can exhibit substantial differences in detection quality from a user's perspective. From a practical point of view, ensuring a high True Positive Rate (TPR) is imperative while keeping the False Positive Rate (FPR) to a minimum. As such, recent work Krishna et al. (2023); Yang et al. (2023) reports TPR at a fixed 1% FPR alongside AUROC. Other work Sadasivan et al. (2023) also refers to Type I and Type II errors following the binary hypothesis test, and Fernandez et al. (2023) even report TPR at \(10^{-6}\) FPR.
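As a minimal illustration of these metrics, the snippet below computes AUROC and TPR at 1% FPR from detector scores; the scores here are synthetic stand-ins for a real detector's outputs on a labeled corpus.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic detector scores: label 1 = machine-generated, 0 = human-written.
rng = np.random.default_rng(0)
labels = np.concatenate([np.zeros(500), np.ones(500)])
scores = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)])

auroc = roc_auc_score(labels, scores)

# TPR at a fixed 1% FPR: sweep the ROC curve and take the best TPR
# among thresholds whose FPR stays at or below 1%.
fpr, tpr, _ = roc_curve(labels, scores)
tpr_at_1fpr = tpr[fpr <= 0.01].max()

print(f"AUROC: {auroc:.3f}, TPR@1%FPR: {tpr_at_1fpr:.3f}")
```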
### Datasets
In this section, we discuss the common datasets used for this task. The corpus is usually adopted from previous NLP tasks and reconstructed by prompting LLMs to generate new outputs as candidate machine-generated text. There are usually two prompting methods: (1) prompting LLMs with the questions in question-answering datasets; (2) prompting LLMs with the first 20 to 30 tokens to continue writing, for datasets without specific questions. Several datasets have been compiled for this purpose, including TuringBench Uchendu et al. (2021), HC3 Guo et al. (2023), CHEAT Yu et al. (2023), Ghostbuster Verma et al. (2023), OpenGPTText Chen et al. (2023), M4 Wang et al. (2023), MGTBench He et al. (2023), and MULTITuDE Macko et al. (2023); datasets not explicitly built for detection have also been used, such as C4 Raffel et al. (2019), shareGPT 2, and Alpaca Taori et al. (2023), as summarized in Table 1. For text detection, we only list datasets explicitly built for detection, though general datasets like C4 Raffel et al. (2019) or Alpaca Taori et al. (2023) can also be used. For code detection, we only list datasets that have been used in previous code detection work Lee et al. (2023); Yang et al. (2023); other code-generation corpora can also be adopted. Detailed descriptions are included in Appendix A.3.
Footnote 2: [https://sharegpt.com/](https://sharegpt.com/)
**Data Contamination.** Despite these released standard datasets, we argue that static evaluation benchmarks may not be desirable for this problem, given the rapid progress of LLMs trained, tuned, or aligned on large amounts of data from across the whole internet. On the one hand, Aaronson (2022) mentioned that text from Shakespeare or the Bible is often classified as AI-generated because such classic text is frequently included in the training data of generative language models. On the other hand, many detectors do not fully disclose their training data, especially commercial tools like GPTZero Tian (2023). It is natural to worry that standard evaluation benchmarks face a serious test-data contamination problem, considering that commercial detectors consistently improve their products for profit. So, with the rapid evolution of LLMs and detectors, the traditional paradigm of providing static benchmarks may no longer be suitable for AI-generated text detection. We provide a unique solution to this:
**Utilize the latest human-written content to reduce the data contamination problem, collecting such content from recently updated open-source websites that explicitly forbid posting AI-written posts.**
## 3 Detection Scenarios
The findings of previous research, such as Gehrmann et al. (2019) and Dugan et al. (2022), highlight the general difficulty humans face in distinguishing between human- and machine-generated text, motivating the development of automatic solutions. The detection process can be classified into black-box or white-box detection based on whether the detector has access to the source model's output logits. In black-box detection, there are two distinct cases: (1) the source model name is known, such as GPT-4; (2) the source model name is unknown, and the content might have been generated by models like GPT-4, Bard, or other undisclosed models. White-box detection likewise encompasses two cases: (1) the detector only has access to the model's output logits or partial logits, such as the top-5 token log probabilities of text-davinci-003; (2) the detector has access to the entire model weights. Figure 2 shows the four scenarios and three detector methods. These categorizations reflect the different levels of information available to detectors, ranging from limited knowledge to complete access, and cover the scenarios commonly encountered in detecting machine-generated content.
### Black-Box Detection with Unknown Model Source
This scenario closely resembles real-world applications, particularly when users, such as students, utilize off-the-shelf AI services to assist them in writing their essays. In such cases, teachers are often unaware of the specific AI service being employed. Consequently, this situation poses the greatest challenge as very limited information is available to identify instances of deception.
### Black-Box Detection with Known Model Source
In this scenario, we possess knowledge regarding the specific model from which the text originates, yet we lack access to its underlying parameters. This aspect carries considerable significance due to the market domination of major language model providers such as OpenAI and Google. Many users rely heavily on their services, enabling us to make informed assumptions about the model sources.
### White-Box Detection with Full Model Parameters
While access to the most powerful LLMs, such as Anthropic's Claude or OpenAI's ChatGPT, is typically limited, research assuming full access to the model parameters remains active. This assumption is reasonable, considering that researchers often face resource constraints that make experimenting with large-scale models challenging. For instance, watermarking-based methods [13] typically require full access to the model parameters. This technique manipulates the next-token prediction at each sampling position by modifying the output distribution.
| **Datasets** | **Length** | **Size** | **Data type** | **#Language** |
|---|---|---|---|---|
| TuringBench (2021) | 100~400 | 200K | News articles | 1 |
| HC3 (2023) | 100~250 | 44,425 | Reddit, Wikipedia, medicine and finance | 2 |
| CHEAT (2023a) | 100~300 | 35,304 | Academic abstracts | 1 |
| Ghostbuster (2023) | 200~1200 | 12,685 | Student essays, creative fiction, and news | 1 |
| GPT-Sentinel (2023) | 100~400 | 29,395 | OpenWebText (2023) | 1 |
| M4 (2023d) | 200~300 | 122,481 | Multi-domains | 6 |
| MGTBench (2023) | 10~200 | 2,817 | Question-answering datasets | 1 |
| HC3 Plus (2023b) | 100~250 | 214,498 | Summarization, translation, and paraphrasing | 2 |
| MULTITuDE (2023) | 150~400 | 74,081 | MassiveSumm (2021) | 11 |
| HumanEval (2021) | ~181 | 164 | Code exercises | 1 |
| APPS (2021) | ~474 | 5,000 | Code competitions | 1 |
| CodeContests (2022) | ~2239 | 165 | Code competitions | 6 |

Table 1: A summary of the detection datasets. Length is reported in number of words for text and characters for code. #Language is the number of natural languages for text datasets and of programming languages for code datasets.
Figure 2: Three categories of detectors and four detection scenarios: as the transparency decreases, the detection difficulty increases.
Although this approach necessitates access to the complete model parameters, it has shown promise and could potentially be adapted for practical use.
### White-Box Detection with Partial Model Information
This corresponds to scenarios where only partial model outputs are available, such as the top-5 token logits provided by text-davinci-003. Previous work like DetectGPT (Mitchell et al., 2023) and DNA-GPT (Yang et al., 2023) both utilize such probabilities to perform detection.
### Model Sourcing
Furthermore, another aspect of detection goes beyond distinguishing between human- and machine-generated content: determining which specific model may have generated the content, referred to as authorship attribution (Uchendu et al., 2020), origin tracing (Li et al., 2023), or model sourcing (Yang et al., 2023). We consider it a special scenario since it differs slightly from the other detection tasks.
## 4 Detection Methodologies
In this section, we delve into further details of the detection algorithms. Based on their distinguishing characteristics, existing detection methods can be categorized into three classes: (1) training-based classifiers, which typically fine-tune a pre-trained language model on collected binary data drawn from both human- and AI-generated text distributions; (2) zero-shot detectors, which leverage intrinsic properties of typical LLMs, such as probability curves or representation spaces, to perform self-detection; (3) watermarking, which hides identifying information within generated text that can later be used to determine whether the text came from a specific language model, rather than detecting AI-generated text in general. We summarize representative approaches in Figure 3, classified by the scenarios listed in Section 3.
### Training-based
Earlier work on training detection classifiers focused on fake reviews (Bhagat and Hovy, 2013), fake news (Zellers et al., 2019), or text from smaller models (Solaiman et al., 2019; Bakhtin et al., 2019; Uchendu et al., 2020). Subsequently, growing interest in this line of research has turned to detecting the high-quality text produced by LLMs.
#### 4.1.1 Black-box
The first line of work focuses on black-box detection. _When the model source is known_, some works use text generated by **(1) mixed sources** and train a single classifier for detection. For example, OpenAI (OpenAI, 2023) collects text generated by different model families and trains a robust detector for text with more than 1,000 tokens. GPTZero (Tian, 2023) also collects human-written text spanning student-written articles, news articles, and Q&A datasets across multiple disciplines, paired with generations from a variety of LLMs. Similarly, G\({}^{3}\)Detector (Zhan et al., 2023) claims to be a general GPT-generated text detector, fine-tuning RoBERTa-large (Liu et al., 2019) and exploring the effect of synthetic data on the training process. GPT-Sentinel (Chen et al., 2023) trains RoBERTa and T5 (Raffel et al., 2020) classifiers on their constructed dataset OpenGPTText. **(2) Mixed decoding** incorporates text generated with different decoding parameters to account for the variance: Ippolito et al. (2020) find that discriminators generally transfer poorly between decoding strategies, but training on a mix of data can help. GPT-Pat (Yu et al., 2023) trains a siamese network to compute the similarity between the original text and a re-decoded version. Besides, **(3) mixed strategies** involve additional signals, such as graph structure and contrastive learning in CoCo (Liu et al., 2022), proxy-model perplexity in LLMDet (Wu et al., 2023), positive-unlabeled training in MPU (Tian et al., 2023), and adversarial training in RADAR (Hu et al., 2023).
On the other hand, _when the source model is unknown_, the OpenAI text classifier (OpenAI, 2023) and GPTZero (Tian, 2023) still work via **(1) cross-domain transfer**. Other works (Pu et al., 2023; Antoun et al., 2023), including Conda (Bhattacharjee et al., 2023), also rely on the zero-shot generalization ability of detectors trained on a variety of model families and tested on unseen models. Ghostbuster (Verma et al., 2023) directly uses outputs from a known **(2) surrogate model** as the signal for training a classifier to detect unknown models. Additionally, **(3) detection in the wild** (Li et al., 2023) contributes a wild testbed by gathering texts from various human writings and deepfake texts generated by different LLMs, for detection without knowing their sources. A minimal sketch of the basic training-based recipe is given below.
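The sketch fine-tunes a RoBERTa binary classifier on labeled text; the toy corpus and hyperparameters are illustrative assumptions, and real systems train on large mixed-source datasets as described above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Toy corpus: label 0 = human-written, 1 = machine-generated.
corpus = [("An essay drafted by a student over several days ...", 0),
          ("An essay produced by a chatbot from a short prompt ...", 1)]

model.train()
for epoch in range(3):
    for text, label in corpus:
        batch = tok(text, truncation=True, max_length=512, return_tensors="pt")
        loss = model(**batch, labels=torch.tensor([label])).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference: probability that a candidate text is machine-generated.
model.eval()
with torch.no_grad():
    batch = tok("Some candidate text to classify.", return_tensors="pt")
    p_machine = model(**batch).logits.softmax(dim=-1)[0, 1].item()
```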
Figure 3: Taxonomy on detection of LLMs-generated content. We list the most representative approaches for each kind of method.
#### 4.1.2 White-box
The second kind of work addresses the white-box situation, where the model's full or partial parameters are accessible. For example, with full access to the model, GLTR (Gehrmann et al., 2019) trains a logistic regression over absolute word ranks at each decoding step. When only partial information such as output logits is available, SeqXGPT (Wang et al., 2023) introduces a sentence-level detection challenge by synthesizing a dataset of documents polished with LLMs, and proposes to detect them by treating the logits from white-box LLMs as waves. Sniffer (Li et al., 2023) utilizes contrastive logits between models as a feature for training, performing both detection and origin tracing.
### Zero-Shot
In the zero-shot setting, we do not require extensive training data to train a discriminator. Instead, we can leverage the inherent distinctions between machine-generated and human-written text, making the detector training-free. The key advantage of training-free detection is its adaptability to new data distributions without additional data collection and model tuning. It is worth noting that while watermarking methods can also be considered zero-shot, we treat them as an independent track. Previous work used entropy (Lavergne et al., 2008), average log-probability score (Solaiman et al., 2019), perplexity (Beresneva, 2016), or uncommon n-gram frequencies (Grechnikov et al., 2009; Badaskar et al., 2008) obtained from a language model as the judge for determining a text's origin (see the sketch below). However, such simple features fail as LLMs become more diverse and higher-quality text generators. As before, zero-shot methods split into black- and white-box detection, summarized in the following subsections.
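A minimal sketch of one such classic statistic, the average token log-probability under a small scoring LM, is given below; the choice of GPT-2 and the decision threshold are illustrative assumptions that would need calibration on held-out data.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_prob(text: str) -> float:
    """Average per-token log-probability of `text` under the scoring model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the mean cross-entropy over tokens.
        loss = model(ids, labels=ids).loss
    return -loss.item()  # negative cross-entropy = average log-probability

# Machine text tends to score higher (less surprising to the LM) than human text.
THRESHOLD = -3.0  # assumed; calibrate on held-out human/machine samples
text = "The quick brown fox jumps over the lazy dog."
print("machine" if avg_log_prob(text) > THRESHOLD else "human")
```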
#### 4.2.1 Black-Box
_When the source of the black-box model is known_, DNA-GPT (Yang et al., 2023) achieves superior performance by utilizing the N-gram divergence between the continuation distributions of re-prompted text and the original text. DetectGPT (Mitchell et al., 2023) also investigates using a surrogate model to replace the source model, but achieves unsatisfactory results. In contrast, Mireshghallah et al. (2023) show that a smaller surrogate model like OPT-125M (Zhang et al., 2022) can serve as a universal black-box text detector, achieving detection performance close to, or even better than, using the source model. Additionally, Krishna et al. (2023) suggest building a database of generated text and detecting a target text by comparing its semantic similarity with all texts stored in the database. Finally, DetectGPT4Code (Yang et al., 2023) investigates detecting code generated by ChatGPT through a small proxy code-generation model and conditional probability divergence, achieving significant improvements on code detection tasks.
_When the source of the model is unknown_, PHD (Tulchinskii et al., 2023) observes that real text exhibits a statistically higher intrinsic dimensionality than machine-generated text across various reliable generators. It employs the Persistent Homology Dimension estimator (PHD) to measure this intrinsic dimensionality, combined with an additional encoder such as RoBERTa to facilitate the estimation.
#### 4.2.2 White-Box
_When partial access to the model is given_, traditional methods use features such as entropy (Lavergne et al., 2008) and average log-probability score (Solaiman et al., 2019) for detection. However, these approaches struggle to detect text from the most recent LLMs. The pioneering work DetectGPT (Mitchell et al., 2023) observes that LLM-generated text tends to occupy negative-curvature regions of the model's log-probability function, and leverages a curvature-based criterion computed from random perturbations of the passage (sketched below). DNA-GPT (Yang et al., 2023) utilizes the probability divergence between the continuation distributions of re-prompted text and the original text, achieving state-of-the-art performance. Later, Deng et al. (2023) improve the efficiency of DetectGPT with a Bayesian surrogate model, selecting typical samples based on Bayesian uncertainty and interpolating scores from typical samples to other ones. Similar to DNA-GPT's use of conditional probability for discrimination, Fast-DetectGPT (Bao et al., 2023) builds an efficient zero-shot detector by replacing the probability in DetectGPT with conditional probability curvature, witnessing significant efficiency improvements. Additionally, GPT-who (Venkatraman et al., 2023) utilizes Uniform Information Density (UID) based features to model the unique statistical signature of each LLM and human author for accurate authorship attribution.
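A simplified sketch of the curvature criterion follows. DetectGPT perturbs passages with T5 mask-filling; here, random word deletion is a crude stand-in for that perturbation function, and the number of perturbations and the threshold are illustrative assumptions.

```python
import random
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_prob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return -model(ids, labels=ids).loss.item()  # mean log-probability per token

def perturb(text: str, drop: float = 0.15) -> str:
    # Crude stand-in for DetectGPT's T5 mask-filling perturbations.
    words = text.split()
    kept = [w for w in words if random.random() > drop]
    return " ".join(kept) if kept else text

def curvature_score(text: str, n_perturb: int = 20) -> float:
    # Machine text tends to sit near a local maximum of log p, so perturbing it
    # lowers the average score more than perturbing human text does.
    base = avg_log_prob(text)
    perturbed = [avg_log_prob(perturb(text)) for _ in range(n_perturb)]
    return base - sum(perturbed) / n_perturb

passage = "Some candidate passage of a few sentences to be tested ..."
print("machine" if curvature_score(passage) > 0.2 else "human")  # assumed threshold
```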
_When full access to the model is given_, Su et al. (2023) leverage log-rank information for zero-shot detection through a fast and efficient method, DetectLLM-LRR (Log-Likelihood Log-Rank Ratio), and a more accurate method, DetectLLM-NPR (Normalized Perturbed log-Rank), the latter slower due to its need for perturbations.
### Watermarking
Text watermarking injects algorithmically detectable patterns into generated text while ideally preserving the quality and diversity of language model outputs. Although the concept of watermarking is well-established in vision, its application to digital text poses unique challenges due to text's discrete and semantics-sensitive nature (Kutter et al., 2000). Early works were edit-based methods that modify a pre-existing text. The earliest can be dated back to Atallah et al. (2001), who designed a scheme for watermarking natural language text by embedding small portions of a watermark bit string in the syntactic structure of the text, followed by paraphrasing (Atallah et al., 2003), syntax-tree manipulations (Topkara et al., 2005; Meral et al., 2009), and synonym substitution (Topkara et al., 2006). Text watermarking has also been used for steganography and secret communication (Fang et al., 2017; Ziegler et al., 2019; Abdelnabi and Fritz, 2021) and intellectual property protection (He et al., 2022, 2023; Zhao et al., 2022, 2023), but these are outside the scope of this work. In light of growing ethical considerations, text watermarking has been increasingly used to ascertain the origin of textual content and detect AI-generated content (Grinbaum and Adomaitis, 2022). The primary focus of this paper is the use of text watermarking to detect AI-generated text.
In general, watermarking for text detection can also be classified into white-box and black-box watermarking. Watermarking is designed to determine whether text comes from a specific language model, rather than universally detecting text generated by any potential model. As such, knowledge of the model source is always required when using text watermarking for detection.
#### 4.3.1 Black-Box Watermarking
In the black-box setting, such as API-based applications, the proprietary nature of the language models used by LLM providers precludes downstream users from accessing the sampling process for commercial reasons. Alternatively, a user may wish to watermark human-authored text via post-processing. In such cases, black-box watermarking aims to automatically manipulate generated text to embed watermarks readable by third parties. Traditional works designed complex linguistic rules such as paraphrasing (Atallah et al., 2003), syntax-tree manipulations (Topkara et al., 2005; Meral et al., 2009), and synonym substitution (Topkara et al., 2006), but these lacked scalability. Later work turned to pre-trained language models for efficient watermarking. For example, Yang et al. (2022) propose a natural language watermarking scheme based on context-aware lexical substitution (LS). Specifically, they employ BERT (Devlin et al., 2019) to suggest LS candidates by inferring the semantic relatedness between the candidates and the original sentence. Yang et al. (2023) first define a binary encoding function to compute a pseudo-random binary encoding for each word. The encodings computed over non-watermarked text conform to a Bernoulli distribution, wherein the probability of a word representing bit-1 is approximately 0.5. To inject a watermark, they alter this distribution by selectively replacing words representing bit-0 with context-based synonyms that represent bit-1. A statistical test is then used to identify the watermark.
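The final statistical test in such a lexical scheme can be made concrete. In the sketch below, each word is hashed to a pseudo-random bit; unwatermarked text should contain bit-1 words at a rate near 0.5, so a one-proportion z-test flags text whose rate is suspiciously high. The hash construction and the significance threshold are illustrative assumptions, not the exact design of the cited methods.

```python
import hashlib
from math import sqrt

def word_bit(word: str) -> int:
    # Pseudo-random binary encoding of a word (illustrative construction).
    digest = hashlib.sha256(word.lower().encode()).digest()
    return digest[0] & 1

def watermark_z_score(text: str) -> float:
    bits = [word_bit(w) for w in text.split()]
    n, ones = len(bits), sum(bits)
    # Under H0 (no watermark), bits ~ Bernoulli(0.5): E[ones] = n/2, Var = n/4.
    return (ones - n / 2) / sqrt(n / 4)

# Flag the text as watermarked if the one-sided z-score is large (e.g. > 4,
# roughly a 3e-5 false positive rate under H0).
z = watermark_z_score("some candidate text to test for a lexical watermark")
print("watermarked" if z > 4.0 else "not watermarked")
```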
#### 4.3.2 White-Box Watermarking
The most popular approach, **(1) training-free** watermarking, directly manipulates the decoding process when the model is deployed. In the effort to watermark GPT outputs, Aaronson (2022), working with OpenAI, first developed a technique for watermarking language models using exponential minimum sampling to sample text from the model, where the inputs to the sampling mechanism are a hash of the previous \(k\) consecutive tokens passed through a pseudo-random number generator. Via the Gumbel-Softmax rule (Jang et al., 2016), their method is proven to preserve generation quality. Christ et al. (2023) provide a formal definition and construction of undetectable watermarks. Their cryptographically inspired design watermarks blocks of text from a language model by hashing each block to seed a sampler for the next block; however, this method remains a theoretical concept without experimental results. Another pioneering training-free watermark (Kirchenbauer et al., 2023) embeds invisible watermarks in the decoding process by
dividing the vocabulary into a "green list" and a "red list" based on the hash of the prefix token, and subtly increasing the probability of choosing from the green list. A third party, equipped with knowledge of the hash function and random number generator, can then reproduce the green list for each token and monitor violations of the green-list rule (a detection sketch follows). Subsequently, Zhao et al. (2023) simplify the scheme by consistently using a fixed green-red list split, showing that the new watermark retains guaranteed generation quality and is more robust against text editing. Kuditipudi et al. (2023) create distortion-free watermarks by utilizing randomized watermark keys to sample from the token probability distribution via inverse transform sampling and exponential minimum sampling. Hou et al. (2023) propose a sentence-level semantic watermark based on locality-sensitive hashing (LSH), which partitions the semantic space of sentences; the advantage of this design is its enhanced robustness against paraphrasing attacks. DiPmark (Wu et al., 2023) is an unbiased, distribution-preserving watermark that preserves the original token distribution during watermarking and is robust to moderate token changes, incorporating a novel reweighting strategy combined with a hash function that assigns unique i.i.d. ciphers based on the context. Drawing on the drawbacks of random green-red list splitting, Fu et al. (2023) use the input sequence to select semantically related tokens for watermarking, improving certain conditional generation tasks.
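A minimal detection sketch for the green-list scheme: the detector re-derives each position's green set from a hash of the preceding token and applies a one-proportion z-test with green fraction \(\gamma\). Hashing each (previous token, candidate token) pair directly, as done here, is a simplification of seeding an RNG that partitions the vocabulary; the token ids and threshold are illustrative.

```python
import hashlib
from math import sqrt

GAMMA = 0.5  # fraction of the vocabulary on the green list at each position

def is_green(prev_token: int, token: int) -> bool:
    # Hash of the previous token (plus the candidate) stands in for seeding an
    # RNG that splits the vocabulary into green and red lists.
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GAMMA

def green_z_score(token_ids: list[int]) -> float:
    n = len(token_ids) - 1
    hits = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    # Under H0 (no watermark), each position is green with probability GAMMA.
    return (hits - GAMMA * n) / sqrt(n * GAMMA * (1 - GAMMA))

# During generation, the watermarker adds a bias to green-token logits, inflating
# the green-hit rate; at detection time a large z-score reveals that bias.
z = green_z_score([314, 1101, 257, 3303, 2746, 290, 428, 318])  # arbitrary ids
print("watermarked" if z > 4.0 else "not watermarked")
```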
Beyond training-free watermarking, text watermarks can also be injected through pre-inference or post-inference training: **(2) training-based** watermarking. One example of pre-inference training is REMARK-LLM (Zhang et al., 2023), which injects the watermark via a message encoding module that generates a dense token distribution, followed by a message decoding module that extracts messages from the watermarked text, with reparameterization used as a bridge between the dense distribution and the tokens' one-hot encoding. The drawback is that training is required on source data and may not generalize well to unseen text. In contrast, post-inference training adds a trained module to assist in injecting watermarks during inference. For instance, Liu et al. (2023) propose a semantically invariant, robust watermark for LLMs, utilizing another embedding LLM to generate semantic embeddings for all preceding tokens. It is not training-free, however, since these semantic embeddings are transformed into watermark logits through a trained watermark model.
Beyond zero-bit watermarks, there is also **(3) multi-bit** watermarking. For example, Yoo et al. (2023) design a multi-bit watermark following a well-known proposition from image watermarking that identifies natural-language features invariant to minor corruption, and propose a corruption-resistant infill model. COLOR (Yoo et al., 2023) subsequently designs another multi-bit watermark by embedding traceable multi-bit information during language model generation while simultaneously allowing zero-bit detection. Fernandez et al. (2023) also consolidate watermarks for LLMs through more robust statistical tests and multi-bit watermarking.
### Commercial Tool
Beyond academic research, AI text detection has also drawn considerable attention from commercial companies. Table 2 summarizes popular commercial detectors. Although most of them claim on their websites to be the most accurate AI detector available, it is essential to evaluate their performance on factors such as accuracy, speed, robustness, and compatibility with different platforms and frameworks. Regrettably, few articles explicitly compare these properties across popular commercial detectors.
## 5 Detection Attack
Despite the progress of detection work, there are also continuous efforts to evade existing detectors, and we summarize the main streams in this section.
### Paraphrasing Attack
Paraphrasing can be performed by human writers, by other LLMs, or even by the same source model, and may undergo several rounds influenced by a mixture of different models. Current research mostly focuses on the simple case where another model rewrites a machine-generated text for one round. For instance, Krishna et al. (2023) train a T5-11B model for paraphrasing text and discover that all detectors experience a significant drop in quality when faced with paraphrased text. Simple paraphrasing attacks also involve word substitutions (Shi et al., 2023), and paraphrasing can further be achieved through translation attacks. Conducting more in-depth analysis of complex paraphrasing techniques will be crucial in the future. Becker et al. (2023) systematically examine different classifiers, encompassing both classical approaches and Transformer techniques, for detecting machine- (e.g., T5-) or human-paraphrased text.
### Adversarial Attack
Though adversarial attacks are popular in general NLP tasks (Alzantot et al., 2018), little work has specifically addressed adversarial attacks on detectors of LLM-generated content. We highlight two types of attacks worth further investigation and exploration:
_Adversarial Examples:_ Attackers can generate specially crafted inputs by making subtle modifications to the text that fool AI text detectors while appearing mostly unchanged to human readers (Shi et al., 2023). These modifications can include adding or removing certain words or characters, introducing synonyms, or leveraging linguistic tricks to deceive the detector (a toy example follows). Evasion attacks manipulate the detector's behavior by exploiting its vulnerabilities: attackers can use techniques such as obfuscation, word permutation, or introducing irrelevant or misleading content to evade detection, with the goal of triggering false negatives and avoiding being flagged as malicious or inappropriate.
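To make this concrete, the sketch below applies two toy evasion perturbations, inserting zero-width characters and swapping common words for synonyms, which often leave the text readable to humans while shifting a detector's token statistics. The synonym table and insertion rate are illustrative assumptions; real attacks search such edits adversarially against a specific detector.

```python
import random

SYNONYMS = {"use": "utilize", "help": "assist", "show": "demonstrate"}  # toy table
ZERO_WIDTH = "\u200b"  # zero-width space, invisible to most human readers

def evade(text: str, insert_rate: float = 0.1, seed: int = 0) -> str:
    random.seed(seed)
    out = []
    for word in text.split():
        word = SYNONYMS.get(word.lower(), word)
        if random.random() < insert_rate:
            # Splitting a word with an invisible character changes its tokenization.
            mid = len(word) // 2
            word = word[:mid] + ZERO_WIDTH + word[mid:]
        out.append(word)
    return " ".join(out)

print(evade("Researchers show that detectors help flag generated text."))
```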
_Model Inversion Attacks:_ Attackers can launch model inversion attacks by exploiting the responses of AI text detectors. They might submit carefully crafted queries and observe the model's responses to gain insights into its internal workings, architecture, or training data, which can be used to create more effective attacks or subvert the system's defenses.
### Prompt Attack
Current LLMs are vulnerable to prompts (Zhu et al., 2023); thus, users can employ cleverly designed prompts to evade established detectors. For example, Shi et al. (2023) examine instructional prompt attacks that perturb the input prompt to encourage LLMs to generate texts that are difficult to detect. Lu et al. (2023) also show that LLMs can be guided to evade AI-generated text detection by a novel Substitution-based In-Context example Optimization method (SICO), which automatically generates carefully crafted prompts, enabling ChatGPT to evade six existing detectors with a significant 0.54 AUC drop on average. Nevertheless, limited attention has been devoted to this topic, indicating a notable research gap meriting significant scholarly exploration in the near future. Notably, a recent work (Chakraborty et al., 2023) introduces the Counter Turing Test (CT2), a benchmark of techniques aiming to comprehensively evaluate the robustness of six existing detection techniques; their empirical findings unequivocally highlight the fragility of almost all the detection methods under scrutiny. Beyond hard prompt attacks, Kumarage et al. (2023) first create an evasive soft prompt tailored to a specific PLM through prompt tuning, then leverage the transferability of soft prompts to transfer the learned evasive soft prompt from one PLM to another, finding the evasion attack to be universally effective.
## 6 Challenges
### Theoretical Analysis
Inspired by the binary hypothesis test in Polyanskiy and Wu (2022), Sadasivan et al. (2023) claim that machine-generated text will become indistinguishable as the total variation between the distributions of human and machine text approaches zero.
| **Product Name** | **Website** | **Price** | **API available** |
|---|---|---|---|
| Originality.AI | [https://app.originality.ai/api-access](https://app.originality.ai/api-access) | $0.01/100 words | Yes |
| Quil.org | [https://aimwritingcheck.org/](https://aimwritingcheck.org/) | Free website version | No |
| Sapling | [https://sapling.ai/ai-content-detector](https://sapling.ai/ai-content-detector) | 1 million chars at $25/month | Yes |
| OpenAI text classifier | [https://openai-openai-detector.hf.space/](https://openai-openai-detector.hf.space/) | Free website version | Yes |
| Crossplag | [https://crossplag.com/ai-content-detector/](https://crossplag.com/ai-content-detector/) | Free website version | No |
| GPTZero | [https://gptzero.me/](https://gptzero.me/) | 0.5 million words at $14.99/month | Yes |
| ZeroGPT | [https://www.zerogpt.com/](https://www.zerogpt.com/) | Free website version | No |
| CopyLeaks | [https://copyleaks.com/ai-content-detector](https://copyleaks.com/ai-content-detector) | 25,000 words at $10.99/month | No |

Table 2: A summary of popular commercial tools to detect AI-generated text.
In contrast, Chakraborty et al. (2023) demonstrate that it is always possible to distinguish the two by curating more data, making the detection AUROC increase exponentially with the number of training instances. Additionally, DNA-GPT (Yang et al., 2023) demonstrates the difficulty of obtaining a high TPR while maintaining a low FPR. Nevertheless, theoretical examination of the intrinsic differences between human-written language and LLM outputs remains scarce. Scholars could leverage the working mechanisms of GPT models to establish a robust theoretical analysis, shedding light on detectability and fostering the development of additional detection algorithms.
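For reference, the bound of Sadasivan et al. (2023) ties the best achievable AUROC of any detector \(D\) to the total variation distance \(TV(\mathcal{M},\mathcal{H})\) between the machine and human text distributions:

\[\text{AUC}(D)\leq\frac{1}{2}+TV(\mathcal{M},\mathcal{H})-\frac{TV(\mathcal{M},\mathcal{H})^{2}}{2}\]

As \(TV(\mathcal{M},\mathcal{H})\rightarrow 0\), the bound collapses to the random-guess AUROC of 0.5, while the counterargument of Chakraborty et al. (2023) rests on the total variation over \(n\) collected samples growing with \(n\).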
### LLM-Generated Code Detection
Previous detectors usually focus only on text, but LLM-generated code is also of increasing quality (see a recent survey (Zan et al., 2022)). Among the first, Lee et al. (2023) found that previous text watermarking (Kirchenbauer et al., 2023) does not work well for code in terms of either detectability or generated code quality. Generated code exhibits persistently low entropy (Lee et al., 2023), making the decoding process more deterministic. They thus adapt text watermarks to code generation by injecting watermarks only into tokens with entropy above a given threshold, achieving more satisfactory results (sketched below). Code detection is generally believed to be even harder than text detection due to code's shorter length, low entropy, and non-natural-language properties. DetectGPT4Code (Yang et al., 2023) detects code generated by ChatGPT by using a proxy code model to approximate the logits on the conditional probability curve, achieving the best results over previous detectors.
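The entropy-gated variant can be sketched in a few lines: apply the green-list logit boost only at positions where the next-token entropy exceeds a threshold, leaving the near-deterministic positions typical of code untouched. The threshold and bias below are illustrative assumptions, not the values used by Lee et al. (2023).

```python
import torch

def entropy(logits: torch.Tensor) -> float:
    p = torch.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-12)).sum().item()

def watermark_logits(logits: torch.Tensor, green_mask: torch.Tensor,
                     tau: float = 1.2, delta: float = 2.0) -> torch.Tensor:
    """Boost green-list tokens only at high-entropy positions (entropy-gated watermark)."""
    if entropy(logits) < tau:
        # Low entropy: the next code token is nearly forced; watermarking here
        # would be either undetectable or harmful to code correctness.
        return logits
    return logits + delta * green_mask

# green_mask is a 0/1 vector marking this position's green list, derived from a
# hash of the preceding tokens as in text watermarking.
logits = torch.randn(32_000)
green_mask = (torch.rand(32_000) < 0.5).float()
biased = watermark_logits(logits, green_mask)
```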
### Model Sourcing
Model sourcing (Yang et al., 2023) is also known as origin tracing (Li et al., 2023) or authorship attribution (Uchendu et al., 2020). Unlike the traditional distinction between human- and machine-generated texts, it focuses on identifying the specific source model from a pool of models, treating humans as a distinct model category. With the fast advancement of LLMs from different organizations, it is vital to tell which model or organization potentially generated a certain text, with practical applications particularly in copyright protection. Consequently, we believe it may become the responsibility of organizations releasing powerful LLMs to determine whether a given text is a product of their system. Previous work either trains a classifier (Li et al., 2023) or utilizes intrinsic genetic properties (Yang et al., 2023) to perform model sourcing, but still cannot handle more complicated scenarios. GPT-who (Venkatraman et al., 2023) utilizes Uniform Information Density (UID) based features to model the unique statistical signature of each LLM and human author for accurate authorship attribution.
### Bias
Current detectors have been found to be biased against non-native speakers (Liang et al., 2023). Yang et al. (2023) also found that previous detection tools often perform poorly on languages other than English. Besides, current research usually focuses on detecting text within a certain length range, thus showing bias against shorter texts. Ensuring the integrity of detectors across scenarios, without bias against certain groups, is of central importance.
### Generalization
Currently, the most advanced LLMs, like ChatGPT, are actively updated; OpenAI, for example, releases a major update roughly every three months. Effectively adapting existing detectors to updated LLMs is therefore of great importance. For example, Tu et al. (2023) record a ChatLog of ChatGPT's responses to long-form generation queries every day for one month, observe performance degradation of a RoBERTa-based detector, and find some stable features that improve detection robustness. As LLMs continuously benefit from interacting with different datasets and human feedback, exploring ways to effectively and efficiently detect their generations remains an ongoing research area. Additionally, Kirchenbauer et al. (2023) investigate the reliability of watermarks for large language models and claim that watermarking is a reliable solution under human paraphrasing and various attacks at context lengths of around 1,000 tokens. Pu et al. (2023) examine the zero-shot generalization of machine-generated text detectors and find that no detector generalizes to all generators. All these findings reveal the difficulty of reliable generalization to unseen models or data sources.
## 7 Future Outlook
The detection of LLM-generated content is an evolving field. Here, we list some potential avenues for future work (details are included in Appendix A.4): (1) robust and scalable detection techniques; (2) rigorous and standard evaluation; (3) fine-grained detection; (4) user education and awareness; (5) transparency and explainability.
## 8 Conclusion
We comprehensively survey the detection of LLM-generated content, covering existing task formulations, benchmark datasets, evaluation metrics, and detection methods under various scenarios. We also point out existing challenges, such as diversified attack approaches, and share our views on the future of detection. We hope our survey helps the research community quickly grasp the progress and challenges of detection methodologies, and inspires more ideas toward the urgently needed reliable detectors.
## Limitations
Despite conducting a comprehensive literature review on AI-generated content detection, we acknowledge the potential for omissions due to incomplete searches.
## Ethics Statement
The utilization of AI detection presents significant ethical considerations, particularly when it comes to the detection of plagiarism among students. Misclassifications in this context can give rise to substantial concerns. This survey aims to summarize the current techniques employed in this field comprehensively. However, it is important to note that no flawless detectors have been developed thus far. Consequently, users should exercise caution when interpreting the detection outcomes, and it should be understood that we cannot be held accountable for any inaccuracies or errors that may arise.
|
2306.08685 | World-to-Words: Grounded Open Vocabulary Acquisition through Fast
Mapping in Vision-Language Models | The ability to connect language units to their referents in the physical
world, referred to as grounding, is crucial to learning and understanding
grounded meanings of words. While humans demonstrate fast mapping in new word
learning, it remains unclear whether modern vision-language models can truly
represent language with their grounded meanings and how grounding may further
bootstrap new word learning. To this end, we introduce Grounded Open Vocabulary
Acquisition (GOVA) to examine grounding and bootstrapping in open-world
language learning. As an initial attempt, we propose object-oriented BERT
(OctoBERT), a novel visually-grounded language model by pre-training on
image-text pairs highlighting grounding as an objective. Through extensive
experiments and analysis, we demonstrate that OctoBERT is a more coherent and
fast grounded word learner, and that the grounding ability acquired during
pre-training helps the model to learn unseen words more rapidly and robustly.
Our code is available at https://github.com/sled-group/world-to-words | Ziqiao Ma, Jiayi Pan, Joyce Chai | 2023-06-14T18:10:05Z | http://arxiv.org/abs/2306.08685v1 | # World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models
###### Abstract
The ability to connect language units to their referents in the physical world, referred to as _grounding_, is crucial to learning and understanding grounded meanings of words. While humans demonstrate fast mapping in new word learning, it remains unclear whether modern vision-language models can truly represent language with their grounded meanings, and how grounding may further bootstrap new word learning. To this end, we introduce Grounded Open Vocabulary Acquisition (GOVA) to examine grounding and bootstrapping in open-world language learning. As an initial attempt, we propose object-oriented BERT (OctoBERT), a novel visually-grounded language model by pre-training on image-text pairs highlighting grounding as an objective. Through extensive experiments and analysis, we demonstrate that OctoBERT is a more coherent and fast grounded word learner, and that the grounding ability acquired during pre-training helps the model to learn unseen words more rapidly and robustly.1
Footnote 1: Code available at [https://github.com/sled-group/world-to-words](https://github.com/sled-group/world-to-words).
## 1 Introduction
Language is learned through sensorimotor experience in the physical world [1]. The ability to connect language units to their referents in the physical world, referred to as _grounding_, plays an important role in learning and understanding grounded meanings of words [1]. As shown in Figure 1, a human reader would easily ground noun phrases to the corresponding entities captured in the image. Even when the term "incinerator" is new to human learners, they can still locate the object of interest through the language and visual context, and acquire its meaning. In fact, this ability to bootstrap new word learning with only minimal information, known as _fast mapping_, is demonstrated abundantly in cognitive literature on human language acquisition [1, 1, 2, 3].
Recently, there has been a substantial effort on pre-training vision-language models (VLMs) [11]. Despite the exciting performance of these models on a variety of downstream vision and language tasks, it remains unclear whether these models can truly understand or produce language with their grounded meanings in the perceived world, and how grounding may further bootstrap new word learning. These questions are of interest from both a scientific and an engineering point of view. From a scientific perspective, grounding is crucial to language learners, as children attend to intended objects in the environment when producing [12, 13] and comprehending [22] utterances. From an engineering perspective, even with the availability of grounded vision language datasets (image-text pairs with fine-grained word-object mappings) [10], the costly grounding annotation can hardly cover the whole vocabulary space during the training time. Building upon the pre-trained models, it's important for the agent to have the ability to learn grounded new words in a few shots of raw image-text pairs without word-object mappings.
To this end, we introduce Grounded Open Vocabulary Acquisition (GOVA), a scalable formulation to examine grounding and bootstrapping in open-world language learning.
Figure 1: Even when the term “incinerator” (highlighted yellow) is new to human learners, they can still locate the most likely referent (indicated by the yellow bounding box) in the perceived world by grounding.
In this formulation, language learning is a combination of learning to predict a word in a linguistic context as well as learning to ground the word in the physical world. Under this formulation, we explore the framework in which the model first acquires the grounding ability during pre-training and then transfers this ability to learn unseen words without grounding supervision. As an initial step, we developed object-oriented BERT (OctoBERT), a novel visually grounded language model motivated by recent advances in detection transformers (DETR) Carion et al. (2020); Kamath et al. (2021). Compared to many existing VLMs, OctoBERT performs language modeling upon explicit object representations. The model first acquires the ability to ground during pre-training, and then transfers this intrinsic ability to learn unseen words when grounded supervision is no longer available.
Our empirical results show that learning to map words to their referents plays a significant role in grounded word acquisition. By pre-training with fine-grained word-object mappings, OctoBERT demonstrates stronger performance in learning grounded meanings of words, both seen and unseen, yet with orders of magnitude fewer data compared to other competitive VLM baselines. The pre-trained model can further provide a foundation for efficient learning of new grounded words with a few examples. We further present an in-depth analysis to understand potential predictors of VLMs in word learning, which demonstrates intriguing behaviors in comparison to human language learning. Our findings will facilitate future work on grounded language learning in an open world.
## 2 Grounded Open Vocabulary Acquisition (GOVA)
We start by introducing the settings of _grounded word acquisition_ and _few-shot learning of new words_ tasks, which are two key components of the Grounded Open Vocabulary Acquisition (GOVA) task formulation. We further present a unified evaluation protocol and introduce the dataset we curated for this problem.
### Grounded Word Acquisition
Many vision-language tasks have been developed in the past, _e.g._, visual question answering, visual commonsense reasoning, etc. However, these tasks are mainly focused on the end task performance without scrutinizing whether words are grounded to their corresponding visual entities. We consider a formulation that directly examines if vision-language models have the ability to acquire grounded meanings of words, specifically, through both _language modeling_ and _object localization_. Figure 2 shows an instance of the word acquisition task. A model is presented with an image \(x_{\text{img}}\in\mathcal{I}\) and an incomplete caption \(x_{\text{cap}}\in\mathcal{T}\) with one of its groundable words \(w\) (_e.g._, nouns and adjectives) replaced by a MASK. The model is tasked to predict this missing word \(w\in\mathcal{V}\) based on all available context and localize the corresponding objects \(O_{w}=\{o_{1},o_{2},\cdots,o_{n}\}\) in the image by proposing the bounding boxes of them. Overall, a model capable of solving the grounded word acquisition task is a function \(f:\mathcal{I}\times\mathcal{T}\rightarrow\mathcal{V}\times\mathbb{R}^{4n}\).
The language modeling part takes the form of a cloze test, which predicts an open vocabulary word and is widely adopted to evaluate pre-trained language models Paperno et al. (2016); Petroni et al. (2019); Jin et al. (2020). However, language modeling alone fails to provide a comprehensive evaluation of language grounding. For example in Figure 2, a model may correctly produce the word "boat," but mistakenly attributes the evidence to the larger white boat in the image. To address this limitation, we require models to localize the corresponding object in the image. This design is motivated by the disentanglement of object detection into object localization and class recognition Singh et al. (2018); Zareian et al. (2021); Zhong et al. (2022). It enables vision models to develop a sense of objectness without relying on a predefined set of object classes, thereby potentially allowing them to generalize to unseen objects. Further comparison with related task setups is discussed in Section 5 and illustrated in Figure 8 in the Appendix.
### Evaluation Metric
In language model evaluation, the commonly used measures for assessing performance are the standard hit-rate-at-\(k\) (HR\(@k\)) measure and perplexity (Salazar et al., 2020; Jin et al., 2020).
Figure 2: An instance of the word grounding task. Models are tasked to predict the missing word boat and localize the corresponding smaller yellow boat in the image coherently.
In masked language modeling, the log perplexity of a word \(w\) is defined as the log pseudo-perplexity:
\[\log\text{PPL}(w)=-\log P(w|x_{\text{img}},x_{\text{cap}}) \tag{1}\]
In object detection evaluation, especially for phrase grounding where multiple referents are possible (Kamath et al., 2021), Any-Protocol and All-Protocol are commonly adopted. Assuming \(n\) ground truth bounding boxes \(B=\{b_{1},b_{2},\cdots,b_{n}\}\) and \(m\) predicted bounding boxes \(\widetilde{B}=\{\widetilde{b_{1}},\widetilde{b_{2}},\cdots,\widetilde{b_{m}}\}\), the intersection-over-union (IoU) in both protocols is defined as:
\[\text{IoU}_{\text{any}}=\frac{1}{n}\sum_{i\in\{1,2,\cdots,n\}}\max_{j\in\{1,2,\cdots,m\}}\text{IoU}(b_{i},\widetilde{b_{j}}) \tag{2}\]
\[\text{IoU}_{\text{all}}=\text{IoU}(\cup B,\cup\widetilde{B}) \tag{3}\]
However, these metrics only capture unimodal performance without concerning the correctness of cross-modal mapping. We design two new metrics to combine language and vision performance (a computation sketch follows the list):
* **Grounded hit-rate** (G-HR\(@k\)), the proportion of tests with the masked word appearing in the top-\(k\) candidates and a localization IoU over 0.5.
* **Grounded perplexity** (G-PPL) as follows: \[\log\text{G-PPL}(w)=\begin{cases}\infty&\text{if IoU}=0\\ \log\text{PPL}(w)-\log\text{IoU}&\text{else}\end{cases}\] (4)
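A minimal computation of the Any-Protocol IoU and the grounded perplexity is sketched below, assuming boxes in \((x_{1},y_{1},x_{2},y_{2})\) format; the All-Protocol additionally requires the IoU between the unions of the two box sets and is omitted here.

```python
import math
import torch
from torchvision.ops import box_iou

def iou_any(gt_boxes: torch.Tensor, pred_boxes: torch.Tensor) -> float:
    # Mean over ground-truth boxes of the best-matching predicted box (Eq. 2).
    return box_iou(gt_boxes, pred_boxes).max(dim=1).values.mean().item()

def log_g_ppl(log_ppl: float, iou: float) -> float:
    # Grounded perplexity (Eq. 4): infinite when localization fails entirely.
    return math.inf if iou == 0 else log_ppl - math.log(iou)

gt = torch.tensor([[10.0, 10.0, 50.0, 50.0]])
pred = torch.tensor([[12.0, 8.0, 48.0, 52.0], [100.0, 100.0, 150.0, 150.0]])
print(log_g_ppl(log_ppl=1.3, iou=iou_any(gt, pred)))
```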
### Few-Shot Learning of New Words
Although there are grounding datasets available, _i.e._, image-text pairs with word-object mapping annotation (Plummer et al., 2015), it is impractical to obtain such fine-grained annotation on a large scale and to cover the whole vocabulary space \(\mathcal{V}\). We therefore explore grounded new word learning as a few-shot learning problem, especially under the setting of incremental class learning (Mandziuk and Shastri, 1999; Kemker et al., 2018). An intuitive illustration of the few-shot new word learning framework is provided in Figure 3. Under this framework, a computational model is developed in two stages. During the pre-training stage, the model receives image-caption pairs, with fine-grained word-object annotation for a set of base words \(\mathcal{V}_{\text{seen}}\subseteq\mathcal{V}\). After pre-training, the model is provided with a few samples of raw text-image pairs, each containing a set of unseen words \(\mathcal{V}_{\text{unseen}}\subseteq\mathcal{V}\) that the model has to acquire.
Tests are performed after each training stage. It's important to note that the unseen words may not be completely new, _e.g._, the models may have encountered these words in its language encoder initialized with pre-trained language models. We consider them "unseen" because the model never sees these words paired with their referent, _i.e._, the grounded meanings of the words are unknown.
### Dataset Curation
We build our dataset based on the Flickr30K Entities dataset (Plummer et al., 2015), which contains image-text pairs with dense annotations between groundable phrases and bounding boxes of objects. The groundable phrases and regions are defined by the dataset, as chunks of text that refer to object bounding boxes. To construct word grounding instances, we use Stanza (Qi et al., 2020) to parse the caption, enumerate every word in the groundable phrase, and identify those with a POS tag of NOUN or ADJ. These groundable words are replaced by MASK one at a time and matched to their corresponding bounding boxes.
The dataset is divided into 4 splits: pre-training set, unseen words training set, seen words test set, and unseen words test set. We start by selecting 31 unseen words and holding out all text-image pairs containing these words from the training split of Flickr30K Entities. The hold-out text-image pairs are further divided into the training and test sets for unseen words. The remaining training split of Flickr30K Entities is used for the pre-training set. To prevent frequent words (_e.g._, "man") from dominating the test results of the seen words, we choose 60 seen words and sample an equal number of test instances for each word from the test split of Flickr30K Entities. More details and statistics of the dataset are available in Appendix A.
Figure 3: An illustration of the few-shot new word learning paradigm. The model first pre-trains on a grounding dataset with a set of base words (\(\mathcal{V}_{\text{seen}}\)), and then attempts to acquire a set of unseen words (\(\mathcal{V}_{\text{unseen}}\)) in a small number of raw text-image pairs. Tests are performed after each training session.
## 3 Computational Models
### Object-Oriented BERT (OctoBERT)
Humans demonstrate fast mapping, the ability to learn new words with only minimal information Carey and Bartlett (1978); Carey (1978); Golinkoff et al. (2000). Motivated by how visual grounding helps humans in bootstrapping new words, we propose a computational framework that first acquires the ability to ground during pre-training, and then transfers this intrinsic ability to learn unseen words when grounded supervision is no longer available. We introduce object-oriented BERT (OctoBERT), an end-to-end visually-grounded language model illustrated in Figure 4.
Model Architecture. Similarly to dual-stream vision-language models, OctoBERT encodes the textual input with a pre-trained language model Liu et al. (2019), and encodes the image input with a convolutional backbone He et al. (2016) with 2D positional encoding added. The text and image representations are linearly projected onto a joint semantic space and concatenated. The multimodal representation is then forwarded into a cross-encoder with self-attention layers. The cross-encoded representations in the final layer are sent into an object decoder, together with a set of learnable object queries. The object decoder produces an object embedding for each input object query, which can be considered a representation of the proposed object. The object representations are further forwarded to the text decoder, which allows language modeling to explicitly attend to the perceived objects. We discuss the pre-training objectives, especially how the model acquires grounding, in the following paragraphs. Other details are available in Appendix B.
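To make the pipeline concrete, a compressed PyTorch skeleton of the described architecture is sketched below; layer sizes, the number of object queries, and the module granularity are illustrative assumptions rather than the authors' exact configuration, and the 2D positional encoding is omitted for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import RobertaModel

class OctoBERTSketch(nn.Module):
    def __init__(self, d_model: int = 256, num_queries: int = 100, vocab: int = 50265):
        super().__init__()
        self.text_enc = RobertaModel.from_pretrained("roberta-base")
        backbone = resnet50(weights=None)
        self.img_enc = nn.Sequential(*list(backbone.children())[:-2])  # CNN feature map
        self.txt_proj = nn.Linear(768, d_model)
        self.img_proj = nn.Conv2d(2048, d_model, kernel_size=1)
        self.cross_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), 6)
        self.obj_queries = nn.Embedding(num_queries, d_model)
        self.obj_dec = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), 6)
        self.txt_dec = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), 6)
        self.box_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                      nn.Linear(d_model, 4))  # bounding boxes
        self.mlm_head = nn.Linear(d_model, vocab)             # masked-word logits

    def forward(self, input_ids, attention_mask, image):
        txt = self.txt_proj(self.text_enc(input_ids, attention_mask).last_hidden_state)
        img = self.img_proj(self.img_enc(image)).flatten(2).transpose(1, 2)
        joint = self.cross_enc(torch.cat([txt, img], dim=1))
        queries = self.obj_queries.weight.unsqueeze(0).expand(txt.size(0), -1, -1)
        objects = self.obj_dec(queries, joint)   # one embedding per proposed object
        words = self.txt_dec(txt, objects)       # language modeling attends to objects
        return self.box_head(objects).sigmoid(), self.mlm_head(words)
```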
Masked Language Modeling (MLM). As an intrinsic task, we follow the majority of existing pre-trained vision-language models to perform masked language modeling with a two-layer MLP. Words in input text are randomly masked out, and the model predicts the masked words conditioned on the corrupted sentence and image. Words in groundable phrases are masked with a probability of 0.4 and those in non-groundable regions are masked with a lower probability of 0.1.
Object Localization (OL). Each object representation will be decoded by a shared three-layer MLP to produce a bounding box. We follow prior detection transformers (DETR) Carion et al. (2020); Kamath et al. (2021) to perform bipartite matching between proposed boxes and ground truth boxes with a Hungarian loss Kuhn (1955). The predicted boxes are optimized towards ground truth using the generalized intersection-over-union (GIoU) loss Rezatofighi et al. (2019) and the L1 loss.
Grounding. The notion of _Grounding_ is realized by grounded pre-training through word-region alignment (WRA) which enables fine-grained cross-modal mapping between words and objects. It consists of two levels of alignment: _positional alignment_ and _semantic alignment_. In positional alignment, the model learns to map each object representation to words in the sentence, which could possibly be a MASK or an additional no-object label \(\varnothing\)Yu and Siskind (2013); Kamath et al. (2021). We use a fully-connected layer to predict the distribution over token positions with cross-entropy loss. In semantic alignment, the model learns to bring word representations closer to the object representations that they ground to, and push the unrelated pairs farther. We use a contrastive loss over the final layers of the object and text decoders.
### Baselines
Groundless Baseline. A baseline with no grounding ability is developed by pre-training OctoBERT in the same condition but removing the grounding objectives from the loss function.
Figure 4: An overview of OctoBERT, a visually grounded language model pre-trained with three objectives: masked language modeling (MLM), object localization (OL), and grounding through word-region alignment (WRA).
We refer to this groundless model as \(\mathtt{OctoBERT_{w/o\text{ G}}}\). Like a typical pre-trained VLM, _e.g._, VisualBERT (Li et al., 2019), \(\mathtt{OctoBERT_{w/o\text{ G}}}\) performs language modeling based on the object features, without explicit cross-modal referential grounding. We apply \(\mathtt{OctoBERT_{w/o\text{ G}}}\) to the GOVA task by fine-tuning the model on the pre-training dataset with the grounding objective until convergence.
Pre-trained Baselines. For the majority of pre-trained VLMs, our unseen words are already known during pre-training. Also, the primary focus of this work is to understand grounding and bootstrapping in grounded word acquisition; it is not our goal to scale up or re-train all variants of pre-training frameworks. Therefore, we compare our model to pre-trained VLMs of equal or reasonably larger scale for reference and analysis purposes only. We choose representative baselines in phrase grounding, as presented in Table 1:
* "Detect-and-Recognize" Baseline: Models under this framework rely on a pre-trained frozen object detector, and then learn to predict words from proposed objects. We choose the fine-tuned VisualBERT (Li et al., 2019) for this type.
* "Produce-and-Localize" Baseline: Models under this framework rely on a pre-trained vision-language model to predict the missing word, and then perform referring expression comprehension and propose objects. We combine ViLT (Kim et al., 2021) and MDETR (Kamath et al., 2021) for their competitive performance in vision-conditioned language modeling and phrase grounding individually.
## 4 Empirical Findings
### Grounded Pre-training
The results of this section are obtained from the test immediately following pre-training.
Pre-training Results on Seen Words. The main results for the pre-training stage are summarized in Table 1. Our direct observation is the strong performance of \(\mathtt{OctoBERT}\) in terms of both grounded metrics, Top-1 Grounded Hit-Rate (G-HR\(@\)1) and Grounded Perplexity (G-PPL). \(\mathtt{OctoBERT}\) significantly outperforms the groundless baseline \(\mathtt{OctoBERT_{w/o\text{ G}}}\) and pre-trained baselines, even for systems pre-trained with a significantly larger amount of data and computing, as shown in Table 2. While \(\mathtt{OctoBERT}\) produces correct predictions of the missing words as well as the locations of the corresponding bounding boxes, it turns out to be challenging for the baselines to achieve both. For the "Detect-and-Recognize" baseline (VisualBERT), we observe a comparable object localization performance empowered by the frozen object detector. However, it suffers from a poor language modeling ability (as demonstrated by HR\(@\)1 and PPL, weaker than a fine-tuned RoBERTa). For the "Produce-and-Localize" baseline (ViLT+MDETR), we observe a strong language modeling performance due to the scale of ViLT. Yet, correct word grounding remains difficult, as can be seen from the poor localization performance. These results demonstrate that the \(\mathtt{GOVA}\) task is challenging, and \(\mathtt{OctoBERT}\) is competitive in learning grounded word meanings during pre-training.
[Table 1 compares RoBERTa, RoBERTa (FT), ViLT, MDETR, ViLT+MDETR, VisualBERT (FT), \(\mathtt{OctoBERT_{w/o\text{ G}}}\) (FT), and \(\mathtt{OctoBERT}\) on G-HR\(@\)1 (\(\uparrow\)), log G-PPL (\(\downarrow\)), HR\(@\)1 (\(\uparrow\)), log PPL (\(\downarrow\)), localization accuracy (\(\uparrow\)), and IoU (\(\uparrow\)) for both the seen and unseen vocabularies.]

Table 1: Test results on the seen and unseen words, obtained immediately after pre-training. Unless noted explicitly as fine-tuned (FT), all results reflect the performance of models without fine-tuning. Evaluations under both All- and Any-protocols are provided as (All/Any) pairs. For models depending on a frozen pre-trained object detector, we can only provide evaluation under the All-protocol. We note that the unseen words are only unseen to \(\mathtt{OctoBERT}\), as pre-trained baselines have encountered them during development. We report the results for reference.
| Models | # Param | # Imgs | # Caps | Objectives |
|---|---|---|---|---|
| RoBERTa | 120M | - | - | MLM |
| VisualBERT | 180M | 200K | 567K | MLM, ITM |
| ViLT | 110M | 4.0M | 10M | WRA*, MLM, ITM |
| MDETR | 200M | 200K | 1.3M | WRA, OL |
| \(\mathtt{OctoBERT}\) | 200M | 30K | 150K | WRA, MLM, OL |
| \(\mathtt{OctoBERT_{w/o\text{ G}}}\) | 200M | 30K | 150K | MLM, OL |

*WRA is formulated as word-patch alignment in ViLT, thus it cannot perform object localization without major modifications.

Table 2: The baselines for comparisons and references. ITM stands for Image Text Matching, and all the other abbreviations follow Section 2.
**Bootstrapping through Grounded Objectives.** We further provide a cross-time analysis to understand the role of grounded objectives in pre-training efficiency. The results at different training steps are provided in Table 3. From the table, we observe that OctoBERT outperforms both of its groundless variants in language modeling, object localization, and jointly under the grounded perplexity. What is even more striking is that OctoBERT achieves better performance with _10 times less training data_ compared to the model trained without the grounding objective (_i.e._, the WRA objective). These results confirm the crucial role of explicit word-object alignment in efficient grounded word learning. This can be explained by the fact that the grounded objectives align the vision and language semantic spaces, which ideally benefits both visually conditioned language modeling and language-conditioned object localization. Although it is possible to build a mapping between word and object representations through cross-modal probing and fine-tuning after pre-training, such methods are not comparable to systems with grounded objectives in terms of efficiency and performance.
**Pre-training Results on Unseen Words: Word-Agnostic Grounding.** One important finding about the pre-trained model is its surprising performance in localizing the unseen words behind the MASKs. As shown in Table 1, OctoBERT achieves a high Any-IoU of 56.3% and Any-localization accuracy of 61.3% for the unseen words, which are very close to its performance on the seen set and surpass baselines that have seen these words. Moreover, as anticipated, since these words are held out during pre-training, OctoBERT fails to correctly unmask these unseen words, leading to a high log perplexity of 11.01 and a low HR of 4.2, compared to 1.26 and 66.9 on the seen words. Figure 5 shows an example of such word-agnostic grounding.
This performance disparity in language modeling and referent localization on unseen words suggests that OctoBERT has developed a certain level of word-agnostic grounding, _i.e._, to locate the most likely referent of a word through both the linguistic context and the visual context, even if the word itself is never seen during pre-training. A similar situation is faced by human language learners when inferring the grounded meaning of a new word, as we described earlier in Figure 1. Our experiment demonstrates that, through grounded pre-training, it is possible for a vision-language system to acquire word-agnostic grounding ability, which opens up the opportunity to enable human-like fast mapping when learning new words.
### Few-Shot New Words Acquisition
In this section, we task the model with acquiring unseen words from a few samples of raw image-text pairs, without any bounding box or word-object mapping annotations. As we have demonstrated the model's word-agnostic grounding, we seek to explore whether this ability transfers to learning unseen words when large amounts of data and grounded supervision are no longer available. Specifically, we perform few-shot learning on the pre-trained OctoBERT with only masked language modeling as the learning objective. More hyper-parameter details are available in Appendix B.2.
Learning New Words through Incremental Learning. We first explore the multi-class incremental learning setting, in which the pre-trained model is tasked to acquire the 31 unseen words in a few-shot learning session. The experiment is repeated with sample sizes of 8, 16, 24, and 32 immediately after pre-training. As shown in Figure 6, even with as few as 8 samples per word, OctoBERT can significantly bring down the grounded perplexity of unseen words, while mostly maintaining the grounded perplexity of the seen words without catastrophic forgetting. Compared to OctoBERT without the grounding objective, the full OctoBERT demonstrates better acquisition performance for unseen words. It is important to note that these few-shot examples are text/image pairs without explicit grounding annotation. OctoBERT is able to quickly acquire grounded meanings of the new words (_e.g._, with only 8 examples) with a performance close to that on seen words.

Figure 5: Although the word “elephant” is unseen to OctoBERT, the model is still able to localize the object in the image referred to by the MASK.

| # Steps | Metrics | OctoBERT | OctoBERT\({}_{\text{w/o G}}\) (FT) |
|---|---|---|---|
| 10k | IoU (\(\uparrow\)) | **46.7 / 46.2** | 36.9 / 35.3 |
| 10k | log PPL (\(\downarrow\)) | **1.46** | 1.53 |
| 10k | log G-PPL (\(\downarrow\)) | **2.22 / 2.23** | 2.52 / 2.57 |
| 50k | IoU (\(\uparrow\)) | **58.1 / 57.1** | 39.6 / 38.8 |
| 50k | log PPL (\(\downarrow\)) | **1.26** | 1.44 |
| 50k | log G-PPL (\(\downarrow\)) | **1.80 / 1.82** | 2.34 / 2.38 |
| 100k | IoU (\(\uparrow\)) | **58.7 / 57.6** | 40.0 / 38.2 |
| 100k | log PPL (\(\downarrow\)) | **1.26** | 1.41 |
| 100k | log G-PPL (\(\downarrow\)) | **1.79 / 1.81** | 2.34 / 2.38 |

Table 3: Comparison of OctoBERT and its non-grounding version at different training steps. OctoBERT\({}_{\text{w/o G}}\) is evaluated with fine-tuning. Both All- and Any-protocols are provided as (All/Any) pairs.
We further perform a word-specific controlled study with a one-class incremental learning setting. We present results on two unseen words (pizza and circular) in Table 4. The complete results are available in Appendix D.
### Predictors of Model Behaviors
There has been interest in identifying predictors that can explain or anticipate the performance and behavior of pre-trained language models (Chang and Bergen, 2022). This exploration not only offers valuable insights for future model development, but also serves as a cognitive inquiry into the extent to which language models align with human language acquisition patterns. In this section, we present the first work of this nature on vision-language models. Specifically, we note that the OctoBERT model relies on a RoBERTa encoder, which might already be equipped with prior linguistic knowledge. To assess the cognitive alignment of vision-language models to human language acquisition, we additionally pre-trained the OctoBERT and OctoBERT\({}_{\text{w/o G}}\) models with a randomly initialized RoBERTa encoder.
To comprehensively capture various aspects of words, we carefully select eight distinct predictors that encompass intrinsic psycho-linguistic characteristics, distribution patterns within the training corpus, and visual representations within the training images. We select 3 **psycho-linguistic predictors**, each collected and normalized from the MRC Database (Coltheart, 1981):
* Familiarity, the degree of familiarity or exposure people have to words;
* Concreteness, the degree to which words have a perceptible physical referent or are associated with tangible objects or experiences;
* Imageability, the degree to which words elicit people's mental imagery.
Another 3 **linguistic predictors** are considered:
* Unigram perplexity;
* RoBERTa perplexity, where RoBERTa is fine-tuned on the captions to serve as the upper bound of unimodal language model performance;
* # Co-occur phrases, the average number of co-occurring groundable phrases in a caption.
We finally choose 2 **perceptual predictors**:
* # Co-occur objects, the average number of co-occurring objects in an image;
* Bbox size, the average proportion of an image occupied by the bounding boxes of the referents.
To assess the statistical significance of each predictor, we performed linear regressions with likelihood ratio tests on different variants of the models. Similar to Chang and Bergen (2022), we compare the overall regression including the target predictor to a regression that includes all predictors except the target. We additionally present the beta weights (with signs) to capture the magnitude and direction of the correlation. Figure 7 displays heatmaps indicating the statistical significance (in terms of negative logarithmic \(p\)-values) of each predictor with respect to Log G-PPL, Log PPL, and Any-IoU. Insignificant tests are omitted from the figure.
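These tests amount to a standard nested-model comparison; a sketch with OLS (the helper name and the convention that the target predictor occupies the last column of `X_full` are our own):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def lr_test(y, X_full, X_reduced):
    """Likelihood ratio test of the full regression against the one that
    drops the target predictor; returns (beta of target, -log10 p)."""
    full = sm.OLS(y, sm.add_constant(X_full)).fit()
    reduced = sm.OLS(y, sm.add_constant(X_reduced)).fit()
    stat = 2.0 * (full.llf - reduced.llf)          # LR statistic
    dof = X_full.shape[1] - X_reduced.shape[1]     # number of dropped predictors
    p_value = chi2.sf(stat, dof)
    return full.params[-1], -np.log10(p_value)
```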
Correlation with Linguistic and Perceptual Predictors. Our findings revealed a positive correlation between the unigram and RoBERTa log perplexity and the models' log perplexity, both for grounded and ungrounded scenarios. This indicates that vision-language models still heavily rely on distributional statistics, similar to unimodal models. While the ungrounded perplexity showed little correlation with perceptual predictors, the Any-IoU demonstrated a significant correlation with the number of co-occurring objects and the average sizes of bounding boxes. This suggests that concepts that are visually salient and less perceptually ambiguous are easier to localize and acquire, consistent with human learners Smith and Yu (2008).

| # Samp. | log G-PPL (pizza): OctoBERT | log G-PPL (pizza): OctoBERT\({}_{\text{w/o G}}\) | log G-PPL (circular): OctoBERT | log G-PPL (circular): OctoBERT\({}_{\text{w/o G}}\) |
|---|---|---|---|---|
| 0 | 10.70 | 9.59 | 15.21 | 15.12 |
| 8 | **1.47** | 2.21 | **1.59** | 2.25 |
| 16 | **1.07** | 2.54 | **1.07** | 2.25 |
| 24 | **1.19** | 1.25 | **1.55** | 1.81 |
| 32 | **0.90** | 1.18 | **1.23** | 1.61 |

Table 4: The log G-PPL (All-Protocol) of unseen words in one-class incremental learning, each unseen word with a sample size ranging from 8 to 32.

Figure 6: The log G-PPL (All-Protocol) of seen and unseen words in multi-class incremental learning, each unseen word with a sample size ranging from 8 to 32.
Correlation with Psycho-linguistic Predictors. Counter-intuitively, there was a positive alignment between the human-perceived familiarity of words and the machine's perplexities, _i.e._, the more familiar humans are with a word, the more perplexed models get. This contrasts with the ideal cognitive plausibility of language acquisition in humans. This discrepancy implies that current vision-language models may not fully achieve cognitive plausibility, which might be explained by the fact that many concepts (_e.g._, wild animals, musical instruments) appear abundantly in internet images but not in daily lives. In terms of imageability, it aligned well with human intuition, exhibiting a positive correlation with Any-IoU and a negative correlation with perplexities. However, the concreteness predictor surprisingly exhibited the opposite correlation. This discrepancy could be attributed to the nuanced distinction between imageability and concreteness. For instance, while "hat" is concrete because it refers to a tangible object, it also possesses visual diversity due to its generality (_e.g._, there are many types of hats that look very different), making it challenging to acquire. Conversely, "blue" is more imageable as it easily evokes a color and is relatively stable, despite not referring to a specific tangible object. To learn the meaning of "hat," a human language learner may benefit from physically interacting with the object and understanding that a hat is an item to cover the head, regardless of its visual appearance. To address this gap, a potential future direction could involve developing language learning agents that acquire words through physical interactions rather than passive perception, allowing for a more comprehensive understanding of word meanings.
## 5 Related Work
Vision-Language Mapping. Mapping plays a central role in the classic lexicon acquisition problem Gleitman and Landau (1994); Clark (1995). Primarily, researchers focused on grounding words to their meaning symbols, building learning mechanisms using specific mental biases to simulate children's word acquisition, and giving computational accounts for psycholinguistic phenomena Siskind (1996); Regier (2005); Goodman et al. (2007); Fazly et al. (2010). Early efforts along this line incorporate visual grounding by learning a statistical or neural mapping from object categories Roy and Pentland (2002); Yu (2005); Xu and Tenenbaum (2007); Yu and Ballard (2007); Yu and Siskind (2013) and more complicated visual features Qu and Chai (2010); Mao et al. (2019, 2021); Pratt et al. (2020) to linguistic labels. These studies are usually set in a closed world with a limited vocabulary Krahmer and van Deemter (2019), and words are usually isolated from the natural context of use. More recently, multi-modal understanding tasks have emerged, _e.g._, object retrieval Guadarrama et al. (2014); Hu et al. (2016), referring expression comprehension Liu et al. (2014); Yu et al. (2016); Mao et al. (2016); Wu et al. (2020), and phrase grounding Plummer et al. (2015), which map text to corresponding objects. Our setup is closely related to this line as we position _grounding_ as an explicit word-referent mapping problem. The difference is that our work goes beyond grounding to study open-vocabulary acquisition through fast mapping, a more complicated but realistic challenge faced by AI agents.

Figure 7: Heatmaps of the statistical significance of each predictor towards each metric. The beta weights and signs are presented outside of the parentheses, and the negative log \(p\)-values are presented in the parentheses. Insignificant tests with \(p>0.05\), _i.e._, \(-\log(p)<1.30\), are discarded. w/(o) Init refers to the text encoder initialization.
Vision-Language Pre-training. Distributional word representations can be acquired through language modeling, and developing language models from visual data has been extensively studied by the community (Chrupala et al., 2015; Lazaridou et al., 2015; Li et al., 2017; Suris et al., 2020). Recent years have seen increasing research on enriching language representations with visually-augmented language modeling (Tan and Bansal, 2020; Lu et al., 2022; Wang et al., 2022) and on learning multimodal representations with vision-language pre-training (VLP) (Du et al., 2022). We are particularly interested in VLP models with fine-grained grounding objectives, _e.g._, Word-Region Alignment (WRA). These models either pre-train with weakly supervised alignment algorithms like optimal transport that matches words with patches (Kim et al., 2021) or proposals from a frozen detector (Chen et al., 2020; Su et al., 2020), or perform explicit word grounding by pre-training a language-conditioned detector (Kamath et al., 2021; Li et al., 2022; Zhong et al., 2022; Dou et al., 2022). Our model falls along this line: it jointly performs language modeling, object localization, and grounding during pre-training, rather than relying upon a pre-existing object detector.
Vision-Language Tasks. To evaluate vision-language systems, many downstream tasks have been formulated. Some related formulations are summarized in Table 5 in the Appendix. While demonstrating some vision-language capabilities, these downstream tasks provide limited insight into whether these models truly capture the grounded meaning of words with respect to the external environment. Our task design specifically targets the machine's ability to predict words and to ground words in perception. More akin to our formulation is the vision-based language modeling task (Jin et al., 2020) in a continual learning setting. Our work differs mainly in two aspects. First, their task only predicts masked tokens based on the visual context, which leaves the referential uncertainty (_i.e._, grounding) unattended (_e.g._, in Figure 2, correct prediction of the word "boat" does not guarantee correct grounding). Also, Jin et al. (2020) focuses on compositionality, while we seek to address few-shot grounded word learning when unseen words are encountered after pre-training.
Open-Vocabulary Object Detection. Early works formulate fast mapping of new words as a zero-shot object classification problem, which aims to generalize from known object labels to unknown ones (Socher et al., 2013; Frome et al., 2013; Elhoseiny et al., 2013; Lazaridou et al., 2014). The setting was later extended to a localization task, referred to as zero-shot object detection (ZSD) (Bansal et al., 2018; Zhu et al., 2019, 2020; Rahman et al., 2020). More recently, open-vocabulary object detection (OVD) (Zareian et al., 2021; Gu et al., 2022; Du et al., 2022; Minderer et al., 2022) combines ZSD with weakly supervised object detection (WSD) to address the unrealistic constraints of traditional zero-shot settings. OVD assumes the availability of coarse-grained image-caption pairs, and attempts to generalize from limited fine-grained annotation of object categories to unseen ones. Nevertheless, this line of work positions words as object categories and isolates them from their linguistic context (_e.g._, sentences). Our setup instead challenges models to perform language modeling in human-generated captions.
## 6 Conclusion and Future Work
The connection between words and their referents captures the grounded meaning of words, and an explicit treatment of it is key to empowering efficient open-world language learning abilities in humans and AI agents. This work introduces Grounded Open Vocabulary Acquisition (GOVA), a scalable formulation to examine grounding and fast mapping in open-world grounded language learning. We propose OctoBERT, a novel visually grounded language model, to investigate a paradigm where the model initially acquires grounding ability during pre-training and subsequently applies this ability to quickly learn new words without explicit grounding supervision. Our empirical findings highlight the significance of visual grounding in neural word acquisition. In particular, we find that the pre-trained OctoBERT can serve as a foundation for fast mapping of novel grounded words via few-shot learning. We also conduct a comprehensive analysis of potential predictors influencing the performance of vision-language models, revealing both consistent and surprising behaviors with respect to human language learning patterns. These insights pave the way for future research on grounded language learning in the open world.
### Limitations
In this work, we limit ourselves to object-centric grounding, which ignores that language can also ground events, attributes, manners, mental states, etc. The grounded meaning of some groundable words, especially ADVs, NUMs, VERBs, and PRONs, cannot be fully captured by bounding boxes alone. Future work should explore better task formulations to study the acquisition of their grounded meanings. An exciting direction along this line is to extend the setting from images to videos and physical interactions with the environment, and to incorporate the rich temporal dynamics of the world for language acquisition. In addition, we did not address the social aspects of language learning, where children infer the referents of words from their caregivers through communication Carpenter et al. (1998); Bloom (2000). Future work could also investigate grounded word acquisition from natural dialogue.
## Ethics Statement
This project does not involve any research artifacts generated through human subject studies. Despite the considerable promise of OctoBERT, it is crucial to examine its ethical and societal implications. The computational model relies on pre-trained language models and extensive text-image datasets, which could contain hidden biases that may result in fairness problems within the algorithms. By recognizing and actively addressing these implications, we aim to increase awareness among practitioners if the model is deployed as a language-learning agent in the future.
## Acknowledgments
This work was supported in part by NSF IIS-1949634, NSF SES-2128623, and by the Automotive Research Center (ARC) at the University of Michigan. The authors would like to thank the anonymous reviewers for their valuable feedback.
|
2310.13225 | Scalable Neural Network Kernels | We introduce the concept of scalable neural network kernels (SNNKs), the
replacements of regular feedforward layers (FFLs), capable of approximating the
latter, but with favorable computational properties. SNNKs effectively
disentangle the inputs from the parameters of the neural network in the FFL,
only to connect them in the final computation via the dot-product kernel. They
are also strictly more expressive, as allowing to model complicated
relationships beyond the functions of the dot-products of parameter-input
vectors. We also introduce the neural network bundling process that applies
SNNKs to compactify deep neural network architectures, resulting in additional
compression gains. In its extreme version, it leads to the fully bundled
network whose optimal parameters can be expressed via explicit formulae for
several loss functions (e.g. mean squared error), opening a possibility to
bypass backpropagation. As a by-product of our analysis, we introduce the
mechanism of the universal random features (or URFs), applied to instantiate
several SNNK variants, and interesting on its own in the context of scalable
kernel methods. We provide rigorous theoretical analysis of all these concepts
as well as an extensive empirical evaluation, ranging from point-wise kernel
estimation to Transformers' fine-tuning with novel adapter layers inspired by
SNNKs. Our mechanism provides up to 5x reduction in the number of trainable
parameters, while maintaining competitive accuracy. | Arijit Sehanobish, Krzysztof Choromanski, Yunfan Zhao, Avinava Dubey, Valerii Likhosherstov | 2023-10-20T02:12:56Z | http://arxiv.org/abs/2310.13225v2 | # Scalable Neural Network Kernels
###### Abstract
We introduce the concept of _scalable neural network kernels_ (SNNKs), the replacements of regular _feedforward layers_ (FFLs), capable of approximating the latter, but with favorable computational properties. SNNKs effectively disentangle the inputs from the parameters of the neural network in the FFL, only to connect them in the final computation via the dot-product kernel. They are also strictly more expressive, as allowing to model complicated relationships beyond the functions of the dot-products of parameter-input vectors. We also introduce the _neural network bundling process_ that applies SNNKs to compactify deep neural network architectures, resulting in additional compression gains. In its extreme version, it leads to the fully bundled network whose optimal parameters can be expressed via explicit formulae for several loss functions (e.g. mean squared error), opening a possibility to bypass backpropagation. As a by-product of our analysis, we introduce the mechanism of the _universal random features_ (or URFs), applied to instantiate several SNNK variants, and interesting on its own in the context of scalable kernel methods. We provide rigorous theoretical analysis of all these concepts as well as an extensive empirical evaluation, ranging from point-wise kernel estimation to Transformers' fine-tuning with novel adapter layers inspired by SNNKs. Our mechanism provides up to 5x reduction in the number of trainable parameters, while maintaining competitive accuracy.
## 1 Introduction
Consider a kernel function: \(\mathrm{K}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\), taking as input two feature vectors encoding latent embeddings of their corresponding objects and returning their similarity. Kernel methods are among the most theoretically principled approaches to statistical machine learning (ML) and have proven effective in numerous real-world problems (Scholkopf & Smola, 2002; Kontorovich et al., 2008). Despite their theoretical guarantees and applicability in a rich spectrum of ML settings, the main drawback of these techniques is a high computational complexity, at least quadratic in the size \(N\) of the training dataset. For example, the kernel regression has time complexity \(\mathcal{O}(N^{3})\).
To address this issue, Rahimi & Recht (2007) proposed to construct a random feature (RF) map \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{m}\) that transforms an input point \(\mathbf{z}\) to a finite-dimensional feature vector \(\Phi(\mathbf{z})\in\mathbb{R}^{m}\) such that: \(\mathrm{K}(\mathbf{x},\mathbf{y})=\mathbb{E}[\Phi(\mathbf{x})^{\top}\Phi(\mathbf{y})]\) (effectively providing an approximate linearization of the kernel function). Approximating general kernels \(\mathrm{K}(\mathbf{x},\mathbf{y})\) via linear (dot-product) kernels \(\mathrm{K}(\mathbf{x},\mathbf{y})\approx\widehat{\mathbf{x}}^{\top}\widehat{\mathbf{y}}\) for \(\widehat{\mathbf{z}}=\Phi(\mathbf{z})\) drastically changes the computational complexity landscape, which is now dominated by the number \(m\) of random features, thus providing computational gains if \(m\ll N\). Since their seminal work, there have been a variety of works proposing random features for a broad range of kernels like Gaussian, Matern (Choromanski et al., 2018) and polynomial (Kar & Karnick, 2012; Wacker et al., 2022).
In the meantime, with the advances in optimization algorithms for deep ML architectures and in accelerator hardware, neural network (NN) models (Goodfellow et al., 2016; Schmidhuber, 2014; LeCun et al., 2015) have become predominant in machine learning. The _feedforward layer_
(FFL) is the core computational module of NNs and is of the following form:
\[\mathbf{x}\to f(\mathbf{W}\mathbf{x}+\mathbf{b}) \tag{1}\]
for \(\mathbf{x}\in\mathbb{R}^{d},\mathbf{W}\in\mathbb{R}^{l\times d},\mathbf{b}\in\mathbb{R}^{l}\) (_bias_) and an _activation function_ \(f:\mathbb{R}\to\mathbb{R}\) (applied element-wise). The expressiveness of deep NNs, far surpassing standard kernel methods, comes from stacking together several FFLs, each encoding a **non-linear** mapping with **learnable** \(\mathbf{W},\mathbf{b}\).
In this work, we draw a deep connection between scalable kernel methods and neural networks. We reinterpret the FFL as outputting the expected vector of dot-products of: **(1)** the latent embeddings of the input \(\mathbf{x}\) and **(2)** the parameters: \(\mathbf{W}\), b of the FFL, effectively disentangling input from model's parameters in the computational graph, only to connect them in the final computation via the dot-product kernel. To be more specific, we think about the FFL as the following transformation:
\[\begin{cases}\overline{\mathrm{K}}_{f}(\mathbf{x},(\mathbf{W},\mathbf{b})) \stackrel{{\mathrm{def}}}{{=}}\left(\mathrm{K}_{f}(\mathbf{x},( \mathbf{w}^{0},b_{0})),...,\mathrm{K}_{f}(\mathbf{x},(\mathbf{w}^{l-1},b_{l- 1}))\right)^{\top},\\ \mathrm{K}_{f}(\mathbf{x},(\mathbf{w},b))\stackrel{{\mathrm{def} }}{{=}}\mathbb{E}[\Phi_{f}(\mathbf{x})^{\top}\Psi_{f}(\mathbf{w},b)],\end{cases} \tag{2}\]
where mappings: \(\Phi_{f}:\mathbb{R}^{d}\to\mathbb{R}^{m},\Psi_{f}:\mathbb{R}^{d}\times\mathbb{R}\to\mathbb{R}^{m}\) satisfy: \(f(\mathbf{w}^{\top}\mathbf{x}+b)=\mathbb{E}[\Phi_{f}(\mathbf{x})^{\top}\Psi_{f}(\mathbf{w},b)]\) and \(\mathbf{w}^{0},...\mathbf{w}^{l-1}\) are the transposed rows of \(\mathbf{W}\). Then, in the instantiation of the layer, the expectations are dropped. Rewriting an FFL in terms of two towers, one corresponding to the input and one to its learnable parameters, has several advantages:
1. **network compression:** in the above formulation, instead of transforming layer parameters with \(\Psi_{f}\), one can directly learn vectors \(\Psi_{f}(\mathbf{w}^{i},b_{i})\) for \(i=0,...,l-1\). Then the number of trainable parameters becomes \(O(lm)\) rather than \(O(ld)\) and for \(m\ll d\) the layer effectively has a reduced number of parameters.
2. **computational savings:** if RFs can be constructed in time \(o(dl)\) per point and \(m\ll d\), the overall time complexity \(o(dl)\) of the FFL (given pre-computed embeddings \(\Psi_{f}(\mathbf{w}^{i},b_{i})\)) is **sub-quadratic** in layers' dimensionalities,
3. **deep NN bundling process:** a two-tower representation can be used iteratively to compactify multiple FFLs of NNs, the process we refer to as _neural network bundling_ (Sec. 3.3); this also leads to the computational gains.
4. **deep NNs as scalable kernels:** the extreme version of the bundling procedure, involving all the layers, provides a two-tower factorization of the entire deep NN with several potential practical and theoretical implications (Sec. 3.3). In particular, it leads to an explicit formula for the optimal parameters of the fully-bundled network under several loss objectives (e.g. mean squared loss), opening a possibility to bypass backpropagation.
Figure 1: Pictorial representation of different NN layers discussed in the paper. Pink arrays represent NN weight matrices and grey ones, Gaussian projection matrices applied in SNNKs. Nonlinear transformations applied in mappings \(\Phi\) and \(\Psi\) are symbolically represented as functions \(g\) and \(h\) respectively. **Upper left:** Regular FFL with activation \(f\). **Upper right:** SNNK applied to a single FFL. **Bottom:** Bundling process using SNNKs, applied to a deep neural network module.
In order to find mappings: \(\Phi_{f}\), \(\Psi_{f}\) from Eq. 2, we develop a new bounded random feature map mechanism, called _universal random features_ (or URFs) that leads to the unbiased estimation of \(f(\mathbf{w}^{\top}\mathbf{x}+b)\) as long as \(f\) has a well-defined Fourier Transform (FT), either in the classical Riemannian or distributional sense. To derive URFs, we combine Fourier analysis techniques with recent methods for softmax-kernel estimation from Likhosherstov et al. (2022).
**Note:** We do not put any additional assumptions regarding \(f\), in particular \(f\) is not required to be differentiable. Furthermore, function \(\mathrm{K}_{f}\)**does not need** to be positive semi-definite. This is critical for applications in neural networks, where the activation function \(f\) usually does not correspond to a positive semi-definite kernel.
To summarize, our main contributions in this paper are as follows:
* We introduce the _scalable neural network kernel_ module (SNNK) as a replacement of a traditional FFL (Sec. 3), providing the disentanglement of the network's input and its parameter-set before final dot-product computation, as given in Eq. 2 (see also: Fig. 1).
* We accompany SNNKs with our universal random features mechanism (URFs) to efficiently: (1) construct mappings \(\Phi_{f}\) and \(\Psi_{f}\) from Eq. 2 and consequently: (2) implement SNNKs (Sec. 3.1). We provide explicit formulae for URFs for trigonometric maps. Those produce SNNK-based replacements of the SIREN networks from Sitzmann et al. (2020).
* We propose new NN-layers corresponding to the specific SNNK instantiation, called \(\mathrm{ReLU}\)-\(\mathrm{SNNK}\) (Sec. 3.2), that we found particularly effective in downstream applications (see: Sec. 4.3.2). We show that they are related to the class of the _arc-cosine kernels_ Cho & Saul (2011). We also demonstrate using them that SNNKs are **strictly more expressive** than regular FFLs, as they allow computing functions of the inputs and parameters that cannot be defined as point-wise transformed vectors of their dot-products.
* We introduce the neural network compactification process, that we refer to as _neural network bundling_, leveraging SNNKs (see: Sec. 3.3 and Fig. 1).
* We provide an exhaustive empirical evaluation of SNNKs, from point-wise kernel estimation to the adapter-based Transformers' fine-tuning, providing about 5x reduction of the number of trainable parameters (Sec. 4).
## 2 Related Work
The literature on random features is vast, yet most of the works focus on approximating positive definite kernels. The results on dimensionality reduction and the so-called _Johnson-Lindenstrauss Transform_ (or JLT) (Dasgupta & Gupta, 2003; Dasgupta et al., 2010; Ailon & Liberty, 2013) for the dot-product kernel marked the birth of the subject, serving as an archetype mechanism that Rahimi & Recht (2007) extended from linear to non-linear shift-invariant kernels. A substantial effort was made to further improve the accuracy of RF-methods by entangling the projections used to construct RFs (Choromanski et al., 2017; Yu et al., 2016; Choromanski et al., 2018; Rowland et al., 2018).
For certain classes of functions \(f\), RF-mechanisms leading to the linearization of \(\mathrm{K}_{f}\) have already been developed. In addition to the rich recent literature on the approximation techniques for the softmax-kernel \(\mathrm{K}_{\mathrm{exp}}(\mathbf{x},\mathbf{y})=\exp(\mathbf{x}^{\top}\mathbf{y})\) (Likhosherstov et al., 2022; 2023; Choromanski et al., 2021), algorithms for analytic \(f\) with positive coefficients of their Taylor series expansion were given (Kar & Karnick, 2012). Other RF-methods assume that kernel inputs are taken from the unit-sphere (Scetbon & Harchaoui, 2021; Han et al., 2022). Both assumptions are unrealistic for the neural network applications as far as inputs \(\mathbf{x}\) are concerned (interestingly, the latter would however be more justifiable for the parameter-tower as long as bounded-norm weight matrices are considered, e.g. _orthogonal neural networks_ (Helfrich et al., 2018)). We would like to emphasize that our two-tower mechanism, effectively leading to the linearization of the FFLs from Eq. 2, can in principle work with various RF-algorithms, and not only our proposed URFs.
The kernels applied in connection to neural networks have been widely studied (Bengio & Lecun, 2007). Such kernels are generally constructed using dot-products of outputs of the shallow neural networks with various non-linearities like ReLU (Cho & Saul, 2009; Bresler & Nagaraj, 2020) and tanh (Williams, 1996) or the gradients of the network like the NTK kernel (Jacot et al., 2020). Most
of the work in linearizing NNs via kernels has been done in the case of a \(2\)-layer network where
\[J(\mathbf{x};\mathbf{\theta})=\sum_{i=1}^{N}a_{i}f(\mathbf{x}^{\top}\mathbf{w}^{i}), \ \mathbf{\theta}=(a_{1},...,a_{N};\mathbf{w}^{1},...,\mathbf{w}^{N})\in\mathbb{R}^{N(d+1)} \tag{3}\]
It is assumed that the \(\mathbf{w}^{i}\) and the non-linear activation \(f\) are fixed, while the scalars \(a_{i}\) are trainable. Under various assumptions, one can write a compact linearized form of this neural network (Cho & Saul, 2009; 2011; Ghorbani et al., 2020). Moreover, in the above setting, \(J(\mathbf{x};\mathbf{\theta})\) corresponds to the first-order Taylor expansion of \(J\) with respect to the top-layer weights \(a_{i}\), which was first explored by Neal (1996). Even though our setting is fundamentally different, as our goal is to linearize single layers to disentangle the weights and the inputs, we build on the above intuition to create our SNNK-layers (see also: discussion in Appendix A). Note that NTK-based analysis, as it leverages a Taylor-based linearization of the NN, is valid only for the mature stage of training/finetuning, when weights do not change much and thus such a linearization is accurate (Malladi et al., 2022). SNNKs do not rely on this assumption. Furthermore, SNNKs can also be used in the context of non-positive definite (Ong et al., 2004) and asymmetric (He et al., 2023) kernels, since the mappings \(\Phi\) and \(\Psi\) are in principle different (on expectation they can produce both symmetric and asymmetric functions).
Arc-cosine kernels were studied in the context of deep NNs before (Cho & Saul, 2009). However, in (Cho & Saul, 2009), the weights are still entangled with the FFL-input, as the initial latent representations of the inputs (for random parameters) are interpreted as RFs for the arc-cosine kernel.
## 3 Scalable Neural Network Kernels (SNNKs)
The scalable neural network kernel (SNNK) computational module is defined as follows:
\[\begin{cases}\overline{\mathrm{SNNK}}_{f}(\mathbf{x},(\mathbf{W},\mathbf{b}) )\stackrel{{\mathrm{def}}}{{=}}\left(\mathrm{SNNK}_{f}(\mathbf{x },(\mathbf{w}^{0},b_{0})),...,\mathrm{SNNK}_{f}(\mathbf{x},(\mathbf{w}^{l-1}, b_{l-1}))\right)^{\top},\\ \mathrm{SNNK}_{f}(\mathbf{x},(\mathbf{w},b))\stackrel{{\mathrm{ def}}}{{=}}\Phi_{f}(\mathbf{x})^{\top}\Psi_{f}(\mathbf{w},b),\end{cases} \tag{4}\]
for \(\mathbf{x}\in\mathbb{R}^{d}\), some mappings: \(\Phi_{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\), \(\Psi_{f}:\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}^{m}\) and transposed rows of \(\mathbf{W}\in\mathbb{R}^{l\times d}\): \(\mathbf{w}^{0},...\mathbf{w}^{l-1}\). As we show in Sec. 3.1, functions \(\Phi_{f},\Psi_{f}\) can be constructed in such a way that the SNNK module approximates a particular FFL, i.e.: \(\overline{\mathrm{SNNK}}_{f}(\mathbf{x},(\mathbf{W},\mathbf{b}))\approx f(\mathbf{W}\mathbf{x}+\mathbf{b})\), but mechanisms that do not imitate known FFLs are also of interest (see: Sec. 3.2).
**Time complexity:** If we denote by \(t_{m}(d)\) the time complexity for constructing an embedding \(\Phi_{f}(\mathbf{x})\), then the time complexity for constructing \(\overline{\mathrm{SNNK}}_{f}(\mathbf{x},(\mathbf{W},\mathbf{b}))\) (given the pre-computed \(\Psi_{f}(\mathbf{w}^{i},b_{i})\) for \(i=0,...,l-1\)) is: \(T_{m,l}(d)=ml+t_{m}(d)\). In Sec. 3.1 we show an algorithm for constructing URFs in time \(t_{m}(d)=O(md)\), and thus computational gains are provided as compared to the regular FFL (with time complexity \(O(ld)\)) as long as \(m\ll\min(l,d)\).
**FFL compression:** As already mentioned in Sec. 1, the key observation is that in the setting where the layer is learned (and thus \(\mathbf{w}^{0},...,\mathbf{w}^{l-1}\) are learnable), the mapping \(\Psi_{f}\) **does not even need to be applied**, since the vectors \(\boldsymbol{\omega}^{j}\stackrel{{\mathrm{def}}}{{=}}\Psi_{f}(\mathbf{w}^{j},b_{j})\) for \(j=0,...,l-1\) can be interpreted as unstructured learnable vectors. Thus the number of trainable parameters of the SNNK layer is \(O(ml)\) instead of \(O(dl)\), and consequently the FFL is effectively compressed if \(m\ll d\).
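The following PyTorch sketch illustrates this compressed parameterization (the module name, the initialization scale, and the use of the ReLU map of Sec. 3.2 as \(\Phi\) are our illustrative choices):

```python
import torch
import torch.nn as nn

class SNNKLayer(nn.Module):
    """Drop-in FFL replacement: y = psi^T phi(x), with psi learned directly.
    phi can be any feature map R^d -> R^m (e.g. URFs, or the ReLU map below)."""
    def __init__(self, phi, m, l):
        super().__init__()
        self.phi = phi
        # learned in place of Psi_f(w^i, b_i): O(ml) parameters instead of O(dl)
        self.psi = nn.Parameter(torch.randn(m, l) / m ** 0.5)

    def forward(self, x):                  # x: (batch, d)
        return self.phi(x) @ self.psi      # (batch, m) @ (m, l) -> (batch, l)

# e.g. replacing a 768 -> 3072 FFL (d*l ~ 2.36M parameters) with m = 128
# random features leaves m*l ~ 0.39M trainable parameters
d, l, m = 768, 3072, 128
G = torch.randn(m, d)                              # frozen Gaussian projections
phi = lambda x: torch.relu(x @ G.t() / m ** 0.5)   # input tower
layer = SNNKLayer(phi, m, l)
y = layer(torch.randn(4, d))                       # (4, 3072)
```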
### Universal Random Features (URFs)
In this section, we show how to construct embeddings \(\Phi_{f}(\mathbf{x})\) and \(\Psi_{f}(\mathbf{w},b)\). We denote by \(\mathrm{FT}_{f}\) the _Fourier Transform_ of \(f\), where \(i\in\mathbb{C}\) satisfies: \(i^{2}=-1\):
\[\mathrm{FT}_{f}(\xi)=\int_{\mathbb{R}}f(z)\exp(-2\pi i\xi z)dz \tag{5}\]
If the integral does not exist in the classical Riemannian sense, we use its distributional interpretation. We rewrite \(\mathrm{FT}_{f}\) as: \(\mathrm{FT}_{f}=\mathrm{FT}_{f}^{\mathrm{re},+}-\mathrm{FT}_{f}^{\mathrm{re},- }+i\mathrm{FT}_{f}^{\mathrm{im},+}-i\mathrm{FT}_{f}^{\mathrm{im},-}\), where: \(\mathrm{FT}_{f}^{\mathrm{re},+},\mathrm{FT}_{f}^{\mathrm{re},-},\mathrm{FT}_{ f}^{\mathrm{im},+},\mathrm{FT}_{f}^{\mathrm{im},-}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\). Without loss of generality, we will assume that all four functions are not identically zero.
Let us denote by \(\overline{\mathcal{P}}_{0},\overline{\mathcal{P}}_{1},\overline{\mathcal{P}}_{2},\overline{\mathcal{P}}_{3}\) some probability distributions on \(\mathbb{R}\) (e.g. Gaussian) and by \(\overline{p}_{0},\overline{p}_{1},\overline{p}_{2},\overline{p}_{3}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\) their corresponding density functions. Furthermore, denote by
\(\mathcal{P}_{0},\mathcal{P}_{1},\mathcal{P}_{2},\mathcal{P}_{3}\) probabilistic distributions of densities: \(p_{0},p_{1},p_{2},p_{3}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\) proportional to: \(\mathrm{FT}_{f}^{\mathrm{re,+}},\mathrm{FT}_{f}^{\mathrm{re,-}},\mathrm{FT}_{f}^ {\mathrm{im,+}},\mathrm{FT}_{f}^{\mathrm{im,-}}\) respectively. We can then write:
\[\begin{split} f(z)=\int_{\mathbb{R}}\mathrm{FT}_{f}(\xi)\exp(2 \pi i\xi z)d\xi&=\sum_{j=0}^{3}c_{j}\int_{\mathbb{R}}\frac{p_{j}( \xi)}{\bar{p}_{j}(\xi)}\exp(2\pi i\xi z)\overline{p}_{j}(\xi)d\xi\\ &=\sum_{j=0}^{3}c_{j}\mathbb{E}_{\xi\sim\overline{p}_{j}}\left[ \frac{p_{j}(\xi)}{\bar{p}_{j}(\xi)}\exp(2\pi i\xi z)\right],\end{split} \tag{6}\]
where: \(c_{0}=\int_{\mathbb{R}}\mathrm{FT}_{f}^{\mathrm{re,+}}(\tau)d\tau,c_{1}=-\int_ {\mathbb{R}}\mathrm{FT}_{f}^{\mathrm{re,-}}(\tau)d\tau,c_{2}=i\int_{\mathbb{R }}\mathrm{FT}_{f}^{\mathrm{im,+}}(\tau)d\tau\), and furthermore \(c_{3}=-i\int_{\mathbb{R}}\mathrm{FT}_{f}^{\mathrm{im,-}}(\tau)d\tau\). For \(\mathbf{x},\mathbf{w}\in\mathbb{R}^{d},b\in\mathbb{R}\), let us denote:
\[\widehat{f}_{j}(\mathbf{x},\mathbf{w},b)=c_{j}\mathbb{E}_{\xi\sim\overline{p} _{j}}\left[\frac{p_{j}(\xi)}{\bar{p}_{j}(\xi)}\exp\left(2\pi i\xi(\mathbf{x}^{ \top}\mathbf{w}+b)\right)\right]=c_{j}\mathbb{E}_{\xi\sim\overline{\mathcal{P} _{j}}}[S_{j}(\xi,b)\exp(\widehat{\mathbf{x}}^{\top}(\xi)\widehat{\mathbf{w}}( \xi))] \tag{7}\]
for \(S_{j}(\xi,b)=\frac{p_{j}(\xi)}{\bar{p}_{j}(\xi)}\exp(2\pi i\xi b)\), \(\widehat{\mathbf{x}}(\xi)=\rho(\xi)\mathbf{x}\), \(\widehat{\mathbf{w}}(\xi)=\eta(\xi)\mathbf{w}\), where \(\rho(\xi),\eta(\xi)\in\mathbb{C}\) satisfy: \(\rho(\xi)\eta(\xi)=2\pi i\xi\). Inside the expectation in Eq. 7, we recognize the softmax-kernel value \(\mathrm{K}_{\exp}(\widehat{\mathbf{x}}(\xi),\widehat{\mathbf{w}}(\xi))=\exp(\widehat{\mathbf{x}}^{\top}(\xi)\widehat{\mathbf{w}}(\xi))\). We thus disentangle \(\widehat{\mathbf{x}}(\xi)\) from \(\widehat{\mathbf{w}}(\xi)\) there by applying the softmax-kernel linearization mechanism from Likhosherstov et al. (2022): \(\exp(\widehat{\mathbf{x}}^{\top}(\xi)\widehat{\mathbf{w}}(\xi))=\mathbb{E}_{\mathbf{g}\sim\mathcal{N}(0,\mathbf{I}_{d})}[\Lambda_{\mathbf{g}}(\widehat{\mathbf{x}})\Lambda_{\mathbf{g}}(\widehat{\mathbf{w}})]\), where \(\Lambda_{\mathbf{g}}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is defined as follows for \(A\leq 0\):
\[\Lambda_{\mathbf{g}}(\mathbf{z})=(1-4A)^{\frac{d}{4}}\exp(A\|\mathbf{g}\|_{2}^ {2}+\sqrt{1-4A}\mathbf{g}^{\top}\mathbf{z}-\frac{\|\mathbf{z}\|_{2}^{2}}{2}) \tag{8}\]
Thus \(\widehat{f}_{j}(\mathbf{x},\mathbf{w},b)=\mathbb{E}_{(\xi,\mathbf{g})\sim \overline{\mathcal{P}}_{j}\otimes\mathcal{N}(0,\mathbf{I}_{d})}[\Gamma_{ \mathbf{g},\xi}^{1}(\mathbf{x})\Gamma_{\mathbf{g},\xi}^{2}(\mathbf{w},b)]\) for \(\Gamma_{\mathbf{g},\xi}^{1}(\mathbf{x}),\Gamma_{\mathbf{g},\xi}^{2}(\mathbf{ w},b)\) given as:
\[\Gamma_{\mathbf{g},\xi}^{1}(\mathbf{x})=\Lambda_{\mathbf{g}}(\rho(\xi)\mathbf{ x}),\ \Gamma_{\mathbf{g},\xi}^{2}(\mathbf{w},b)=c_{j}S_{j}(\xi,b)\Lambda_{\mathbf{g}}(\eta( \xi)\mathbf{w}) \tag{9}\]
That observation directly leads to the RF mechanism for the estimation of \(\widehat{f}_{j}(\mathbf{x},\mathbf{w},b)\). We can rewrite: \(\widehat{f}_{j}(\mathbf{x},\mathbf{w},b)=\mathbb{E}[\Phi^{j}(\mathbf{x})^{ \top}\Psi^{j}(\mathbf{w},b)]\) for \((\xi_{1},\mathbf{g}_{1}),...,(\xi_{m},\mathbf{g}_{m})\sim\overline{\mathcal{P} }_{j}\otimes\mathcal{N}(0,\mathbf{I}_{d})\) and:
\[\begin{split}\Phi^{j}(\mathbf{x})=\frac{1}{\sqrt{m}}(\Gamma_{ \mathbf{g}_{1},\xi_{1}}^{1}(\mathbf{x}),...,\Gamma_{\mathbf{g}_{m},\xi_{m}}^{1 }(\mathbf{x}))^{\top},\\ \Psi^{j}(\mathbf{w},b)=\frac{1}{\sqrt{m}}(\Gamma_{\mathbf{g}_{1}, \xi_{1}}^{2}(\mathbf{w},b),...,\Gamma_{\mathbf{g}_{m},\xi_{m}}^{2}(\mathbf{w},b ))^{\top}\end{split} \tag{10}\]
Several strategies can be used to construct samples \((\xi_{1},\mathbf{g}_{1}),...,(\xi_{m},\mathbf{g}_{m})\), e.g. iid sampling or block-iid sampling with a fixed \(\xi\) used within a block, but constructed independently for different blocks. In the experiments, we also choose: \(\rho(\xi)=2\pi i\xi\) and \(\eta(\xi)\)=1.
The case of discrete \(\overline{\mathcal{P}}_{j}\) with a finite number of atoms: Assume that \((\xi^{1},...,\xi^{K})\) is a sequence of atoms with corresponding positive probabilities \((p_{1},...,p_{K})\). Then one can also construct \(K\) pairs of RF-vectors \((\Phi^{j}(\mathbf{x};k),\Psi^{j}(\mathbf{w},b;k))_{k=1}^{K}\), each obtained by replacing \(\overline{\mathcal{P}}_{j}\) with the distribution concentrated on the single atom \(\xi^{k}\), and obtain \(\Phi^{j}(\mathbf{x}),\Psi^{j}(\mathbf{w},b)\) by concatenating the vectors from \((\Phi^{j}(\mathbf{x};k))_{k=1}^{K}\) and \((\Psi^{j}(\mathbf{w},b;k))_{k=1}^{K}\) respectively. This strategy is effective if \(K\) is small.
Note that: \(f(\mathbf{x}^{\top}\mathbf{w}+b)=\sum_{j=0}^{3}\widehat{f}_{j}(\mathbf{x}, \mathbf{w},b)\) and thus \(\Phi_{f}(\mathbf{x})\) and \(\Psi_{f}(\mathbf{w},b)\) can be defined as:
\[\Phi_{f}(\mathbf{x})=\mathrm{concat}\left(\left(\Phi^{j}(\mathbf{x})\right)_{j=0 }^{3}\right),\ \Psi_{f}(\mathbf{w},b)=\mathrm{concat}\left(\left(\Psi^{j}(\mathbf{w},b) \right)_{j=0}^{3}\right) \tag{11}\]
for the vector concatenation operation \(\mathrm{concat}\), completing the description of the URF mechanism.
**Remark 3.1** (boundedness): _We observe that for upper-bounded \(\|\mathbf{x}\|_{2},\|\mathbf{w}\|_{2},|b|\), the entries of \(\Phi_{f}(\mathbf{x})\) and \(\Psi_{f}(\mathbf{w},b)\) are also upper-bounded as long as \(A<0\). This follows directly from the formula for \(\Lambda_{\mathbf{g}}(\mathbf{z})\) in Eq. 8._
Trigonometric activation functions: Let us assume now that \(f(z)=\sin(z)\) or \(f(z)=\cos(z)\). Note that even though neither of them has a Fourier Transform in the classical Riemannian sense, both have trivial Fourier Transforms in the broader distributional sense. To see that, we can rewrite both activations as: \(\sin(z)=\frac{\exp(iz)-\exp(-iz)}{2i}\) and \(\cos(z)=\frac{\exp(iz)+\exp(-iz)}{2}\). Therefore the corresponding distributions used in the URF derivations above become binary distributions over \(\{-\frac{1}{2\pi},\frac{1}{2\pi}\}\). This observation has interesting practical consequences, since it leads to a conceptually simple linearization of the FFLs applied in SIREN networks (see: Sec. 4.2).
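For illustration, the sine case can be implemented in a few lines of numpy. The sketch below is our own compact variant: since \(\sin(z)=\operatorname{Im}(e^{iz})\) for real \(z\), a single complex feature pair replaces the four-part concatenation of Eq. 11; we take \(A=0\) in Eq. 8 and, following the choices above, \(\rho(\xi)=2\pi i\xi\) and \(\eta(\xi)=1\) with the atom \(\xi=\frac{1}{2\pi}\):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 64, 4096                        # input dim, number of random features

def Lam(G, z):
    """Lambda_g(z) from Eq. 8 with A = 0: exp(g^T z - (z^T z)/2), extended
    analytically to complex z (note: z^T z without conjugation)."""
    return np.exp(G @ z - (z @ z) / 2.0)

def phi_sin(x, G):                     # input tower, rho(xi) = i
    return Lam(G, 1j * x) / np.sqrt(m)

def psi_sin(w, b, G):                  # parameter tower, eta(xi) = 1
    return np.exp(1j * b) * Lam(G, w + 0j) / np.sqrt(m)

G = rng.standard_normal((m, d))
x = rng.uniform(0, 1, d) / np.sqrt(d)
w = rng.uniform(0, 1, d) / np.sqrt(d)
b = 0.5

est = np.imag(phi_sin(x, G) @ psi_sin(w, b, G))   # sin(z) = Im(e^{iz})
print(est, np.sin(x @ w + b))          # the two values should be close
```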
### Beyond regular FFLs: the curious case of the ReLU-SNNK layer
We also propose another SNNK layer which is not directly inspired by any known FFL, but turns out to work very well in practice (see: Sec. 4.3.2). In this case, the mappings \(\Phi\) and \(\Psi\) are defined as: \(\Phi(\mathbf{x})=\mathrm{ReLU}(\frac{1}{\sqrt{l}}\mathbf{G}\mathbf{x})\), \(\Psi(\mathbf{w},b)=\mathrm{ReLU}(\frac{1}{\sqrt{l}}\mathbf{G}\mathbf{w})\) for the Gaussian matrix \(\mathbf{G}\in\mathbb{R}^{l\times d}\) with entries sampled independently at random from \(\mathcal{N}(0,1)\). One can ask what kernel this pair of maps corresponds to. It turns out that the answer is particularly elegant.
**Theorem 3.2** (arc-cosine kernels; Cho & Saul (2011)): _The \(n\)th-order arc-cosine kernel \(\mathrm{K}_{n}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is defined as: \(\mathrm{K}_{n}(\mathbf{x},\mathbf{y})=\frac{1}{\pi}\|\mathbf{x}\|_{2}^{n}\|\mathbf{y}\|_{2}^{n}J_{n}(\alpha_{\mathbf{x},\mathbf{y}})\), where \(\alpha_{\mathbf{x},\mathbf{y}}\in[0,\pi]\) stands for the angle between \(\mathbf{x}\) and \(\mathbf{y}\) and \(J_{n}(\theta)\stackrel{{\mathrm{def}}}{{=}}(-1)^{n}(\sin\theta)^{2n+1}\left(\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\right)^{n}\left(\frac{\pi-\theta}{\sin\theta}\right)\). Then, \(\mathrm{K}_{n}\) can be linearized as: \(\mathrm{K}_{n}(\mathbf{x},\mathbf{y})=2\mathbb{E}[\Gamma_{n}(\mathbf{x})^{\top}\Gamma_{n}(\mathbf{y})]\) for \(\Gamma_{n}(\mathbf{v})\stackrel{{\mathrm{def}}}{{=}}(\mathrm{ReLU}(\mathbf{v}^{\top}\boldsymbol{\omega}))^{n}\) and \(\boldsymbol{\omega}\sim\mathcal{N}(0,\mathbf{I}_{d})\)._
We conclude that our proposed \(\mathrm{ReLU}\)-\(\mathrm{SNNK}\) layer is a scalable version of the FFL defined as: \((\mathbf{x},\mathbf{W},\mathbf{b})\rightarrow(\frac{1}{2}\mathrm{K}_{1}(\mathbf{w}^{1},\mathbf{x}),...,\frac{1}{2}\mathrm{K}_{1}(\mathbf{w}^{l},\mathbf{x}))^{\top}\) for \(\mathbf{x}\in\mathbb{R}^{d}\) and \(\mathbf{W}\in\mathbb{R}^{l\times d}\).
**Remark 3.3**: _The \(\mathrm{ReLU}\)-\(\mathrm{SNNK}\) layer is not a regular FFL since the values of its output dimensions cannot be re-written as \(f(\mathbf{x}^{\top}\mathbf{w}^{i}+b_{i})\) for some \(f:\mathbb{R}\rightarrow\mathbb{R}\) (interestingly, after \(\Gamma\)-based pre-processing, it can still be interpreted as a dot-product kernel). This shows that the SNNK mechanism is capable of modeling relationships beyond those of regular FFLs._
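A quick numerical sanity check of Theorem 3.2 for \(n=1\) (a sketch; the closed form \(J_{1}(\theta)=\sin\theta+(\pi-\theta)\cos\theta\) follows from plugging \(n=1\) into the theorem's formula):

```python
import numpy as np

rng = np.random.default_rng(0)
d, l = 32, 8192                         # input dim, number of random features
G = rng.standard_normal((l, d))

phi = lambda v: np.maximum(G @ v, 0.0) / np.sqrt(l)   # both towers share it

x, y = rng.standard_normal(d), rng.standard_normal(d)
est = phi(x) @ phi(y)                   # Monte Carlo estimate of K_1 / 2

a = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
k1 = (np.linalg.norm(x) * np.linalg.norm(y) / np.pi
      * (np.sin(a) + (np.pi - a) * np.cos(a)))        # exact K_1(x, y)
print(est, k1 / 2.0)                    # the two values should be close
```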
### Bundling neural networks with SNNKs
We are ready to propose the neural network _bundling process_, relying on the SNNK primitives. Consider the following deep NN module with input \(\mathbf{x}=\mathbf{x}_{0}\in\mathbb{R}^{d_{0}}\) and output \(\mathbf{y}=\mathbf{x}_{L}\in\mathbb{R}^{d_{L}}\):
\[\begin{cases}\mathbf{x}_{i+1}=f_{i+1}(\mathbf{W}_{i}\mathbf{x}_{i}+\mathbf{b} _{i})\text{; }i=0,...,L-1,\\ \mathbf{x}_{0}=\mathbf{x}\end{cases} \tag{12}\]
for: (1) matrices \(\mathbf{W}_{i}\in\mathbb{R}^{d_{i+1}\times d_{i}}\), (2) bias vectors: \(\mathbf{b}_{i}\in\mathbb{R}^{d_{i+1}}\), and (3) activations: \(f_{i}:\mathbb{R}\rightarrow\mathbb{R}\).
To understand how the bundling process works, we start by replacing first FFL in Eq. 12 with its SNNK analogue. We obtain the following computational block:
\[\begin{cases}\widehat{\mathbf{x}}_{i+1}=\widehat{f}_{i+1}(\widehat{\mathbf{W} }_{i}\widehat{\mathbf{x}}_{i}+\widehat{\mathbf{b}}_{i})\text{ for }i=0,...,L-2,\\ \widehat{\mathbf{x}}_{0}=\Phi_{f_{1}}(\mathbf{x}_{0}),\\ \widehat{\mathbf{W}}_{0}=\mathbf{W}_{1}\Psi_{f_{1}}(\mathbf{W}_{0},\mathbf{b }_{0})\text{; }\widehat{\mathbf{W}}_{i}=\mathbf{W}_{i+1}\text{ for }i=1,...,L-2,\\ \widehat{f}_{i+1}=f_{i+2},\ \widehat{\mathbf{b}}_{i}=b_{i+1}\text{ for }i=0,...,L-2\end{cases} \tag{13}\]
In the system of equations above, \(\Psi_{f}(\mathbf{W}_{0},\mathbf{b}_{0})\) is a matrix with transposed rows of the form: \(\Psi_{f}(\mathbf{W}_{0}^{j},\mathbf{b}_{0}^{j})\), where \(\mathbf{W}_{0}^{j}\) for \(j=0,...,d_{1}-1\) are the transposed rows of \(\mathbf{W}_{0}\) and \(\mathbf{b}_{0}=(\mathbf{b}_{0}^{0},...,\mathbf{b}_{0}^{d_{1}-1})^{\top}\). We have thus successfully replaced a module of \(L\) feedforward layers with a module of \((L-1)\) feedforward layers. By continuing this procedure, we can ultimately get rid of the FFLs and obtain an estimator \(\overline{\mathbf{y}}\) of \(\mathbf{y}\), given as: \(\overline{\mathbf{y}}=\overline{\mathbf{W}}\,\overline{\mathbf{x}}\), where
\[\begin{cases}\overline{\mathbf{x}}=\Phi_{f_{L}}\left(\Phi_{f_{L-1}}(...\Phi_{f_{1}}(\mathbf{x}_{0})...)\right)\\ \overline{\mathbf{W}}=\Psi_{f_{L}}\left(\mathbf{W}_{L-1}\Psi_{f_{L-1}}(...\mathbf{W}_{2}\Psi_{f_{2}}(\mathbf{W}_{1}\Psi_{f_{1}}(\mathbf{W}_{0},\mathbf{b}_{0}),\mathbf{b}_{1})...),\mathbf{b}_{L-1}\right)\in\mathbb{R}^{d_{L}\times m}\end{cases} \tag{14}\]
This has several important consequences. In inference, replacing matrices \(\mathbf{W}_{0},...,\mathbf{W}_{L-1}\) with one matrix \(\overline{\mathbf{W}}\) is effectively a compression scheme (that does not need to be applied to all the layers, but can target a particular consecutive set of layers of interest). If we apply the bundling process to the entire deep neural network, we effectively obtain its two-tower factorization with the input disentangled from the parameters. In training, we can treat \(\overline{\mathbf{W}}\) as an unstructured parameter matrix and directly learn it (see results in Appendix H.3, Table 5). Since the output \(\overline{\mathbf{y}}\) is now modeled as an action of the unstructured learnable matrix \(\overline{\mathbf{W}}\) on the _pre-processed_ input \(\overline{\mathbf{x}}\), for several loss functions there exists an explicit formula for the optimal \(\overline{\mathbf{W}}\). This is the case in particular for the standard regression loss (see discussion in Appendix H.3). If bundling is applied to a particular module, backpropagation through it is not necessary since there exists an explicit formula for the corresponding Jacobian.
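The sketch below traces the shape bookkeeping of Eqs. 13 and 14 for a three-layer module. The SNNK pair used here (the ReLU maps of Sec. 3.2 applied to bias-augmented vectors) is a placeholder of our choosing, so the snippet illustrates the mechanics of bundling rather than a faithful approximation of any particular activation:

```python
import torch

torch.manual_seed(0)
m = 256                                    # random-feature dimension

def pad1(X):
    """Append a constant-1 coordinate (folds biases into the weight rows)."""
    return torch.cat([X, torch.ones(*X.shape[:-1], 1)], dim=-1)

def make_pair(d_in):
    """A placeholder SNNK pair on R^{d_in}."""
    G = torch.randn(m, d_in + 1)
    Phi = lambda X: torch.relu(pad1(X) @ G.t()) / m ** 0.5
    Psi = lambda W, b: torch.relu(torch.cat([W, b.unsqueeze(-1)], dim=-1)
                                  @ G.t()) / m ** 0.5
    return Phi, Psi

d0, d1, d2, d3 = 64, 32, 32, 8
W0, b0 = torch.randn(d1, d0), torch.randn(d1)
W1, b1 = torch.randn(d2, d1), torch.randn(d2)
W2, b2 = torch.randn(d3, d2), torch.randn(d3)

Phi1, Psi1 = make_pair(d0)                 # linearizes the first layer
Phi2, Psi2 = make_pair(m)                  # subsequent pairs act on R^m
Phi3, Psi3 = make_pair(m)

x = torch.randn(d0)
x_bar = Phi3(Phi2(Phi1(x)))                # input tower of Eq. 14, shape (m,)
W_bar = Psi3(W2 @ Psi2(W1 @ Psi1(W0, b0), b1), b2)   # parameter tower, (d3, m)
y_bar = W_bar @ x_bar                      # bundled output, shape (d3,)
```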
## 4 Experiments
We present an extensive empirical evaluation of SNNKs on a wide range of experiments. More details on each experiment can be found in Appendix E.
### Pointwise Kernel Estimation
As a warm-up, we test the accuracy of the applied RF-mechanisms on synthetic data. We take \(d=2000\) and \(l=1\). We consider: **(a)** a SIREN-FFL with the activation function \(f(u)=\sin(u)\) and bias \(b=0.5\), **(b)** an arc-cosine-FFL from Sec. 3.2. The entries of the weight vectors \(\mathbf{w}\) and the inputs to the layers are taken independently from \(\frac{1}{\sqrt{d}}\text{Unif}(0,1)\). We report the mean relative error of the NN output made by the RF-based estimator (averaging over \(s=500\) instantiations of the RF-mechanism), as well as the empirical standard deviation, as a function of the number of random projections. This setup corresponds to quantifying the accuracy of the kernel estimator pointwise. The results are presented in Fig. 2 (g and h). Our SNNK provides an accurate approximation with a number of random projections much smaller than the dimensionality \(d\) of the input vectors.
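The experiment amounts to Monte-Carlo kernel estimation. The sketch below reproduces the setup for a kernel with a known closed form, the classical order-1 arc-cosine kernel \(k(\mathbf{x},\mathbf{w})=\frac{1}{2\pi}\|\mathbf{x}\|\|\mathbf{w}\|(\sin\theta+(\pi-\theta)\cos\theta)\), which admits the unbiased estimator \(\frac{1}{p}\sum_{i}\mathrm{ReLU}(\mathbf{g}_{i}^{\top}\mathbf{x})\,\mathrm{ReLU}(\mathbf{g}_{i}^{\top}\mathbf{w})\) with \(\mathbf{g}_{i}\sim\mathcal{N}(0,\mathbf{I}_{d})\). This is a stand-in for the paper's URF mechanism, not its exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2000
x = rng.uniform(0.0, 1.0, d) / np.sqrt(d)    # inputs drawn as in the setup above
w = rng.uniform(0.0, 1.0, d) / np.sqrt(d)

# closed-form order-1 arc-cosine kernel: E[relu(g.x) * relu(g.w)], g ~ N(0, I)
theta = np.arccos(x @ w / (np.linalg.norm(x) * np.linalg.norm(w)))
k_exact = np.linalg.norm(x) * np.linalg.norm(w) \
    * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2.0 * np.pi)

for p in [16, 64, 256, 1024]:                 # number of random projections
    G = rng.standard_normal((p, d))
    k_hat = np.mean(np.maximum(G @ x, 0.0) * np.maximum(G @ w, 0.0))
    print(p, abs(k_hat - k_exact) / k_exact)  # relative error shrinks with p
```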
Figure 2: Architecture for **(a)** SNNK layer (see Section A), **(b)** SNNK-Adpt layer, **(c)** image fitting (SIREN), MNIST and UCI experiments, **(d)** SNNK-QPNN model, **(e)** SNNK-inspired Adapter-ViT layer, **(f)** SNNK-inspired Adapter-BERT layer. **(g,h)**: The relative error (obtained by averaging over \(s=500\) instantiations of the RF-mechanism) made by the RF-based estimator on a particular entry of the output of the **(g)** SIREN-FFL and **(h)** arc-cosine-FFL, as a function of the number of random projections \(p\) (see: Sec. 4.1). The maximum \(p\) for (g) is larger than for (h), as (g) in theory produces larger variance per random projection. The corresponding standard deviations are negligible: **(g)** \(5\cdot 10^{-8}\), \(10^{-12}\), \(5\cdot 10^{-8}\), \(10^{-8}\), \(10^{-12}\), \(2.5\cdot 10^{-9}\), \(10^{-12}\), \(5\cdot 10^{-9}\), \(10^{-12}\), \(10^{-12}\), \(10^{-10}\), \(10^{-12}\); **(h)** \(10^{-12}\), \(3\cdot 10^{-8}\), \(3\cdot 10^{-8}\), \(2\cdot 10^{-8}\), \(10^{-12}\), \(5\cdot 10^{-9}\).
### Toy Experiments
SNNKs are versatile and can be used as a drop-in replacement for FFLs in a wide variety of NNs, such as the SIREN network (Sitzmann et al., 2020), QPNN, a physics-inspired neural network (PINN) for solving the Hamiltonian of quantum physical systems (Sehanobish et al., 2021), and a simple multi-layer perceptron (MLP) for classification on MNIST (LeCun and Cortes, 2010). We use the sine activation variant for the first two experiments and the ReLU variant for MNIST. We use \(32\) random features for the solution of the 2-body problem and for MNIST, and \(64\) random features for the image-fitting problem. We match the performance of the baseline NNs on the 2-body and image-fitting problems (see Figure 3) and outperform the baseline on MNIST (Figure 9), while incurring lower training costs. For additional details regarding these experiments, see Appendix E.1.
### Finetuning Experiments
In this subsection, we show how SNNKs can be used for parameter-efficient finetuning. For experiments on text, we use the GLUE benchmark, consisting of 8 different natural language understanding tasks (Wang et al., 2018). For vision tasks, we use CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009). BERT-base (Devlin et al., 2019) is used as a backbone for text experiments and ViT (Kolesnikov et al., 2021) for image experiments. Our code is built on top of the Transformers (Wolf et al., 2020) and adapter-transformers (Pfeiffer et al., 2020) libraries. Detailed comparisons with various baselines can be found in Appendix I and additional experiments can be found in Appendix H.
#### 4.3.1 Linearizing the Pooler Layer in Transformers
For text classification tasks, an SNNK layer can be used as a drop-in replacement for the pooler layer, which is a linear layer with a tanh activation. For this set of experiments, the base model is frozen and only the pooler and the classifier weights are tuned. We get computational gains since the number of random features employed by SNNK is smaller than the hidden size of the Transformer. More details are presented in Appendix E.2.
On the GLUE dev set, our SNNK-linearized pooler models outperform the baselines on 5 out of 8 tasks (Table 1 (top half)). Additional results can be found in Appendix H.
In this setting, the linearized pooler weights can be merged with the classifier weights to create a weight matrix of size (number of random features \(\times\) number of classes), and then one can simply store the newly merged layer instead of separately storing the trained classifier and pooler layers. This dramatically _reduces_ the storage from 18.92 Megabits to only \(\mathbf{0.02}\) Megabits, leading to a compression factor of \(\mathbf{1/1000}\).
Figure 4: Comparison of trainable parameters between various layers/modules and the drop-in replacement SNNK layers. Results on CIFAR-10 and CIFAR-100 for SNNK-Adapter models.
Figure 3: (1) **Left column**: Injecting SNNK into a PINN network to approximate the potential energy of the 2-body system. Top to bottom: ground-truth potential, potential learned by QPNN (Sehanobish et al., 2021), and by QPNN-SNNK. QPNN-SNNK can learn the potential function perfectly even while using fewer trainable parameters than the baseline QPNN. (2) **Rightmost three columns**: SIREN network on the top row, fitting not only the image but also the gradients. SNNK on the bottom row produces an accurate approximation of the above.
More details are presented in Appendix C. Ablation studies on the number of random features for this experimental setting are presented in Appendix G.
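The merging step is a plain composition of two linear maps; a minimal sketch (all names are ours, not the paper's code), assuming the linearized pooler computes \(\mathbf{W}_{\text{snnk}}\,\phi(\mathbf{x})\) for an \(m\)-dimensional random-feature embedding \(\phi(\mathbf{x})\) and the classifier applies \(\mathbf{W}_{\text{cls}}\):

```python
import numpy as np

m, hidden, num_classes = 64, 768, 2
phi_x = np.random.randn(m)                    # placeholder feature embedding
W_snnk = np.random.randn(hidden, m)           # hypothetical trained pooler map
W_cls = np.random.randn(num_classes, hidden)  # hypothetical classifier weights

W_merged = W_cls @ W_snnk                     # (num_classes x m): all that is stored
assert np.allclose(W_merged @ phi_x, W_cls @ (W_snnk @ phi_x))
```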
#### 4.3.2 SNNK-inspired Adapter Layers
Adapters in Transformers were first introduced in (Houlsby et al., 2019) and there has been a lot of work on designing different architectures (Pfeiffer et al., 2020; Karimi Mahabadi et al., 2021; Moosavi et al., 2022) and unifying various paradigms (Moosavi et al., 2022; He et al., 2022). Adapters are bottleneck MLPs which are (generally) added twice to each Transformer layer. In our work, we replace each adapter block by a single SNNK layer (Figure 2 (e) and (f)) using only \(\mathbf{16}\) random features, resulting in a large drop in trainable parameters (see Figure 4). Figure 2 (b) shows the architecture of SNNK-inspired adapter layers. Additional details are presented in Appendix B.
As is customary for adapter experiments, the base model is frozen and only the adapters and classifier are tuned. Table 1 (bottom half) shows our results on using SNNK layers in place of adapters on the GLUE dev set. We outperform the baseline on \(5\) out of \(8\) datasets while employing only \(\mathbf{1/3}\) of the trainable parameters. On MNLI, it is noted in (Houlsby et al., 2019) that using a smaller adapter size causes worse performance and that a performance boost can be achieved by increasing the size of the adapter (\(256\) is used in their case). Similar to this observation, we note that we can improve performance and match the baselines on large datasets (e.g., MNLI, QNLI) as we increase the number of random features (see Figure 5). Our method also produces competitive performance on image datasets such as CIFAR-10 and CIFAR-100 (see Figure 4, right two plots). Detailed comparisons with SOTA parameter-efficient finetuning methods can be found in Table 6 (vision tasks) and in Table 7 (GLUE tasks).
Moreover, we note that our methods are completely orthogonal to techniques such as the gating mechanism of (Mao et al., 2022) or algorithms relying on dropping suitable adapter layers (Moosavi et al., 2022; Rücklé et al., 2021). Thus, our method can easily be combined with them.
### Experiments on UCI datasets
We have conducted experiments with a variety of real-world datasets found in the UCI Machine Learning Repository (UCI MLR). We trained a three-layer MLP model as the baseline (see Appendix Sec. F.5 for details). We varied the output dimension of the middle layer to train MLPs of different sizes.
Figure 5: Ablation over the number of random features for the ReLU-SNNK-adapter experiments on the GLUE dev set. \(AA\) denotes the Adaptable Adapter numbers reported in Moosavi et al. (2022).
[Table 1 body omitted: the extracted tabular data is garbled beyond reliable reconstruction. It reports GLUE dev-set scores on RTE, MRPC, QNLI, QQP, SST-2, MNLI, STS-B, and CoLA for the pooler baseline vs. the SNNK-linearized pooler (top half) and for the adapter baseline vs. SNNK-adapters (bottom half).]
Table 1: SNNK experiments on GLUE benchmarks. MCC score is reported for CoLA, F1 score is reported for MRPC and QQP, Spearman correlation is reported for STSB. Accuracy scores are reported for the other tasks. All results are obtained by averaging over 5 seeds.
For our method, we replace the middle layer with an SNNK (Figure 2 (c)). SNNK matches or outperforms the baseline while using only a fraction of the trainable parameters (Figure 6).
## 5 Conclusion
We present scalable neural network kernels (SNNKs), a novel efficient NN computational model that can be used to replace regular feedforward layers in MLPs, in which inputs and parameters are disentangled and connected only in the final computation via a dot-product kernel. We introduce a general mechanism of universal random features (URFs) to instantiate SNNKs, show that SNNKs are capable of encoding subtle relationships between parameter- and input-vectors beyond functions of their dot-products, and finally explain how they lead to the compactification of the NN stack via the so-called bundling process. We complement our theoretical findings with an exhaustive empirical analysis, from pointwise kernel estimation to training Transformers with adapters.
## Ethics Statement
This paper focuses mostly on the algorithmic properties of techniques that linearize the kernels associated with feedforward layers' computations for computational gains. The experiments with adapter-based finetuning of Transformers are presented to illustrate the main concepts. It should be noted, though, that Transformers should be used cautiously, given their considerable computational footprint (improved, however, when adapters are applied) and the corresponding carbon footprint.
## Reproducibility Statement
Hyperparameters to reproduce each experiment are detailed in Section F. All the code to reproduce the experiments will be provided upon acceptance of this preprint.
## Author Contributions
AS and KC led the project. AS ran several empirical studies on the GLUE, CIFAR-10 and CIFAR-100 datasets and proposed several strategies for efficiently using SNNK-layers within Transformer models. KC proposed the FFL linearization-schemes, URFs, and the bundling mechanism, and implemented all linearization-schemes. YZ ran empirical studies on the GLUE, CIFAR-10 and CIFAR-100 datasets. AD implemented and ran all UCI experiments, helped with GLUE/image experiments, proposed a strategy for efficiently using SNNK-layers, and created all figures in the experiments. VL proposed the idea to linearize FFLs by disentangling inputs from weights. All authors contributed to the writing of the manuscript.
|
2308.09037 | MarginMatch: Improving Semi-Supervised Learning with Pseudo-Margins | We introduce MarginMatch, a new SSL approach combining consistency
regularization and pseudo-labeling, with its main novelty arising from the use
of unlabeled data training dynamics to measure pseudo-label quality. Instead of
using only the model's confidence on an unlabeled example at an arbitrary
iteration to decide if the example should be masked or not, MarginMatch also
analyzes the behavior of the model on the pseudo-labeled examples as the
training progresses, to ensure low quality predictions are masked out.
MarginMatch brings substantial improvements on four vision benchmarks in low
data regimes and on two large-scale datasets, emphasizing the importance of
enforcing high-quality pseudo-labels. Notably, we obtain an improvement in
error rate over the state-of-the-art of 3.25% on CIFAR-100 with only 25 labels
per class and of 3.78% on STL-10 using as few as 4 labels per class. We make
our code available at https://github.com/tsosea2/MarginMatch. | Tiberiu Sosea, Cornelia Caragea | 2023-08-17T15:19:04Z | http://arxiv.org/abs/2308.09037v1 | # MarginMatch: Improving Semi-Supervised Learning with Pseudo-Margins
###### Abstract
We introduce MarginMatch, a new SSL approach combining consistency regularization and pseudo-labeling, with its main novelty arising from the use of unlabeled data training dynamics to measure pseudo-label quality. Instead of using only the model's confidence on an unlabeled example at an arbitrary iteration to decide if the example should be masked or not, MarginMatch also analyzes the _behavior_ of the model on the pseudo-labeled examples as the training progresses, to ensure low quality predictions are masked out. MarginMatch brings substantial improvements on four vision benchmarks in low data regimes and on two large-scale datasets, emphasizing the importance of enforcing high-quality pseudo-labels. Notably, we obtain an improvement in error rate over the state-of-the-art of \(3.25\%\) on CIFAR-100 with only \(25\) labels per class and of \(3.78\%\) on STL-10 using as few as \(4\) labels per class. We make our code available at [https://github.com/tsosea2/MarginMatch](https://github.com/tsosea2/MarginMatch).
## 1 Introduction
Deep learning models have seen tremendous success in many vision tasks [14, 42, 27, 43, 22]. This success can be attributed to their scalability, being able to produce better results when they are trained on large datasets in a supervised fashion [15, 27, 34, 35, 43, 47]. Unfortunately, large labeled datasets annotated for various tasks and domains are difficult to acquire and demand considerable annotation effort or domain expertise. Semi-supervised learning (SSL) is a powerful approach that mitigates the requirement for large labeled datasets by effectively making use of information from unlabeled data, and thus, has been studied extensively in vision [44, 45, 23, 36, 38, 39, 4, 5, 25, 30, 36].
Recent SSL approaches integrate two important components: consistency regularization [46, 49] and pseudo-labeling [25]. Consistency regularization works on the assumption that a model should output similar predictions when fed perturbed versions of the same image, whereas pseudo-labeling uses the model's predictions of unlabeled examples as labels to train against. For example, Sohn et al. [41] introduced FixMatch, which combines consistency regularization on weak and strong augmentations with pseudo-labeling. FixMatch relies heavily on a high-confidence threshold to compute the unsupervised loss, disregarding any pseudo-labels whose confidence falls below this threshold. While training using only high-confidence pseudo-labels has been shown to consistently reduce confirmation bias [1], this rigid threshold allows access only to a small amount of unlabeled data for training, and thus ignores a considerable number of unlabeled examples for which the model's predictions do not exceed the confidence threshold. More recently, Zhang et al. [49] introduced FlexMatch, which relaxes the rigid confidence threshold in FixMatch to account for the model's learning status of each class, in that it adaptively scales down the threshold for a class to encourage the model to learn from more examples from that class. The flexible thresholds in FlexMatch allow the model to have access to a much larger and more diverse set of unlabeled data to learn from, but lowering the thresholds can lead to the introduction of wrong pseudo-labels, which are extremely harmful for generalization. Interestingly, even the high-confidence threshold used in FixMatch can result in wrong pseudo-labels. See Figure 1 for incorrect pseudo-labels detected in the training set after we apply FixMatch and FlexMatch on ImageNet. We posit that a drawback of FixMatch and FlexMatch, and in general of any pseudo-labeling approach, is that they use the confidence of the model only at the current iteration to enforce quality of pseudo-labels and completely ignore the model's predictions at prior iterations.
In this paper, we propose MarginMatch, a new SSL approach that monitors the _behavior_ of the model on the unlabeled examples as the training progresses, from the beginning of training until the current iteration, instead of using only the model's current _belief_ about an unlabeled example (i.e., its confidence at the current iteration) to decide if the example should be masked or not.
We estimate a pseudo-label's contribution to learning and generalization by introducing pseudo-margins of unlabeled examples averaged across training iterations. Pseudo-margins of unlabeled examples extend the margins from machine learning [3, 11, 18, 33] which provide a measure of confidence of the outputs of the model and capture the
difference between the output for the correct (gold) label and the other labels. In our case, the pseudo-margins capture how much larger the assigned logit (the logit corresponding to the argmax of the model's prediction) is compared with all other logits at iteration \(t\). Similar to FlexMatch, in MarginMatch we take advantage of the flexible confidence thresholds to allow the model to learn from larger and more diverse sets of unlabeled examples, but unlike FlexMatch, we train the model itself to identify the characteristics of mislabeled pseudo-labels simply by monitoring the model's training dynamics on unlabeled data over the iterations.
We carry out comprehensive experiments using established SSL experimental setups on CIFAR-10, CIFAR-100 [21], SVHN [31], STL-10 [8], ImageNet [10], and WebVision [26]. Despite its simplicity, our findings indicate that MarginMatch produces improvements in performance over strong baselines and prior works on all datasets at no additional computational cost. Notably, compared to current state-of-the-art, on CIFAR-100 we see \(3.02\%\) improvement in error rate using only \(4\) labels per class and \(3.78\%\) improvement on STL-10 using the same extremely label-scarce setting of \(4\) labels per class. In addition, on ImageNet [10] and WebVision [26] we find that MarginMatch pushes the state-of-the-art error rates by \(0.97\%\) on ImageNet and by \(0.79\%\) on WebVision.
Our contributions are as follows:
1. We introduce a new SSL approach which we call MarginMatch that enforces high pseudo-label quality during training. Our approach allows access to a large set of unlabeled data to learn from (thus, incorporating more information from unlabeled data) and, at the same time, monitors the training dynamics of unlabeled data as training progresses to detect and filter out potentially incorrect pseudo-labels.
2. We show that MarginMatch outperforms existing works on six well-established computer vision benchmarks showing larger improvements in error rates especially on challenging datasets, while achieving similar convergence performance (or better) than prior works.
3. We perform a comprehensive analysis of our approach and indicate potential insights into why our MarginMatch substantially outperforms other SSL techniques.
## 2 MarginMatch
NotationLet \(L=\{(x_{1},y_{1}),...,(x_{B},y_{B})\}\) be a batch of size \(B\) of **labeled** examples and \(U=\{\hat{x}_{1},...,\hat{x}_{\nu B}\}\) be a batch of size \(\nu B\) of **unlabeled** examples, where \(\nu\) is the batch-wise ratio of unlabeled to labeled examples. Let \(p_{\theta}(y|x)\) denote the class distribution produced by model \(\theta\) on input image \(x\) and \(\hat{p}_{\theta}(y|x)\) denote the argmax of this distribution as a one-hot label. Let also \(H(p,q)\) denote the cross-entropy between two probability distributions \(p\) and \(q\).
### Background
Consistency regularization [39] is an important component in recent semi-supervised learning approaches and relies on the continuity assumption [2, 23] that the model should output similar predictions on multiple perturbed versions of the same input \(x\). As mentioned above, examples of two such approaches are FixMatch [41] and FlexMatch [49] that use consistency regularization at their core combined with pseudo-labeling. In pseudo-labeling [25], a model itself is used to assign artificial labels for unlabeled data and only artificial labels whose largest class probability is above a predefined confidence threshold are used during training.
Specifically, FixMatch [41] predicts artificial labels for unlabeled examples using a weakly-augmented version of each unlabeled example and then employs the artificial labels as pseudo-labels to train against but this time using a strongly-augmented version of each unlabeled example. That is, FixMatch minimizes the following batch-wise consistency loss on unlabeled data:
Figure 1: Incorrect pseudo-labels propagated until the end of the training process for FixMatch and FlexMatch on ImageNet.
\[\mathcal{L}_{u}=\sum_{i=1}^{\nu B}\mathbbm{1}\left(\max(p_{\theta}(y| \pi(\hat{x}_{i})))>\tau\right)\quad\times\\ H(\hat{p}_{\theta}(y|\pi(\hat{x}_{i})),p_{\theta}(y|\Pi(\hat{x}_{i}))) \tag{1}\]
where \(\tau\) is a confidence threshold, \(\pi\) and \(\Pi\) are weak and strong augmentations, respectively, and \(\mathbbm{1}\) is the indicator function. Therefore, the low-confidence examples (lower than \(\tau\)) are completely ignored despite containing potentially useful information for model training.
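For concreteness, a minimal PyTorch-style sketch of the masking in Eq. 1; the tensor names `logits_w` / `logits_s` for the weakly / strongly augmented views are ours, not the authors' code.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(logits_w, logits_s, tau=0.95):
    # hard pseudo-labels from the weakly augmented view (no gradient)
    probs_w = torch.softmax(logits_w.detach(), dim=-1)
    conf, pseudo = probs_w.max(dim=-1)
    mask = (conf > tau).float()                    # fixed-threshold masking
    # cross-entropy on the strongly augmented view, masked per example
    loss = F.cross_entropy(logits_s, pseudo, reduction="none")
    return (mask * loss).sum()
```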
FlexMatch [49] argues that using a _fixed_ threshold \(\tau\) to filter the unlabeled data ignores the learning difficulties of different classes, and thus, introduces class-dependent thresholds, which are obtained by adaptively scaling \(\tau\) depending on the learning status of each class. FlexMatch assumes that a class with fewer examples above the fixed threshold \(\tau\) has a greater learning difficulty, and hence, it adaptively lowers the threshold \(\tau\) to encourage more training examples from this class to be learned. The learning status \(\alpha_{c}\) for a class \(c\) is simply computed as the number of unlabeled examples that are predicted in class \(c\) and pass the fixed threshold \(\tau\):
\[\alpha_{c}=\sum_{i=1}^{n}\mathbbm{1}(\max(p_{\theta}(y|\pi(\hat{x}_{i})))> \tau)\mathbbm{1}(\hat{p}_{\theta}(y|\pi(\hat{x}_{i}))=c) \tag{2}\]
where \(n\) is the total number of unlabeled examples. This learning effect is then normalized and used to obtain the class-dependent threshold for each class \(c\):
\[\mathcal{T}_{c}=\frac{\alpha_{c}}{\max_{c}(\alpha_{c})}\times\tau \tag{3}\]
In practice, FlexMatch iteratively computes new thresholds after each complete pass through unlabeled data, hence we can parameterize \(\mathcal{T}_{c}\) as \(\mathcal{T}_{c}^{t}\), denoting the threshold obtained at iteration \(t\). The unlabeled loss is then obtained by plugging in the adaptive threshold \(\mathcal{T}_{c}^{t}\) in Eq. 1:
\[\mathcal{L}_{u}=\sum_{i=1}^{\nu B}\mathbbm{1}(\max(p_{\theta}(y| \pi(\hat{x}_{i})))>\mathcal{T}_{\hat{p}_{\theta}(y|\pi(\hat{x}_{i}))}^{t}) \quad\times\\ H(\hat{p}_{\theta}(y|\pi(\hat{x}_{i})),p_{\theta}(y|\Pi(\hat{x}_{i}))) \tag{4}\]
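A minimal sketch of the class-dependent thresholds of Eqs. 2-3, assuming `probs_w` holds the softmax outputs on weak augmentations of all unlabeled examples (the division guard against an all-zero learning status is ours):

```python
import numpy as np

def flexmatch_thresholds(probs_w, num_classes, tau=0.95):
    conf = probs_w.max(axis=-1)
    pred = probs_w.argmax(axis=-1)
    # Eq. 2: per-class count of confident predictions (learning status)
    alpha = np.array([np.sum((conf > tau) & (pred == c)) for c in range(num_classes)])
    # Eq. 3: normalize the learning status and scale tau
    return alpha / max(alpha.max(), 1) * tau
```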
The aforementioned works use the confidence of the model _solely at the current iteration_ to enforce quality of pseudo-labels. We believe this is not sufficient as it provides only a myopic view of the model's behavior (i.e., its confidence) on unlabeled data (at a single iteration) and may result in wrong pseudo-labels even when the confidence threshold is high enough (e.g., if the model is miscalibrated or overly-confident [13]). Figure 1 shows examples of images that are added to the training set with a wrong pseudo-label for both FixMatch and FlexMatch. These types of unlabeled examples, which are incorrectly pseudo-labeled and used during training are particularly harmful for deep neural networks, which can attain zero training error on any dataset, even on randomly assigned labels [50], resulting in poor generalization capabilities.
### Proposed Approach: MarginMatch
We now introduce MarginMatch, our new SSL approach that uses the model's training dynamics on unlabeled data to improve pseudo-label data quality. Our approach leverages consistency regularization with weak and strong augmentations and pseudo-labeling, but instead of using only the model's current _belief_ (i.e., its confidence at the current iteration) to decide if an unlabeled example should be used for training or not, our MarginMatch monitors the training dynamics of unlabeled data over the iterations by investigating the _margins_ (a measure of confidence) of the outputs of the model [3]. The margin of a training example is a well established metric in machine learning [3, 11, 18, 33] that quantifies the difference between the logit corresponding to the assigned ground truth label and the largest other logit.
In our SSL formulation, we redefine the concept of margins to _pseudo-margins_ of unlabeled examples since no ground truth labels are available for the unlabeled data. Let \(c\) be the pseudo-label (or the argmax of the prediction, i.e., \(\hat{p}_{\theta}(y|\pi(\hat{x}))\)) at iteration \(t\) on unlabeled example \(\hat{x}\) after applying weak augmentations. We define the _pseudo-margin_ (PM) of \(\hat{x}\) with respect to pseudo-label \(c\) at iteration \(t\) as follows:
\[\text{PM}_{c}^{t}(\hat{x})=z_{c}-\max_{i\neq c}(z_{i}) \tag{5}\]
where \(z_{c}\) is the logit corresponding to the assigned pseudo-label \(c\) and \(\max_{i\neq c}(z_{i})\) is the largest _other_ logit, corresponding to a label \(i\) different from \(c\). To monitor the model's predictions on \(\hat{x}\) with respect to pseudo-label \(c\) from the beginning of training to iteration \(t\), we average all the margins with respect to \(c\) from the first iteration until \(t\) and obtain the average pseudo-margin (APM) as follows:
\[\text{APM}_{c}^{t}(\hat{x})=\frac{1}{t}\sum_{j=1}^{t}\text{PM}_{c}^{j}(\hat{x}) \tag{6}\]
Here \(c\) acts as the "ground truth" label for the APM calculation. Note that if at a prior iteration \(t^{\prime}\), the assigned pseudo-label is different from \(c\) (say \(c^{\prime}\)), then the APM calculation at iteration \(t^{\prime}\) is done with respect to \(c^{\prime}\) (by averaging all margins with respect to \(c^{\prime}\) from 1 to \(t^{\prime}\)). In practice, we maintain a vector of pseudo-margins for all classes accumulated over the training iterations and dynamically retrieve the accumulated pseudo-margin value of the argmax class \(c\) to obtain the APM\({}_{c}^{t}\) at iteration \(t\).
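A minimal sketch of Eqs. 5-6 (names are ours): we keep a per-class running sum of pseudo-margins for each unlabeled example and read off the entry of the current pseudo-label, exactly as described above.

```python
import numpy as np

def pseudo_margins(logits):
    """Eq. 5 for every class at once: PM_c = z_c - max_{i != c} z_i."""
    top2 = np.sort(logits, axis=-1)[:, -2:]          # two largest logits per row
    pm = logits - top2[:, -1:]                       # z_c - max_i z_i (0 at the argmax)
    rows = np.arange(len(logits))
    argmax = logits.argmax(axis=-1)
    pm[rows, argmax] = logits[rows, argmax] - top2[:, 0]  # argmax entry vs. runner-up
    return pm

# toy usage: per-(example, class) running sums yield the APM of Eq. 6
logits = np.random.randn(5, 10)                      # batch of 5, 10 classes
margin_sum = np.zeros_like(logits)
for t in range(1, 4):
    margin_sum += pseudo_margins(logits + 0.1 * np.random.randn(5, 10))
    apm = margin_sum / t                             # APM^t for every class
```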
Intuitively, if \(c\) is the pseudo-label of \(\hat{x}\) at iteration \(t\), then \(\text{PM}_{c}^{t}\) with respect to class \(c\) at iteration \(t\) will be positive. In contrast, if the argmax of the model prediction on \(\hat{x}\) at a previous iteration \(t^{\prime}<t\) is different from \(c\), then \(PM_{c}^{t^{\prime}}\) at \(t^{\prime}\)
with respect to \(c\) will be negative. Therefore, if over the iterations, the model predictions do not agree frequently with the pseudo-label \(c\) from iteration \(t\) and the model fluctuates significantly between iterations on the predicted label, the APM for class \(c\) will have a low, likely negative value. Similarly, if the model is highly uncertain of the class of \(\hat{x}\) (reflected in a high entropy of the class probability distribution), the APM for class \(c\) will have a low value. These capture the characteristics of mislabeled examples or of those harmful for training. Motivated by these observations, MarginMatch leverages the APM of the assigned pseudo-label \(c\) and compares it with an APM threshold to mask out pseudo-labeled examples with low APMs. Formally, the unlabeled loss in MarginMatch is:
\[\mathcal{L}_{u}=\sum_{i=1}^{\nu B}\mathbb{1}(\text{APM}^{t}_{ \hat{p}_{\theta}(y|\pi(\hat{x}_{i}))}(\hat{x}_{i})>\gamma^{t})\quad\times\\ \mathbb{1}(\max(p_{\theta}(y|\pi(\hat{x}_{i})))>\mathcal{T}^{t}_{ \hat{p}_{\theta}(y|\pi(\hat{x}_{i}))})\quad\times\\ H(\hat{p}_{\theta}(y|\pi(\hat{x}_{i})),p_{\theta}(y|\Pi(\hat{x}_{i}))) \tag{7}\]
where \(\gamma^{t}\) is the APM threshold at iteration \(t\), estimated as explained below, and \(\mathcal{T}^{t}_{\hat{p}_{\theta}(y|\pi(\hat{x}_{i}))}\) is the flexible threshold estimated as in FlexMatch [49]. To train our model, we adopt the best practices [49, 41] and optimize the weighted combination of the supervised and unsupervised losses:
\[\mathcal{L}=\mathcal{L}_{s}+\lambda\mathcal{L}_{u} \tag{8}\]
where the supervised loss is given by:
\[\mathcal{L}_{s}=\frac{1}{B}\sum_{i=1}^{B}H(y_{i},p_{\theta}(y|\pi(x_{i}))) \tag{9}\]
**Average Pseudo-Margin Threshold Estimation**
Inspired by Pleiss et al. [33], we propose to estimate the average pseudo-margin threshold \(\gamma^{t}\) by analyzing the training dynamics of a special category of unlabeled examples, which we force to be _erroneous_ or mislabeled examples. That is, to create the sample of _erroneous_ examples \(E\), we randomly sample a subset of unlabeled examples from \(U\) that we assign to an inexistent (or virtual) class \(C+1\) at the beginning of the training process and remove them from \(U\). The purpose of these erroneous examples is to mimic the training dynamics of incorrectly pseudo-labeled (unlabeled) examples and use them as proxy to estimate the cutoff of (potentially) mislabeled pseudo-labels. Since the examples in \(E\)_should_ belong to one of the \(C\) original classes, assigning them to the inexistent class \(C+1\) makes them by definition mislabeled (see Appendix A for additional insights into this virtual class). As with all unlabeled examples from \(U\), we compute \(APM^{t}_{C+1}\) for the special category of erroneous examples from \(E\), but unlike the unlabeled examples from \(U\), the erroneous ones from \(E\) have a fixed class \(C+1\). To mimic the training dynamics of unlabeled examples from \(U\), we use strong augmentations to compute the loss of the erroneous examples from \(E\). That is, given a batch \(E_{b}\) of \(B\) erroneous examples, the erroneous sample loss becomes:
\[\mathcal{L}_{e}=\sum_{i=1}^{B}H(C+1,p_{\theta}(y|\Pi(\tilde{x}_{i}))) \tag{10}\]
At iteration \(t\), we use the APMs of the erroneous examples to choose the APM threshold \(\gamma^{t}\). We set \(\gamma^{t}\) as the APM of the \(95^{th}\) percentile erroneous sample. The total loss becomes:
\[\mathcal{L}=\mathcal{L}_{s}+\lambda(\mathcal{L}_{u}+\mathcal{L}_{e}) \tag{11}\]
Our full MarginMatch algorithm is shown in Algorithm 1.
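A minimal sketch of the threshold estimation and of the double masking in Eq. 7 (names are ours): `apm_err` holds the \(\text{APM}_{C+1}\) values of the erroneous examples in \(E\), and an unlabeled example contributes to the loss only if it passes both the flexible confidence threshold and the APM threshold.

```python
import numpy as np

def apm_threshold(apm_err, q=95):
    # gamma^t: the APM of the 95th-percentile erroneous sample
    return np.percentile(apm_err, q)

def keep_mask(conf, pseudo, apm_of_pseudo, class_thresholds, gamma):
    # Eq. 7: both the flexible confidence mask and the APM mask must pass
    return (conf > class_thresholds[pseudo]) & (apm_of_pseudo > gamma)
```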
Exponential Moving Average of Pseudo-MarginsThe current definition of the APM weighs the pseudo-margin at iteration \(t\) identically to the pseudo-margin at a much earlier iteration \(p\) (\(t\gg p\)). This is problematic since very old pseudo-margins eventually become deprecated (especially due to the large number of iterations through unlabeled data in consistency training (\(\sim 9K\))), and hence, the old margins are no longer indicative of the current learning status of the model. To this end, instead of averaging all pseudo-margins (from the beginning of training to the current iteration), we propose to use an exponential moving average to place more importance on recent iterations. Formally, the APM becomes:
\[\text{APM}^{t}_{c}(\hat{x})=\text{PM}^{t}_{c}(\hat{x})*\frac{\delta}{1+t}+ \text{APM}^{t-1}_{c}(\hat{x})*(1-\frac{\delta}{1+t}) \tag{12}\]
We set the smoothing parameter \(\delta\) to \(0.997\) in experiments.
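A one-line sketch of the EMA update of Eq. 12 (our helper name), with \(\delta=0.997\) as in the paper:

```python
def ema_apm(apm_prev, pm_t, t, delta=0.997):
    # Eq. 12: the new pseudo-margin receives weight delta / (1 + t)
    w = delta / (1 + t)
    return pm_t * w + apm_prev * (1 - w)
```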
## 3 Experiments
We evaluate the performance of our MarginMatch on a wide range of SSL benchmark datasets. Specifically, we perform experiments with various numbers of labeled examples on CIFAR-10, CIFAR-100 [21], SVHN [31], STL-10 [8], ImageNet [10], and WebVision [26]. For smaller scale datasets such as CIFAR-10, CIFAR-100, SVHN, and STL-10 we randomly sample a small number of labeled examples per class (ranging from \(4\) labels per class to \(400\) labels per class) and treat them as labeled data, whereas the remaining labeled examples are treated as unlabeled data, except for STL-10 [8], which provides its own set of unlabeled examples. On ImageNet and WebVision, we use \(\sim 10\%\) of the available labeled examples as labeled data, with the remaining (\(90\%\)) being treated as unlabeled data. In all our experiments, we sample \(5\%\) of the unlabeled data and place it in the set of erroneous examples.
We report the mean and standard deviation of error rates from five runs with different parameter initializations. Similar to FixMatch [41], we use Wide Residual Networks [48]: WRN-28-2 for CIFAR-10 and SVHN; WRN-28-8 for CIFAR-100; and WRN-37-2 for STL-10. We use ResNet-50 [14] for both ImageNet and WebVision.
In our experiments, we adopt the same hyperparameters as FixMatch [41]. Specifically, we use stochastic gradient descent (SGD) with a momentum of \(0.9\). We start with a learning rate of \(\eta_{0}=0.03\) and employ a cosine learning rate schedule; at iteration \(k\), our learning rate is \(\eta(k)=\eta_{0}\cos(\frac{7k\pi}{16K})\), where \(K\) is the maximum number of iterations and is set to \(2^{20}\). We also leverage the same data augmentations as in FlexMatch [49]. Specifically, for weak augmentations we employ a standard flip-and-shift augmentation and use RandAugment [9] for strong augmentations. We set the batch size \(B=64\), the ratio of unlabeled to labeled data in a batch to \(\nu=7\), and weigh the supervised and unsupervised losses equally (i.e., \(\lambda=1\)). We set our initial confidence threshold to \(\tau=0.95\) and our average pseudo-margin (APM) threshold to the APM of the \(95^{th}\) percentile erroneous sample. To report the error rates, we compare all the approaches using the model at the end of training, as in FixMatch [41].
### Cifar-10, CIFAR-100, SVHN, and STL-10
We compare MarginMatch against strong baselines and prior works: Pseudo-Labeling [1], Unsupervised Data Augmentation (UDA) [46], MixMatch [5], ReMixMatch [4], FixMatch [41], and FlexMatch [49]. We show in Table 1 the error rates obtained by our MarginMatch and the baselines on the CIFAR-10, CIFAR-100, SVHN and STL-10 datasets. First, we observe that our approach improves the performance on both CIFAR-10 and CIFAR-100. On CIFAR-10, MarginMatch improves upon FlexMatch [49], the current state-of-the-art, in all data regimes, while maintaining a low error rate standard deviation. On CIFAR-100, which is significantly more challenging than CIFAR-10, we observe that MarginMatch brings substantially larger improvements. Notably, we see \(3.02\%\) improvement over FlexMatch in error rate using only \(4\) labels per class, and \(3.25\%\) improvement using \(25\) examples per class. These results on CIFAR-100 emphasize the effectiveness of MarginMatch, which performs well on a more challenging dataset.
On SVHN, our approach performs better than FixMatch using \(4\) labels per class and performs similarly to FixMatch using \(25\) and \(100\) labels per class. However, on this dataset, MarginMatch performs much better compared with FlexMatch. For example, MarginMatch achieves \(3.75\%\) error rate using \(4\) labels per class, whereas FlexMatch obtains an error rate of \(8.19\%\) with the same labels per class,
[Table 1 body omitted: the extracted tabular data is garbled beyond reliable reconstruction. It reports mean test error rates (with standard deviations over five runs) of Pseudo-Labeling, UDA, MixMatch, ReMixMatch, FixMatch, FlexMatch, and MarginMatch, for 4/25/400 labels per class on CIFAR-10 and 4/25/100 labels per class on CIFAR-100, SVHN, and STL-10.]
Table 1: Test error rates on CIFAR-10, CIFAR-100, SVHN, and STL-10 datasets. Best results are shown in **blue**.
yielding an improvement of MarginMatch of \(4.44\%\) over FlexMatch. We hypothesize that the low performance of FlexMatch is due to its limitation in handling unbalanced class distributions [49]. On STL-10, MarginMatch likewise outperforms all the other approaches, both in error rate and in error rate standard deviation. Notably, on this dataset, our approach improves upon FlexMatch by \(3.78\%\) in error rate using only \(4\) labels per class and by \(0.92\%\) using \(25\) labels per class.
Next, we compare MarginMatch with FixMatch and FlexMatch in terms of convergence speed in the extremely label-scarce setting of 4 labels per class and show these results in Figure 2. Notably, we observe that MarginMatch has a similar convergence speed (or even better on CIFAR-100) compared with FlexMatch while achieving a lower test error rate than FlexMatch on all datasets with 4 labels per class (see Table 1). Even more strikingly, compared with FixMatch, MarginMatch has a much superior convergence speed for a much better test error rate with 4 labels per class. This is because the rigid thresholds in FixMatch allow access only to a small amount of unlabeled data for training at each iteration and it takes a lot longer for the model to train.
### ImageNet and WebVision
To showcase the effectiveness of our approach in a large-scale setup, we test our MarginMatch on ImageNet [10] and WebVision [26] using \(10\%\) labeled examples in total. We show the results obtained in Table 2. We observe that our MarginMatch outperforms FixMatch and FlexMatch on both datasets. It is worth noting that large-scale self-supervised approaches such as SimCLR [7] achieve high performance on ImageNet, but at a much higher computational cost. MarginMatch outperforms other SSL methods using the same ResNet-50 architecture at the same computational cost. We emphasize that MarginMatch is most successful and relevant in low data regimes on smaller datasets.
## 4 Ablation Study
**Exponential Moving Average Smoothing for APM Computation** In our approach, we employ an exponential moving average (EMA) of the pseudo-margin values with a smoothing value of \(\delta=0.997\) to compute the APM. We now analyze how our approach performs with different EMA smoothing values or with no EMA at all. Table 3 shows these results on CIFAR-100 with 4 labels per class. First, we observe that employing a simple average of pseudo-margin values for the APM computation (i.e., \(\delta=1\)) performs extremely poorly, obtaining a \(39.72\)% error rate. This result emphasizes that margins eventually become deprecated and it is essential to scale them down in time. Using a low smoothing factor of \(\delta=0.95\) is not effective either, indicating that abruptly forgetting margin values is also harmful. Our chosen \(\delta=0.997\) strikes a balance between the two by eliminating the harmful effects of very old margins while keeping track of a good amount of previous estimates (e.g., a margin value computed \(200\) epochs previously is scaled down by \(0.55\), while a margin value computed \(1000\) epochs previously is scaled by \(0.05\)).
**Pseudo-Margin vs. Other Measures for Pseudo-Label Correctness** Our MarginMatch monitors the pseudo-margins of a model's predictions across training iterations to ensure the quality of pseudo-labels. However, other measures such as confidence or entropy exist that can assess pseudo-label correctness. Hence, we perform an ablation where we replace the pseudo-margins in our MarginMatch with average confidence and entropy and compare their performance.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline Dataset & \multicolumn{2}{c|}{ImageNet} & \multicolumn{2}{c}{WebVision} \\ & top-1 & top-5 & top-1 & top-5 \\ \hline Supervised & \(48.39\) & \(25.49\) & \(49.58\) & \(26.78\) \\ FixMatch & \(43.66\) & \(21.80\) & \(44.76\) & \(22.65\) \\ FlexMatch & \(42.02\) & \(19.49\) & \(43.87\) & \(22.07\) \\ \hline MarginMatch & \(\mathbf{41.05}\) & \(\mathbf{18.28}\) & \(\mathbf{43.08}\) & \(\mathbf{21.13}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test error rates on the ImageNet and WebVision datasets. Best results are shown in **blue**.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline \(\delta\) & \(0.95\) & \(0.99\) & \(0.995\) & \(0.997\) & \(0.999\) & \(1\) \\ \hline err. rate & \(38.13\) & \(38.05\) & \(37.92\) & \(\mathbf{37.91}\) & \(39.12\) & \(39.72\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Error rates obtained on CIFAR-100 with four examples per class and various smoothing values \(\delta\). Best result is in **blue**.
Figure 2: Convergence speed of MarginMatch against FixMatch and FlexMatch with 4 labels per class.
Specifically, we design the following approaches: **1) Avg Confidence** monitors the confidence of the prediction for each unlabeled example and takes the average over the training iterations; **2) Avg Entropy** monitors the entropy of the class probability distribution for each unlabeled example and takes the average across the training iterations. In addition, we also consider **3) EMA Confidence** and **4) EMA Entropy**, which are similar to Avg Confidence and Avg Entropy, respectively, but use an exponential moving average (EMA) instead of simple averaging. The estimation of the threshold for each of these approaches is done in a similar manner as for pseudo-margins, using erroneous samples and considering the value of the 95th percentile erroneous sample as the threshold.
We show the results obtained using these approaches in Table 4. First, we observe that all measures (pseudo-margin, confidence, and entropy) with EMA perform better than their counterparts with simple averaging. Second, EMA Margin achieves the lowest test error rates compared with EMA Confidence and EMA Entropy. Thus, we conclude that pseudo-margins provide an excellent measure of pseudo-label correctness. See Appendix B for some additional insights into why EMA Margin outperforms EMA Confidence and EMA Entropy.
## 5 Analysis
### Mask Rate and Impurity
We now contrast MarginMatch with FixMatch and FlexMatch in terms of the quality of pseudo-labels using two metrics, _mask rate_ and _impurity_, and show these results in Figure 3 using CIFAR-100 with \(4\) labels per class. _Mask rate_ is defined as the fraction of pseudo-labeled examples that _do not_ participate in the training at epoch \(t\) due to confidence masking or pseudo-margin masking (or both). _Impurity_, in contrast, is defined as the fraction of pseudo-labeled examples that _do_ participate in the training at epoch \(t\), but with a wrong label. An effective SSL model minimizes both metrics: a low mask rate indicates that the model has access to more unlabeled examples during training (otherwise only a small and less diverse set of unlabeled examples is seen during training), while low impurity indicates that the pseudo-labels of these examples are of high quality. Note that we can compute impurity here because our unlabeled data comes from the labeled training set (thus, we can compare the pseudo-labels against the gold labels).
As can be seen from the figure, FixMatch has a significantly larger mask rate due to the rigid confidence threshold set to a high value of \(0.95\). In contrast, FlexMatch lowers the mask rate by \(5\%\) with the introduction of flexible thresholds, but has a much higher impurity compared with FixMatch. Notably, our MarginMatch has only a slightly higher mask rate compared with FlexMatch and, at the same time, achieves a much lower impurity than FlexMatch and even FixMatch, despite the very high confidence threshold employed by FixMatch. These results show that MarginMatch, which enforces an additional measure of pseudo-label quality, maintains a low mask rate without compromising the quality of the pseudo-labels (i.e., low mask rate and low impurity).
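Both diagnostics are simple to compute when gold labels are held back for evaluation; a minimal sketch (our helper, following the definitions above):

```python
import numpy as np

def mask_rate_and_impurity(included, pseudo, gold):
    """included: boolean mask of examples used in the loss at this epoch;
    pseudo/gold: pseudo-labels and held-back gold labels of all examples."""
    mask_rate = 1.0 - included.mean()
    used = included.astype(bool)
    impurity = (pseudo[used] != gold[used]).mean() if used.any() else 0.0
    return mask_rate, impurity
```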
### Anecdotal Evidence
We show in Figure 4 anecdotal evidence of the effectiveness of MarginMatch. To this end, we extract from our unlabeled portion of CIFAR-10 [21] two _bird_ images of varying learning difficulty that resemble characteristics of _plane_ images (e.g., the background). The top part of the figure illustrates the progression over the training iterations of the confidence and the confidence thresholds of FlexMatch for the classes _bird_ and _plane_, whereas the bottom part of the figure illustrates the progression of the APM threshold of MarginMatch along with its APMs of the _bird_ and _plane_ classes over the training iterations. In the rightmost image, for MarginMatch we can observe that the APM of the _bird_ class grows steadily as the training progresses and eventually exceeds the APM threshold of MarginMatch, and
[Table 4 body omitted: the extracted tabular data is garbled beyond reliable reconstruction. It reports test error rates of Avg Confidence, Avg Entropy, Avg Margin, EMA Confidence, EMA Entropy, and EMA Margin on CIFAR-10 (4/25/400 labels per class), CIFAR-100, SVHN, and STL-10 (4/25/100 labels per class each); EMA Margin obtains the best results.]
Table 4: Test error rates comparing pseudo-margin with confidence and entropy. Best results are shown in **blue**.
Figure 3: Mask rate and impurity on CIFAR-100 with \(4\) labeled examples per class.
hence, the image is included in the training until the end with the correct _bird_ label. Interestingly, for the same image, in FlexMatch the confidence for the _bird_ class is very close to the _bird_ confidence threshold and eventually falls below this threshold and exits the training set.
In contrast, the leftmost image is significantly more challenging than the rightmost image since it is more similar to _plane_ images, which makes it an easily confusable example. Here, we observe that the confidence of FlexMatch exceeds the flexible threshold with the incorrect argmax class _plane_ starting from iteration \(3000\). Moreover, FlexMatch continues to use this image with the wrong _plane_ label for the remainder of the training process. Critically, in MarginMatch the APM value for the _plane_ class does not exceed the APM threshold, and the model eventually learns to classify this image correctly and includes it in training with the correct _bird_ pseudo-label.
## 6 Related Work
Here, we focus on various SSL approaches that our MarginMatch directly builds upon, although there are other SSL techniques that are not presented in this review, such as approaches based on generative models [17, 24], graph-based approaches [19, 20] and robust SSL [32, 37] (see Appendix C for a comparison between MarginMatch and Robust SSL approaches).
**Self-training**[40, 47, 28, 36] is a popular SSL method where the predictions of a model on unlabeled data are used as artificial labels to train against. Noisy student training [47] is a popular self-training approach that also leverages knowledge distillation [16] and iteratively jointly trains two models in a teacher-student framework. Noisy student uses a larger model size and noised inputs, exposing the student to more difficult learning environments, leading to an increased performance compared to the teacher.
**Pseudo-labeling** is a variant of self-training where these predictions are sharpened to obtain hard labels [25]. The use of hard labels can be seen as a means of entropy minimization [12] and nowadays is a valuable component in most successful SSL approaches [4, 5, 49]. These hard labels are usually used along with a confidence threshold, where unconfident unlabeled examples are completely disregarded (e.g., [5]) to avoid using noisy pseudo-labels. Recently, approaches such as Curriculum Labeling (CL) [6] or FlexMatch [49] started to explore curriculum learning in the SSL context. CL proposes a self-pacing strategy of identifying easy and hard examples to ensure that the model first uses easy examples and progressively moves towards harder ones. Similarly, MarginMatch uses curriculum learning and pseudo-labeling, but the focus of our approach is placed on producing better thresholds for assessing the quality of pseudo-labels.
**Consistency regularization**[2] is a method that applies random perturbations when generating the artificial label, such as data augmentation [4, 5, 41], dropout [39], or adversarial perturbations [29]. Current state-of-the-art approaches [41, 49] exploit a combination of weak and strong data augmentations, which were shown to be extremely beneficial in SSL. The most popular strong augmentations used in the SSL literature are RandAugment [9] and CTAugment [4]. The approaches based on these methods first generate a hard label using pseudo-labeling on a weakly augmented image (i.e., using a low noise transformation such as a flip-and-shift augmentation), then optimize the predictions of the model on a strongly augmented version of the same image towards this hard label. Similar to these approaches, MarginMatch uses the same combination of weak and strong data augmentations.
## 7 Conclusion
In this paper, we proposed a novel semi-supervised learning method that improves the pseudo-label quality using training dynamics. Our new method is lightweight and achieves state-of-the-art performance on four computer vision SSL datasets in low data regimes and on two large-scale benchmarks. MarginMatch takes into consideration not only a flexible confidence threshold to account for the difficulty of each class, but also a measure of quality for each unlabeled example using training dynamics. In addition, MarginMatch is a general approach that can be leveraged in most SSL frameworks and we hope that it can attract future research in analyzing the effectiveness of SSL approaches focused on data quality. As future work, we aim to further explore our method in settings when there is a mismatch between the labeled and unlabeled data distributions (i.e., making use of out-of-domain unlabeled data).
Figure 4: Confidence thresholding vs. APM Thresholding on two images from the CIFAR-10 dataset. |
2302.04548 | Demonstration of deterministic SWAP gate between superconducting and
frequency-encoded microwave-photon qubits | The number of superconducting qubits contained in a single quantum processor
is increasing steadily. However, to realize a truly useful quantum computer, it
is inevitable to increase the number of qubits much further by distributing
quantum information among distant processors using flying qubits. Here, we
demonstrate a key element towards this goal, namely, a SWAP gate between the
superconducting-atom and microwave-photon qubits. The working principle of this
gate is the single-photon Raman interaction, which results from strong
interference in one-dimensional optical systems and enables a high gate
fidelity insensitively to the pulse shape of the photon qubit, by simply
bouncing the photon qubit at a cavity attached to the atom qubit. We confirm
the bidirectional quantum state transfer between the atom and photon qubits.
The averaged fidelity of the photon-to-atom (atom-to-photon) state transfer
reaches 0.829 (0.801), limited mainly by the energy relaxation time of the atom
qubit. The present atom-photon gate, equipped with an in situ tunability of the
gate type, would enable various applications in distributed quantum computation
using superconducting qubits and microwave photons. | Kazuki Koshino, Kunihiro Inomata | 2023-02-09T10:27:16Z | http://arxiv.org/abs/2302.04548v1 | Demonstration of deterministic SWAP gate between superconducting and frequency-encoded microwave-photon qubits
###### Abstract
The number of superconducting qubits contained in a single quantum processor is increasing steadily. However, to realize a truly useful quantum computer, it is inevitable to increase the number of qubits much further by distributing quantum information among distant processors using flying qubits. Here, we demonstrate a key element towards this goal, namely, a SWAP gate between the superconducting-atom and microwave-photon qubits. The working principle of this gate is the single-photon Raman interaction, which results from strong interference in one-dimensional optical systems and enables a high gate fidelity insensitively to the pulse shape of the photon qubit, by simply bouncing the photon qubit at a cavity attached to the atom qubit. We confirm the bidirectional quantum state transfer between the atom and photon qubits. The averaged fidelity of the photon-to-atom (atom-to-photon) state transfer reaches 0.829 (0.801), limited mainly by the energy relaxation time of the atom qubit. The present atom-photon gate, equipped with an _in situ_ tunability of the gate type, would enable various applications in distributed quantum computation using superconducting qubits and microwave photons.
## I Introduction
The number of solid-state qubits contained in a single processor is steadily increasing [1; 2] and has recently reached three digits. However, an incomparably larger number of qubits is required in order to make such quantum machines truly useful. Therefore, in the near future, it will be indispensable to distribute quantum information among remote quantum processors [3; 4; 5; 6], using deterministic quantum interactions between stationary and flying qubits [7; 8].
In superconducting quantum computation, stationary qubits are encoded on various types of superconducting atoms and flying qubits are encoded on microwave photons propagating in waveguides. In such setups, the atom-photon interaction is drastically enhanced owing to the natural spatial mode-matching between radiation from the atom and a propagating photon in the waveguide [9; 10; 11; 12; 13; 14; 15]. Applying such waveguide QED effects, single microwave photon detection has been accomplished [16; 17; 18; 19; 20; 21]. Another prominent achievement in this field is the deterministic release and catch of a photon by remote atoms, which is accomplished by tuning the atom-waveguide coupling and has been applied for remote entanglement generation and a photon-photon gate [22; 23; 24; 25; 26; 27; 28]. A technical difficulty with such _active_ atom-photon interaction is the need for precise temporal control of the atom-waveguide coupling in accordance with the pulse shape of the photon, without which the capture probability of propagating photons is substantially lost.
In this paper, we demonstrate a deterministic SWAP gate between a superconducting atom and a microwave photon [29]. This gate is based on an essentially _passive_ working principle, the single-photon Raman interaction [30; 31; 32; 33; 34; 35; 36; 37]. This is a phenomenon characteristic of one-dimensional optical systems, originating in the strong destructive interference between a photon field applied to an emitter and the radiation from it. This guarantees a high-fidelity gate operation insensitive to the shape and length of the input photon qubit. Besides this point, the present scheme has the following merits for practical implementation. Simple setup--the required system for the present gate is an atom and a resonator coupled in the dispersive regime, each coupled to independent waveguides (Fig. 1). This is a quite common element in a superconducting quantum processor adopting the dispersive qubit readout [38; 39]. Gate tunability--although we demonstrate only the SWAP gate in this work, the present gate is a more general (SWAP)\({}^{\alpha}\) gate (\(0\leq\alpha\leq 1\)) [40; 41; 42; 43] equipped with an _in situ_ tunability of the gate type \(\alpha\) through the amplitude and frequency of the drive pulse to the atom [29]. In particular, in combination with the atom-photon entangling gate (\(\sqrt{\text{SWAP}}\), \(\alpha=1/2\)), the present scheme is applicable to entanglement generation between remote superconducting qubits as well as to a deterministic photon-photon entangling gate. Dual-rail encoding of photon qubit--the photon qubit is encoded on its two different carrier frequencies [44; 45; 46; 47]. In contrast with the single-rail (photon-number) encoding, we can avoid fidelity degradation due to photon loss and the need to share a phase reference [48].
The rest of this paper is organized as follows. In Sec. II, we present the setup to realize the SWAP gate between a superconducting atom and a microwave photon and explain the working principle of the gate. In Sec. III, we demonstrate the atom-photon SWAP gate. More concretely, we confirm both photon-to-atom and atom-to-photon qubit transfer by constructing the final atom/photon density matrix. The photon qubit in the present gate is in principle a single photon, but we use a weak coherent-state photon instead in this work. Section IV is devoted to the summary. In Appendices A and B, details on the density matrix construction are presented. In Appendix C, experimental details are presented.
## II Atom-photon SWAP gate
### Setup
The setup for the present atom-photon SWAP gate is a common one in superconducting quantum computing: a superconducting atom is dispersively coupled to a microwave resonator, and transmission lines are attached to both of them (Fig. 1). One of the lines (Port 2) is coupled to the atom and a microwave drive pulse, which transforms the _bare_ states of the atom-resonator system to the _dressed_ ones within the pulse duration, is applied through this line. The other line (Port 1) is coupled to the resonator and a single microwave photon, which serves as a photon qubit, is input through this line synchronously with the drive pulse. The atom qubit is encoded on its ground and excited states, \(|g\rangle\) and \(|e\rangle\). The photon qubit is encoded on its two different carrier frequencies, \(|\omega_{L}\rangle\) and \(|\omega_{H}\rangle\), where \(\omega_{L}\) (\(\omega_{H}\)) denotes the lower (higher) carrier frequency. The gate operation completes deterministically by bouncing the photon qubit applied through Port 1.
### Principle of SWAP gate
Here, we outline the working principle of the present gate. We label the eigenstates of the atom-resonator system by \(|a,n\rangle\), where \(a=\{g,e\}\) and \(n=\{0,1,\cdots\}\) respectively specify the atomic state and the photon number in the resonator. The atom-resonator system is in the dispersive coupling regime, and the eigenfrequencies are given by \(\omega_{|g,n\rangle}=n\omega_{\rm r}\) and \(\omega_{|e,n\rangle}=\omega_{\rm ge}+n(\omega_{\rm r}-2\chi)\), where \(\omega_{\rm ge}\) and \(\omega_{\rm r}\) are the renormalized frequencies of the atom and the resonator and \(\chi\) is the dispersive shift. In the present atom-photon gate, we use the lowest four levels of the atom-
Figure 1: Setup for the SWAP gate between superconducting-atom and microwave-photon qubits. (a) Schematic of the setup. (b) False-colored optical micrograph of the actual device.
Figure 2: Level structure of the atom-resonator system. (a) Bare states, where the drive field from Port 2 is off. (b) Dressed states, where the drive field is on. Note that the energy diagram (b) is drawn in the frame rotating at the drive frequency \(\omega_{\rm d}\).
resonator system (\(a=\{g,e\}\) and \(n=\{0,1\}\)). The principal decay channel of this four-level system is the radiative decay of the resonator to Port 1 with a rate of \(\kappa\). Therefore, when the drive field is off, the radiative decay within this four-level system occurs vertically with \(\kappa\) [Fig. 2(a)].
During the interaction between the photon qubit and the atom-resonator system, we apply a microwave drive to the atom from Port 2. This drive field hybridizes the lower-two _bare_ states \(|g,0\rangle\) and \(|e,0\rangle\) to form the _dressed_ states \(|\widetilde{1}\rangle\) and \(|\widetilde{2}\rangle\). By switching on/off the drive field adiabatically, we can convert the bare and dressed states deterministically as \(|g,0\rangle\leftrightarrow|\widetilde{1}\rangle\) and \(|e,0\rangle\leftrightarrow|\widetilde{2}\rangle\). Similarly, the higher-two states are converted as \(|e,1\rangle\leftrightarrow|\widetilde{3}\rangle\), and \(|g,1\rangle\leftrightarrow|\widetilde{4}\rangle\). In addition to vertical decays (\(|\widetilde{3}\rangle\rightarrow|\widetilde{2}\rangle\) and \(|\widetilde{4}\rangle\rightarrow|\widetilde{1}\rangle\)), oblique decays (\(|\widetilde{3}\rangle\rightarrow|\widetilde{1}\rangle\) and \(|\widetilde{4}\rangle\rightarrow|\widetilde{2}\rangle\)) become allowed due to hybridization in a dressed-state basis.
In particular, under a proper choice of the frequency and power of the drive field, the four radiative decay rates take an identical value of \(\kappa/2\) [Fig. 2(b)]. Then, the levels \(|\widetilde{1}\rangle\), \(|\widetilde{2}\rangle\) and \(|\widetilde{j}\rangle\) (\(j=3\) or \(4\)) function as an "impedance-matched" \(\Lambda\) system [29; 33]: if the system is in the state \(|\widetilde{1}\rangle\) initially and a single photon with frequency \(\omega_{j1}=\omega_{|\widetilde{j}\rangle}-\omega_{|\widetilde{1}\rangle}\) is input from Port 1, a Raman transition \(|\widetilde{1}\rangle\rightarrow|\widetilde{j}\rangle\rightarrow|\widetilde{2}\rangle\) is deterministically induced. As a result, the \(\Lambda\) system switches to the state \(|\widetilde{2}\rangle\) and the input photon is down-converted to frequency \(\omega_{j2}\) after reflection. In this study, we choose \(j=4\) and set \(\omega_{L}=\omega_{42}\) and \(\omega_{H}=\omega_{41}\) as the lower and higher carrier frequencies of the photon qubit. The time evolution of the atom and photon qubits is then written as \(|\widetilde{1},\omega_{H}\rangle\rightarrow|\widetilde{2},\omega_{L}\rangle\). The inverse process, \(|\widetilde{2},\omega_{L}\rangle\rightarrow|\widetilde{1},\omega_{H}\rangle\), is also deterministic. In contrast, for the initial states \(|\widetilde{1},\omega_{L}\rangle\) and \(|\widetilde{2},\omega_{H}\rangle\), the input photon is perfectly reflected as it is, without interacting with the \(\Lambda\) system, since it is out of resonance with the \(\Lambda\) system. Namely, \(|\widetilde{1},\omega_{L}\rangle\rightarrow|\widetilde{1},\omega_{L}\rangle\) and \(|\widetilde{2},\omega_{H}\rangle\rightarrow|\widetilde{2},\omega_{H}\rangle\). These four time evolutions are summarized as
\[(\beta_{1}|\widetilde{1}\rangle+\beta_{2}|\widetilde{2}\rangle)\otimes( \gamma_{1}|\omega_{L}\rangle+\gamma_{2}|\omega_{H}\rangle)\rightarrow(\gamma_ {1}|\widetilde{1}\rangle+\gamma_{2}|\widetilde{2}\rangle)\otimes(\beta_{1}| \omega_{L}\rangle+\beta_{2}|\omega_{H}\rangle), \tag{1}\]
where \(\beta_{1},\cdots,\gamma_{2}\) are arbitrary coefficients satisfying \(|\beta_{1}|^{2}+|\beta_{2}|^{2}=|\gamma_{1}|^{2}+|\gamma_{2}|^{2}=1\).
Before and after the interaction between the photon qubit and the atom-resonator system, we switch off the drive field. Therefore, the atom-resonator system returns to the bare state basis as \(|\widetilde{1}\rangle=|g,0\rangle\) and \(|\widetilde{2}\rangle=|e,0\rangle\). Omitting the resonator's state, which is in the vacuum state at both the initial and final moments, Eq. (1) is rewritten as
\[(\beta_{1}|g\rangle+\beta_{2}|e\rangle)\otimes(\gamma_{1}|\omega_{L}\rangle+ \gamma_{2}|\omega_{H}\rangle)\rightarrow(\gamma_{1}|g\rangle+\gamma_{2}|e \rangle)\otimes(\beta_{1}|\omega_{L}\rangle+\beta_{2}|\omega_{H}\rangle). \tag{2}\]
This is a SWAP gate between the atom qubit and the photon qubit applied through Port 1.
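As a quick consistency check on Eq. (2), the following minimal Python sketch (ours, not part of the original experiment) encodes the four elementary processes above as a permutation on the product basis \(\{|g,\omega_{L}\rangle,|g,\omega_{H}\rangle,|e,\omega_{L}\rangle,|e,\omega_{H}\rangle\}\) and verifies that the map exchanges the amplitudes of arbitrary atom and photon qubits, i.e., that it is exactly the canonical two-qubit SWAP; the basis ordering is our own choice.

```python
import numpy as np

# Basis ordering: |g,w_L>, |g,w_H>, |e,w_L>, |e,w_H>.
# Eq. (2): |g,w_H> <-> |e,w_L>; |g,w_L> and |e,w_H> are unchanged.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

rng = np.random.default_rng(0)

def random_qubit():
    """Random normalized single-qubit state."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

atom = random_qubit()    # (beta_1, beta_2) on {|g>, |e>}
photon = random_qubit()  # (gamma_1, gamma_2) on {|w_L>, |w_H>}

out = SWAP @ np.kron(atom, photon)
expected = np.kron(photon, atom)   # Eq. (2): amplitudes exchanged

assert np.allclose(out, expected)
print("Eq. (2) acts as a SWAP on arbitrary product states.")
```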
## III Demonstration of SWAP gate
In this section, we demonstrate the atom-photon SWAP gate. In principle, this should be done by applying a single-photon pulse from Port 1 as the photon qubit. However, in this study, we use a weak coherent-state pulse instead, whose mean photon number \(|\alpha|^{2}\) is much smaller than unity. This pulse is dichromatic in general, with the carrier frequencies \(\omega_{L}\) and \(\omega_{H}\), and has a Gaussian temporal profile with a pulse length of \(t_{\rm p}=100\) ns. This is long enough to satisfy the condition for a high-fidelity atom-photon gate, \(t_{\rm p}\gg 1/\kappa\), where \(\kappa\) is the resonator decay rate into Port 1 (see Table 3). If the initial states of the atom and photon qubits are \(\beta_{1}|g\rangle+\beta_{2}|e\rangle\) and \(\gamma_{1}|\omega_{L}\rangle+\gamma_{2}|\omega_{H}\rangle\), respectively, the initial state vector of the atom-photon system is written as
\[|\psi_{i}\rangle\ =\ (\beta_{1}|g\rangle+\beta_{2}|e\rangle)\otimes e^{-| \alpha|^{2}/2}\left[|0\rangle+\alpha(\gamma_{1}|\omega_{L}\rangle+\gamma_{2}| \omega_{H}\rangle)+\cdots\right], \tag{3}\]
where \(\alpha\) represents the complex amplitude of the input photon-qubit pulse, and the dots represent the multiphoton components in the pulse, which are negligible when \(|\alpha|^{2}\ll 1\). This state vector is rewritten as
\[|\psi_{i}\rangle\ =\ c_{1}|g,0\rangle+c_{2}|e,0\rangle+c_{3}|g,\omega_{L}\rangle +c_{4}|e,\omega_{L}\rangle+c_{5}|g,\omega_{H}\rangle+c_{6}|e,\omega_{H} \rangle+\cdots, \tag{4}\]
where \((c_{1},\cdots,c_{6})=e^{-|\alpha|^{2}/2}\times(\beta_{1},\beta_{2},\alpha \beta_{1}\gamma_{1},\alpha\beta_{2}\gamma_{1},\alpha\beta_{1}\gamma_{2},\alpha \beta_{2}\gamma_{2})\).
The atom-photon SWAP gate is completed by bouncing the photon qubit at the capacitance connecting Port 1 and the resonator. The time evolution is given by Eq. (1) and results in the following final state vector,
\[|\psi_{f}\rangle\ =\ c_{1}|g,0\rangle+c_{2}|e,0\rangle+c_{3}|g,\omega_{L} \rangle+c_{5}|e,\omega_{L}\rangle+c_{4}|g,\omega_{H}\rangle+c_{6}|e,\omega_{H} \rangle+\cdots. \tag{5}\]
### State transfer: photon to atom
#### ii.1.1 Procedures for density matrix estimation
In order to demonstrate the atom-photon SWAP gate, we confirm the bidirectional state transfer between the atom and photon qubits. Here, we demonstrate the photon-to-atom state transfer. We denote the density matrix of the final atom qubit by \(\hat{\rho}_{\rm a}^{\rm(c)}\), where the superscript (c) implies that the input photon-qubit pulse is in a coherent state. The matrix elements of \(\hat{\rho}_{\rm a}^{\rm(c)}\) are given by \(\rho_{{\rm a},mn}^{\rm(c)}={\rm Tr}_{\rm p}\{\langle m|\psi_{f}\rangle\langle\psi_{f}|n\rangle\}\), where \(m,n(=g,e)\) specify the atomic state and \({\rm Tr}_{\rm p}\) takes the trace over the photonic states. From Eq. (5), \(\rho_{\rm a,{ee}}^{\rm(c)}\) and \(\rho_{\rm a,{eg}}^{\rm(c)}\) are given, up to the second order in \(|\alpha|\), by
\[\rho_{\rm a,{ee}}^{\rm(c)} \approx |\beta_{2}|^{2}+|\alpha|^{2}(|\gamma_{2}|^{2}-|\beta_{2}|^{2}), \tag{6}\] \[\rho_{\rm a,{eg}}^{\rm(c)} \approx \beta_{1}^{*}\beta_{2}+|\alpha|^{2}(\gamma_{1}^{*}\gamma_{2}- \beta_{1}^{*}\beta_{2}). \tag{7}\]
On the other hand, our target quantity here is the density matrix \(\hat{\rho}_{\rm a}^{\rm(s)}\) of the final atom qubit assuming the single-photon input. If the SWAP gate is performed with the single-photon input, the final atomic state is \(\gamma_{1}|g\rangle+\gamma_{2}|e\rangle\) [see Eq. (2)]. Therefore, \(\rho_{\rm a,{ee}}^{\rm(s)}=|\gamma_{2}|^{2}\) and \(\rho_{\rm a,{eg}}^{\rm(s)}=\gamma_{1}^{*}\gamma_{2}\).
Thus, we can reproduce the target density matrix \(\hat{\rho}_{\rm a}^{\rm(s)}\) from the measurable one \(\hat{\rho}_{\rm a}^{\rm(c)}\) by the following procedures: Setting the initial atomic state at \(|g\rangle\) [namely, \((\beta_{1},\beta_{2})=(1,0)\)], we perform the atom-photon SWAP gate and measure the final atomic density matrix elements \(\rho_{\rm a,{ee}}^{\rm(c)}\) and \(\rho_{\rm a,{eg}}^{\rm(c)}\). Putting \((\beta_{1},\beta_{2})=(1,0)\) in Eqs. (6) and (7), they are expected to behave as \(\rho_{\rm a,{ee}}^{\rm(c)}=|\alpha|^{2}|\gamma_{2}|^{2}\) and \(\rho_{\rm a,{eg}}^{\rm(c)}=|\alpha|^{2}\gamma_{1}^{*}\gamma_{2}\). Therefore, by varying the mean photon number \(|\alpha|^{2}\) and measuring the slopes of these quantities, two of the target density matrix elements are estimated as
\[\rho_{\rm a,{ee}}^{\rm(s)} = \frac{d}{d|\alpha|^{2}}\rho_{\rm a,{ee}}^{\rm(c)}, \tag{8}\] \[\rho_{\rm a,{eg}}^{\rm(s)} = \frac{d}{d|\alpha|^{2}}\rho_{\rm a,{eg}}^{\rm(c)}. \tag{9}\]
The other elements are determined by \(\rho_{\rm a,{ge}}^{\rm(s)}=(\rho_{\rm a,{eg}}^{\rm(s)})^{*}\) and \(\rho_{\rm a,{gg}}^{\rm(s)}=1-\rho_{\rm a,{ee}}^{\rm(s)}\).
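A short numerical sketch can illustrate this slope-based estimation. The code below (ours) assumes an ideal, lossless gate, truncates the coherent state at the single-photon level as in Eq. (4), applies the exchange \(c_{4}\leftrightarrow c_{5}\) of Eq. (5), traces out the photon, and fits the \(|\alpha|^{2}\) dependence as in Eqs. (8) and (9); the phase \(0.7\) and the list of photon numbers are arbitrary illustrative choices, not measured values.

```python
import numpy as np

def rho_atom_coherent(alpha, beta, gamma):
    """Final 2x2 atom matrix for a weak coherent-state input: coefficients
    of Eq. (4), gate action c4 <-> c5 of Eq. (5), trace over {0, w_L, w_H}."""
    b1, b2 = beta
    g1, g2 = gamma
    norm = np.exp(-abs(alpha)**2 / 2)
    c = norm * np.array([b1, b2, alpha*b1*g1, alpha*b2*g1,
                         alpha*b1*g2, alpha*b2*g2])
    c[3], c[4] = c[4], c[3]            # SWAP gate: c4 <-> c5
    M = c.reshape(3, 2).T              # rows: atom (g,e); cols: photon (0,L,H)
    return M @ M.conj().T              # partial trace over the photon

beta = np.array([1.0, 0.0])                          # atom prepared in |g>
gamma = np.array([1.0, np.exp(0.7j)]) / np.sqrt(2)   # equator photon qubit

ns = np.linspace(0.005, 0.03, 6)                     # mean photon numbers
ee = np.array([rho_atom_coherent(np.sqrt(n), beta, gamma)[1, 1].real for n in ns])
eg = np.array([rho_atom_coherent(np.sqrt(n), beta, gamma)[1, 0] for n in ns])

# Slopes with respect to |alpha|^2 estimate the single-photon result
print(np.polyfit(ns, ee, 1)[0])   # ~ |gamma_2|^2 = 0.5, up to O(|alpha|^2)
print(np.polyfit(ns, eg.real, 1)[0] + 1j * np.polyfit(ns, eg.imag, 1)[0])
# ~ gamma_1^* gamma_2 = 0.5 exp(0.7i)
```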
#### ii.1.2 Pulse sequence
The pulse sequence to measure the density matrix \(\hat{\rho}_{\rm a}^{\rm(c)}\) of the final atom qubit is shown in Fig. 3(a). The measurement is composed of three steps. (i) Initialization: we wait for complete de-excitation of the atom, applying no pulses. (ii) SWAP gate: from Port 2, we apply a drive pulse with a flat-top envelope at \(\omega_{\rm d}/2\pi=5.785\) GHz to the atom to implement an _impedance-matched_ \(\Lambda\) system [Fig. 2(b)]. Within the drive pulse duration, we input from Port 1 the photon-qubit pulse with a Gaussian envelope, which is dichromatic (\(\omega_{L}/2\pi=\omega_{42}/2\pi=10.208\) GHz and \(\omega_{H}/2\pi=\omega_{41}/2\pi=10.266\) GHz) in general. Note that the photon-qubit pulse in Fig. 3(a) does not have a clear Gaussian envelope because of its dichromatic nature. (iii) Tomography: we first apply a short control pulse (no pulse, a \(\pi\) pulse, or one of four kinds of \(\pi/2\) pulses) with a Gaussian envelope at \(\omega_{\rm ge}/2\pi=5.839\) GHz to the atom and then dispersively read out the atomic state with a rectangular pulse at \(\omega_{\rm r}/2\pi=10.258\) GHz.
#### ii.1.3 Results and discussion
The measured density matrix elements \(\rho_{\rm a,{ee}}^{\rm(c)}\) and \(\rho_{\rm a,{eg}}^{\rm(c)}\) of the final atom qubit are shown in Fig. 3(b), where the initial atom (photon) qubit is in the ground (equator) state, namely, \((\beta_{1},\beta_{2})=(1,0)\) and \((\gamma_{1},\gamma_{2})=(1,e^{i\theta})/\sqrt{2}\). We observe that \(\rho_{\rm a,{ee}}^{\rm(c)}\) is independent of the phase \(\theta\) of the initial photon qubit and increases in proportion to \(|\alpha|^{2}\), and that \(\rho_{\rm a,{eg}}^{\rm(c)}\) is an oscillating function of \(\theta\) whose amplitude grows with increasing \(|\alpha|^{2}\). These observations are in qualitative accordance with Eqs. (6) and (7), which predict that \(\rho_{\rm a,{ee}}^{\rm(c)}=|\alpha|^{2}/2\) and \(\rho_{\rm a,{eg}}^{\rm(c)}=|\alpha|^{2}e^{i\theta}/2\). These results indicate that the phase information of the initial photon qubit is successfully transferred to the final atom qubit.
In Fig. 3(c), we present the density matrix \(\hat{\rho}_{\rm a}^{\rm(s)}\) of the final atom qubit assuming the single-photon input, estimated by the aforementioned procedures. More details on the estimation are presented in Appendix A. The initial photon qubit is in one of the six cardinal states \([|\omega_{L}\rangle\), \(|\omega_{H}\rangle\), and \((|\omega_{L}\rangle+e^{in\pi/4}|\omega_{H}\rangle)/\sqrt{2}\) for \(n=0,\cdots,3\)]. The agreement
between the initial photon and final atom qubits is fairly good and the averaged fidelity for the six cardinal states reaches 0.829. The principal origin of the infidelity would be the short \(T_{1}\) (\(\sim\) 0.9 \(\mu\)s) of the superconducting atom, which is comparable to the time required for the state tomography of the final atom qubit. An exceptionally high fidelity is attained when the initial photon qubit is in \(|\omega_{L}\rangle\) [first panel in Fig. 3(c)]. This is because the atom remains in the ground state (\(|g,0\rangle\approx|\widetilde{1}\rangle\)) throughout the gate operation and is unaffected by the short \(T_{1}\).
### State transfer: atom to photon
#### iii.2.1 Procedures for density matrix estimation
Here, we demonstrate the atom-to-photon state transfer. More concretely, from the amplitudes of the final photon-qubit pulse (after reflection in Port 1) for the coherent-state input, we estimate the density matrix \(\hat{\rho}_{\rm p}^{(\rm s)}\) of the final
Figure 3: Photon-to-atom state transfer. (a) Pulse sequence depicted in the intermediate frequencies (IFs) and the energy diagrams before, during, and after the SWAP gate. In the energy diagrams, the initial atom (photon) qubit is assumed to be in the polar (equator) state, namely, \((\beta_{1},\beta_{2})=(1,0)\) and \((\gamma_{1},\gamma_{2})=(1,e^{i\theta})/\sqrt{2}\). (b) Measured density matrix elements (\(\rho_{{\rm a},ee}^{(\rm c)}\), \({\rm Re}\rho_{{\rm a},eg}^{(\rm c)}\), \({\rm Im}\rho_{{\rm a},eg}^{(\rm c)}\)) of the final atom qubit for the coherent-state input. The initial atom (photon) qubit is in the ground (equator) state, and the phase \(\theta\) of the photon qubit is varied continuously. The mean photon number \(|\alpha|^{2}\) in the photon-qubit pulse is indicated. (c) Estimated density matrix \(\hat{\rho}_{\rm a}^{(\rm s)}\) of the final atom qubit assuming the single-photon input. Positive, negative, and zero matrix elements are drawn in red, blue, and black dotted lines, respectively. The fidelity to the initial photon qubit is indicated.
photon qubit assuming the single-photon input. The final amplitude \(\xi(t)\) is given by \(\xi(t)=\langle\psi_{f}|\hat{a}|\psi_{f}\rangle\), where \(\hat{a}\) is the annihilation operator for a propagating photon in Port 1. Using Eq. (5), \(\xi(t)\) is given, up to the first order in \(|\alpha|\), by
\[\xi(t)\ =\ \alpha(|\beta_{1}|^{2}\gamma_{1}+\beta_{1}\beta_{2}^{*}\gamma_{2}) \psi_{L}(t)+\alpha(\beta_{1}^{*}\beta_{2}\gamma_{1}+|\beta_{2}|^{2}\gamma_{2} )\psi_{H}(t), \tag{10}\]
where \(\psi_{L(H)}(t)=\langle 0|\hat{a}|\omega_{L(H)}\rangle\) is the single-photon amplitude of the lower (higher) frequency component. When the initial photon-qubit pulse is monochromatic at \(\omega_{L}\), the final amplitude \(\xi_{L}(t)\) is given, by putting \((\gamma_{1},\gamma_{2})=(1,0)\) in Eq. (10), by
\[\xi_{L}(t)\ =\ \alpha|\beta_{1}|^{2}\psi_{L}(t)+\alpha\beta_{1}^{*}\beta_{2} \psi_{H}(t). \tag{11}\]
This equation implies that the initial monochromatic pulse may become dichromatic after reflection, depending on the initial atomic state. However, when the atom is in the ground state initially, the final pulse remains monochromatic at \(\omega_{L}\). We denote its amplitude by \(\zeta_{L}(t)\). Putting \((\beta_{1},\beta_{2})=(1,0)\) in Eq. (11), we have
\[\zeta_{L}(t)\ =\ \alpha\psi_{L}(t). \tag{12}\]
Similarly, when the initial pulse is monochromatic at \(\omega_{H}\), the final amplitude \(\xi_{H}(t)\) is given by
\[\xi_{H}(t)\ =\ \alpha\beta_{1}\beta_{2}^{*}\psi_{L}(t)+\alpha|\beta_{2}|^{2} \psi_{H}(t). \tag{13}\]
\(\zeta_{H}(t)\) is the result for the initial atom being in the excited state. Putting \((\beta_{1},\beta_{2})=(0,1)\) in Eq. (13), we have
\[\zeta_{H}(t)\ =\ \alpha\psi_{H}(t). \tag{14}\]
We denote the overlap integral between \(\zeta_{i}(t)\) and \(\xi_{j}(t)\) (\(i,j=L,H\)) by
\[\eta_{ij}=\int dt\zeta_{i}^{*}(t)\xi_{j}(t). \tag{15}\]
Since \(\psi_{L}(t)\) and \(\psi_{H}(t)\) are orthogonal to each other due to the different carrier frequencies, we obtain \(\eta_{LL}=C|\alpha|^{2}|\beta_{1}|^{2}\), \(\eta_{HH}=C|\alpha|^{2}|\beta_{2}|^{2}\), \(\eta_{LH}=C|\alpha|^{2}\beta_{1}\beta_{2}^{*}\), and \(\eta_{HL}=C|\alpha|^{2}\beta_{1}^{*}\beta_{2}\), where \(C=\int dt|\psi_{j}(t)|^{2}\) (\(j=L,H\)).
On the other hand, our target quantity here is the density matrix \(\hat{\rho}_{\rm p}^{(\rm s)}\) of the final photon qubit assuming the single-photon input. From the right-hand side of Eq. (2), we immediately have \(\hat{\rho}_{\rm p}^{(\rm s)}=(\beta_{1}|\omega_{L})+\beta_{2}|\omega_{H}))( \beta_{1}^{*}\langle\omega_{L}|+\beta_{2}^{*}\langle\omega_{H}|\). Therefore, the following 2\(\times\)2 matrix,
\[\hat{\eta}\ =\ \frac{1}{\eta_{LL}+\eta_{HH}}\left(\begin{array}{cc}\eta_{ LL}&\eta_{LH}\\ \eta_{HL}&\eta_{HH}\end{array}\right), \tag{16}\]
is identical to the target density matrix \(\hat{\rho}_{\rm p}^{(\rm s)}\) in principle.
Thus, we can construct the target density matrix \(\hat{\rho}_{\rm p}^{(\rm s)}\) from the measured amplitudes of the final photon-qubit pulse by the following procedures: Preliminarily, setting the initial atom-qubit state at \(|g\rangle\) (\(|e\rangle\)), we apply a monochromatic photon-qubit pulse at \(\omega_{L}\) (\(\omega_{H}\)) and measure the final amplitude \(\zeta_{L}(t)\) [\(\zeta_{H}(t)\)]. Then, for an arbitrary initial atom-qubit state, we apply a monochromatic pulse at \(\omega_{L}\) (\(\omega_{H}\)) and measure the final amplitude \(\xi_{L}(t)\) [\(\xi_{H}(t)\)]. We construct a 2\(\times\)2 matrix \(\hat{\eta}\) from the overlap integrals between these output amplitudes [Eqs. (15) and (16)]. \(\hat{\eta}\) is identical to the target density matrix \(\hat{\rho}_{\rm p}^{(\rm s)}\) in principle, but is non-Hermitian in practice (see Table 2). We estimate a proper one by the protocol presented in Appendix B.
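As a numerical illustration of this construction, the following sketch (ours) models \(\psi_{L}\) and \(\psi_{H}\) as Gaussian pulses of length \(t_{\rm p}=100\) ns with two carrier frequencies split by an assumed 58 MHz (an illustrative number, not the device value), verifies their orthogonality, and checks that for an ideal gate the normalized \(\hat{\eta}\) of Eq. (16) reproduces the initial atom-qubit density matrix.

```python
import numpy as np

t = np.linspace(-300e-9, 300e-9, 20001)            # time grid (s)
t_p = 100e-9
env = np.exp(-t**2 / (2 * (t_p / 2.355)**2))       # Gaussian, FWHM = t_p
psi_L = env                                        # lower-carrier amplitude
psi_H = env * np.exp(-2j * np.pi * 58e6 * t)       # assumed 58 MHz splitting

def overlap(a, b):                                 # overlap integral, Eq. (15)
    return np.sum(a.conj() * b) * (t[1] - t[0])

# Orthogonality of the two carriers (ratio to the self-overlap is ~0)
print(abs(overlap(psi_L, psi_H)) / abs(overlap(psi_L, psi_L)))

b1, b2 = 1 / np.sqrt(2), np.exp(0.3j) / np.sqrt(2)  # initial atom qubit
alpha = 0.3
zeta_L, zeta_H = alpha * psi_L, alpha * psi_H                     # Eqs. (12), (14)
xi_L = alpha * (abs(b1)**2 * psi_L + np.conj(b1) * b2 * psi_H)    # Eq. (11)
xi_H = alpha * (b1 * np.conj(b2) * psi_L + abs(b2)**2 * psi_H)    # Eq. (13)

eta = np.array([[overlap(zeta_L, xi_L), overlap(zeta_L, xi_H)],
                [overlap(zeta_H, xi_L), overlap(zeta_H, xi_H)]])
eta /= np.trace(eta)                               # normalization of Eq. (16)
print(np.round(eta, 3))                            # ~ outer([b1,b2], conj([b1,b2]))
```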
#### iii.1.2 Pulse sequence
The pulse sequence to measure the amplitude \(\xi(t)\) of the final photon-qubit pulse is shown in Fig. 4(a). The measurement is composed of three steps. (i) Initialization: we wait for complete de-excitation of the atom. We then apply a control pulse with a Gaussian envelope at \(\omega_{\rm ge}/2\pi=5.835\) GHz from Port 2 to the atom to prepare it in one of the six cardinal states. (ii) SWAP gate: from Port 2, we apply a drive pulse with a flat-top envelope at \(\omega_{\rm d}/2\pi=5.775\) GHz to the atom to constitute an impedance-matched \(\Lambda\) system [Fig. 2(b)]. Within the pulse duration, we input from Port 1 a weak monochromatic (at \(\omega_{L}/2\pi=10.201\) GHz or \(\omega_{H}/2\pi=10.263\) GHz) photon-qubit pulse with a Gaussian envelope. (iii) Measurement: we measure the amplitude of the reflected photon-qubit
pulse in Port 1, which is dichromatic in general. Note that \(\omega_{L}\) and \(\omega_{H}\) differ slightly from those in the "photon-to-atom" experiment, since the experiment was performed in a different cooldown of our dilution refrigerator (see Table 3 for details on the experimental parameters).
All microwave pulses in Fig. 4(a) are generated by single-sideband modulation and are shown in the intermediate frequencies (IFs), similarly to those in Fig. 3(a). A carrier microwave at \(\omega_{\rm c}/2\pi=10.308\) GHz and a Gaussian envelope with IF = \(\delta\omega_{H}/2\pi=0.045\) GHz (\(\delta\omega_{L}/2\pi=0.107\) GHz) are mixed by an IQ mixer, yielding the photon pulse with \(\omega_{H}=\omega_{\rm c}-\delta\omega_{H}\) (\(\omega_{L}=\omega_{\rm c}-\delta\omega_{L}\)). The final photon-qubit pulses after reflection by the atom are measured by an analog-to-digital converter after being down-converted at \(\omega_{\rm c}\) in order to extract the signals at \(\delta\omega_{H}\) (\(\delta\omega_{L}\)), as shown in the left panels of Figs. 4(b) and (c). The frequency-domain plots [right panels in Figs. 4(b) and (c)] are obtained by applying the fast Fourier transform (FFT) to the time-domain plots.
Figure 4: Atom-to-photon state transfer. (a) Pulse sequence depicted in the intermediate frequencies (IFs) and the energy diagrams before, during, and after the SWAP gate. In the energy diagrams, the initial atom (photon) qubit is assumed to be in the equator (polar) state, namely, \((\beta_{1},\beta_{2})=(1,e^{i\theta})/\sqrt{2}\) and \((\gamma_{1},\gamma_{2})=(0,1)\). (b) Measured amplitude of the output photon-qubit pulse for its initial frequency set at \(\omega_{H}\): time domain (left) and frequency domain (right). The initial atom-qubit state (ground, equator, or excited) is indicated. Since the four equator states yield similar spectra, only one of them is plotted. The mean photon number in the input pulse is \(|\alpha|^{2}=0.101\). The time-domain amplitudes are measured by an analog-to-digital converter and averaged over \(1.5\times 10^{4}\) repetitions. (c) The same plots as (b) for the initial frequency of the photon-qubit pulse set at \(\omega_{L}\). (d) Estimated density matrix \(\hat{\rho}_{\rm p}^{(\rm s)}\) of the final photon qubit assuming the single-photon input. Positive (negative) values are indicated by red (blue) bars. The fidelity to the initial atom qubit is indicated.
Results and discussions
In Fig. 4(b), we plot the final amplitudes of the photon-qubit pulse in the time [\(\xi_{H}(t)\), Eq. (13)] and frequency [\(\widetilde{\xi}_{H}(\omega)\), Fourier transform of \(\xi_{H}(t)\)] domains, fixing its initial frequency at \(\omega_{H}\) and varying the initial atomic state. Predictions by Eq. (13) are as follows. (i) Putting \((\beta_{1},\beta_{2})=(0,1)\), we have \(\xi_{H}(t)=\alpha\psi_{H}(t)\). Namely, when the atom is in the excited state initially, the final amplitude is monochromatic at \(\omega_{H}\), unchanged from the initial one. (ii) Putting \((\beta_{1},\beta_{2})=(1,e^{i\theta})/\sqrt{2}\), we have \(\xi_{H}(t)=(\alpha/2)[e^{-i\theta}\psi_{L}(t)+\psi_{H}(t)]\). Namely, when the atom is in the equator state initially, the final amplitude is dichromatic at \(\omega_{L}\) and \(\omega_{H}\) with equal magnitudes. (iii) Putting \((\beta_{1},\beta_{2})=(1,0)\), we have \(\xi_{H}(t)=0\). Namely, when the atom is in the ground state initially, the final amplitude vanishes. We observe that the measured amplitudes in Fig. 4(b) are in qualitative agreement with these predictions. However, we also find discrepancies from these predictions, such as the non-vanishing signal at \(\omega_{H}\) for the initial atom in \(|g\rangle\) and the appearance of the \(\omega_{L}\) component for the initial atom in \(|e\rangle\). We attribute the former discrepancy mainly to the imperfect constitution of an impedance-matched \(\Lambda\) system [namely, a difference in the \(|\widetilde{4}\rangle\rightarrow|\widetilde{1}\rangle\) and \(|\widetilde{4}\rangle\rightarrow|\widetilde{2}\rangle\) decay rates in Fig. 2(b)] and the latter to the imperfect initialization of the atom qubit, both of which originate in fluctuations of the transition frequency \(\omega_{\rm ge}\) of the superconducting atom. Note that the drastic attenuation of the final amplitude in (iii) does not imply a decrease in the number of reflected photons in Port 1: the input photon is mostly down-converted to \(\omega_{L}\), but its amplitude is unobservable in this experiment due to the inelasticity of the scattering. This quantum process (single-photon Raman interaction [30; 31; 32; 33; 34; 35; 36; 37]) has been applied for single microwave photon detection [16; 17]. Figure 4(c) shows the results for the initial frequency of the photon-qubit pulse tuned to \(\omega_{L}\). The results are contrastive to those in Fig. 4(b) and are in qualitative accordance with Eq. (11).
In Fig. 4(d), we present the density matrix \(\hat{\rho}_{\rm p}^{(\rm s)}\) of the final photon qubit assuming the single-photon input, estimated by the aforementioned procedures. More details on the estimation are presented in Appendix B. The fidelity to the initial atom qubit, which is prepared in one of the six cardinal states, is fairly good, and the averaged fidelity reaches \(0.801\). An exceptionally high fidelity is attained when the initial atom is in \(|g\rangle\) [first panel in Fig. 4(d)], because the atom remains in the ground state throughout the gate operation and is unaffected by the short \(T_{1}(\sim 0.9\)\(\mu\)s) of the superconducting atom. We observe that, when the initial atom qubit is in the equator states, the fidelities become substantially lower than those for the photon-to-atom state transfer [Fig. 3(c)]. Presently, we do not fully understand the reason for this. One possible reason might be that the amplitude of the frequency-converted component [the \(\omega_{L/H}\) component in the right panel of Fig. 4(b/c)], which becomes observable only when the initial atom is in a superposition state, is more fragile against pure dephasing than the amplitude of the unconverted component.
## IV Conclusion
We have demonstrated a deterministic SWAP gate between a superconducting qubit and a frequency-encoded microwave-photon qubit. More concretely, we have confirmed the bidirectional (photon-to-atom and atom-to-photon) transfer of the qubit state. The photon qubit for this gate is a single-photon pulse propagating in a waveguide, but we used a weak coherent-state pulse instead for demonstration.
To confirm the photon-to-atom qubit transfer, we applied a monochromatic or dichromatic photon-qubit pulse, which corresponds to one of the six cardinal states of the photon qubit, to the dressed atom-resonator coupled system (impedance-matched \(\Lambda\) system). After reflection of this pulse, we performed a state tomography of the final atom qubit. From the dependencies of the density matrix elements on the mean input photon number, we constructed the density matrix of the final atom qubit assuming the single-photon input. The fidelity to the initial photon qubit reaches \(0.829\) on average. On the other hand, to confirm the atom-to-photon qubit transfer, we prepared the initial atom qubit to be in one of the six cardinal states and applied a monochromatic photon-qubit pulse to the \(\Lambda\) system. From the measured amplitudes of the final photon-qubit pulse, we constructed the density matrix of the final photon qubit assuming the single-photon input. The fidelity to the initial atom qubit reaches \(0.801\) on average.
Although the fidelities of the qubit-state transfer achieved here are still insufficient for practical applications, the principal reason for these infidelities is the short lifetime of the superconducting atom, which can readily be overcome with current qubit fabrication technology. We hope that the present scheme for the atom-photon SWAP gate, equipped with distinct merits such as the simplicity of the setup and the _in-situ_ gate tunability, will facilitate distributed quantum computation with superconducting qubits in the near future.
###### Acknowledgements.
The authors are grateful to T. Shitara, S. Masuda, and A. Noguchi for fruitful discussions. This work was supported by JST Moonshot R&D (JPMJMS2061-2-1-2, JPMJMS2062-10, JPMJMS2067-3), JSPS KAKENHI (22K03494) and JST PRESTO (JPMJPR1761).
## Appendix A Density matrix estimation: photon to atom
Here, we present the details on the density matrix estimation for the photon-to-atom state transfer [Fig. 3(c)].
### State tomography of \(\hat{\rho}_{\rm a}^{\rm(c)}\)
We first discuss how to determine the density matrix \(\hat{\rho}_{\rm a}^{\rm(c)}\) of the final atom qubit from the measurement data. In the tomography stage of Fig. 3(a), we first perform one of the six kinds of one-qubit gates to the atom. The unitary matrices corresponding to these gates are
\[\hat{U}_{j} = \begin{cases}\hat{I}&(j=1)\\ \hat{\sigma}_{x}&(j=2)\\ (\hat{I}-i\cos(\frac{j\pi}{2})\hat{\sigma}_{x}-i\sin(\frac{j\pi}{2})\hat{ \sigma}_{y})/\sqrt{2}&(j=3,\cdots,6)\end{cases}. \tag{10}\]
We measure the excitation probability of the atom by dispersive readout. We denote the _measured_ probability after the \(j\)th gate by \(\widetilde{p}_{j}\). We estimate the atomic density matrix \(\hat{\rho}_{\rm a}^{\rm(c)}\) from the measurement data set \((\widetilde{p}_{1},\cdots,\widetilde{p}_{6})\).
We parameterize \(\hat{\rho}_{\rm a}^{\rm(c)}\) as
\[\hat{\rho}_{\rm a}^{\rm(c)} = (\hat{I}+a_{x}\hat{\sigma}_{x}+a_{y}\hat{\sigma}_{y}+a_{z}\hat{ \sigma}_{z})/2, \tag{11}\]
where \(a_{x}\), \(a_{y}\), and \(a_{z}\) are the Bloch vector components, which are real and satisfy \(a_{x}^{2}+a_{y}^{2}+a_{z}^{2}\leq 1\). With this density matrix, the _expected_ excitation probability \(p_{j}\) after \(j\)th gate is given by \(p_{j}={\rm Tr}\{(\hat{1}+\hat{\sigma}_{z})\hat{U}_{j}\hat{\rho}_{\rm a}^{\rm( c)}\hat{U}_{j}^{\dagger}\}/2\). Using Eqs. (10) and (11), we have \(p_{1}=(1+a_{z})/2\), \(p_{2}=(1-a_{z})/2\), \(p_{3}=(1+a_{x})/2\), \(p_{4}=(1+a_{y})/2\), \(p_{5}=(1-a_{x})/2\), and \(p_{6}=(1-a_{y})/2\). We determine the parameters \(a_{x}\), \(a_{y}\), and \(a_{z}\) so as to minimize the sum of squared errors,
\[S(a_{x},a_{y},a_{z}) = \sum_{j=1}^{6}(p_{j}-\widetilde{p}_{j})^{2}. \tag{12}\]
This is rewritten as \(S(a_{x},a_{y},a_{z})=(a_{x}-\overline{a}_{x})^{2}+(a_{y}-\overline{a}_{y})^{2 }+(a_{z}-\overline{a}_{z})^{2}+\cdots\), where
\[\overline{a}_{x} = \widetilde{p}_{3}-\widetilde{p}_{5}, \tag{13}\] \[\overline{a}_{y} = \widetilde{p}_{4}-\widetilde{p}_{6},\] (14) \[\overline{a}_{z} = \widetilde{p}_{1}-\widetilde{p}_{2}. \tag{15}\]
Therefore, if the point \({\rm P}(\overline{a}_{x},\overline{a}_{y},\overline{a}_{z})\) is inside of the unit sphere, \(S\) is minimized at this point. In contrast, if the point \({\rm P}\) is out of the unit sphere, \(S\) is minimized at the projection of point \({\rm P}\) to the unit-sphere surface in the radial direction. Therefore,
\[(a_{x},a_{y},a_{z}) = \frac{(\overline{a}_{x},\overline{a}_{y},\overline{a}_{z})}{\max \left(1,\sqrt{\overline{a}_{x}^{2}+\overline{a}_{y}^{2}+\overline{a}_{z}^{2}} \right)}. \tag{16}\]
In Table 1, setting the initial photon qubit at \(|\psi_{\rm p}\rangle=(|\omega_{L}\rangle-i|\omega_{H}\rangle)/\sqrt{2}\) as an example, we present the measurement data set \((\widetilde{p}_{1},\cdots,\widetilde{p}_{6})\) and the estimated Bloch vector components for various input photon numbers \(|\alpha|^{2}\).
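A compact implementation of this least-squares-plus-projection step might look as follows (our sketch; the probabilities below are hypothetical numbers of the right magnitude, not the Table 1 data).

```python
import numpy as np

def bloch_from_probs(p):
    """Bloch vector from six measured excitation probabilities
    (p1,...,p6), following the least-squares solution above, with a
    radial projection onto the unit sphere when needed."""
    p1, p2, p3, p4, p5, p6 = p
    a = np.array([p3 - p5, p4 - p6, p1 - p2])
    return a / max(1.0, np.linalg.norm(a))

# Hypothetical data roughly consistent with the state (|g> - i|e>)/sqrt(2)
probs = [0.52, 0.48, 0.50, 0.18, 0.50, 0.82]
print(bloch_from_probs(probs))      # ~ (0.00, -0.64, 0.04)
```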
### Estimation of \(\hat{\rho}_{\rm a}^{\rm(s)}\) from \(\hat{\rho}_{\rm a}^{\rm(c)}\)
Here, we discuss how to estimate the atomic density matrix \(\hat{\rho}_{\rm a}^{\rm(s)}\) assuming the single-photon input from the one \(\hat{\rho}_{\rm a}^{\rm(c)}\) for the coherent-state input. Similarly to Eq. (10), we parametrize the target density matrix as
\[\hat{\rho}_{\rm a}^{\rm(s)} = (\hat{I}+b_{x}\hat{\sigma}_{x}+b_{y}\hat{\sigma}_{y}+b_{z}\hat{ \sigma}_{z})/2. \tag{12}\]
Then, from Eqs. (8) and (9), we obtain
\[b_{x} = \frac{da_{x}}{d|\alpha|^{2}}, \tag{13}\] \[b_{y} = \frac{da_{y}}{d|\alpha|^{2}},\] (14) \[b_{z} = \frac{da_{z}}{d|\alpha|^{2}}-1. \tag{15}\]
Therefore, we can estimate \(b_{x}\) from the dependence of \(a_{x}\) on the mean input photon number \(|\alpha|^{2}\). Assuming a linear dependence and employing the least-squares method, we determine the slope \(\overline{b}_{x}\) from the data set \(\{|\alpha|^{2(j)},a_{x}^{(j)}\}\) for \(j=1,\cdots,N\), where \(N\) is the number of data points. \(\overline{b}_{x}\) is then given by
\[\overline{b}_{x} = \frac{NC_{2}-C_{3}C_{4}}{NC_{1}-C_{3}^{2}}, \tag{16}\]
where \(C_{1}=\sum_{j}(|\alpha|^{2(j)})^{2}\), \(C_{2}=\sum_{j}|\alpha|^{2(j)}a_{x}^{(j)}\), \(C_{3}=\sum_{j}|\alpha|^{2(j)}\), and \(C_{4}=\sum_{j}a_{x}^{(j)}\). \(\overline{b}_{y}\) and \(\overline{b}_{z}\) are determined similarly. If the point \({\rm Q}(\overline{b}_{x},\overline{b}_{y},\overline{b}_{z})\) is outside of the unit sphere, we project this point to the unit-sphere surface in the radial direction. Therefore,
\[(b_{x},b_{y},b_{z}) = \frac{(\overline{b}_{x},\overline{b}_{y},\overline{b}_{z})}{\max \left(1,\sqrt{\overline{b}_{x}^{2}+\overline{b}_{y}^{2}+\overline{b}_{z}^{2}} \right)}. \tag{17}\]
From the data set presented in Table 1, we have \(b_{x}=-0.0032\), \(b_{y}=-0.6459\), and \(b_{z}=-0.3074\). Accordingly, \(\rho_{\rm a,ee}^{\rm(s)}=(1+b_{z})/2=0.3463\) and \(\rho_{\rm a,eg}^{\rm(s)}=(b_{x}+ib_{y})/2=-0.0016-0.3229i\) [the last panel in Fig. 3(c)].
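The slope formula above is the standard normal-equation solution for a straight-line fit; a minimal sketch (ours, with hypothetical data) is given below and cross-checked against numpy's polynomial fit.

```python
import numpy as np

def slope(n, a):
    """Least-squares slope of a versus n via the C1..C4 sums above."""
    n, a = np.asarray(n, float), np.asarray(a, float)
    N = len(n)
    C1, C2, C3, C4 = np.sum(n**2), np.sum(n * a), np.sum(n), np.sum(a)
    return (N * C2 - C3 * C4) / (N * C1 - C3**2)

n_j  = [0.05, 0.10, 0.15, 0.20]          # mean photon numbers |alpha|^2
ay_j = [-0.032, -0.065, -0.097, -0.129]  # hypothetical a_y values

print(slope(n_j, ay_j))                  # estimate of b_y (~ -0.65 here)
print(np.polyfit(n_j, ay_j, 1)[0])       # same slope from numpy
# For b_z, subtract 1 from the fitted slope of a_z, as derived above.
```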
## Appendix B Density matrix estimation: atom to photon
Here, we present the details on the density matrix estimation for the atom-to-photon state transfer [Fig. 4(d)].
### Estimation of \(\hat{\rho}_{\rm p}^{\rm(s)}\) from \(\hat{\eta}\)
According to the arguments in Sec. III.2, the matrix \(\hat{\eta}\) constructed directly from the experimental data [Eqs. (15) and (16)] is, in principle, identical to the target density matrix \(\hat{\rho}_{\rm p}^{\rm(s)}\). However, as we observe in Table 2, \(\hat{\eta}\) is non-Hermitian in practice. We therefore estimate a proper density matrix \(\hat{\rho}_{\rm p}^{\rm(s)}\) from \(\hat{\eta}\) by the following procedures.
Similarly to Eq. (10), we parameterize the proper density matrix as
\[\hat{\rho}_{\rm p}^{\rm(s)} = (\hat{I}+c_{x}\hat{\sigma}_{x}+c_{y}\hat{\sigma}_{y}+c_{z}\hat{ \sigma}_{z})/2, \tag{18}\]
where \(c_{x}\), \(c_{y}\), and \(c_{z}\) are real and satisfy \(c_{x}^{2}+c_{y}^{2}+c_{z}^{2}\leq 1\). We choose \(c_{x}\), \(c_{y}\) and \(c_{z}\) so as to minimize the distance \(L\) between \(\hat{\eta}\) and \(\hat{\rho}_{\rm p}^{\rm(s)}\), which we quantify by
\[L(c_{x},c_{y},c_{z})\ =\ \sum_{j=x,y,z}|\langle\hat{\sigma}_{j}\rangle_{\eta}- \langle\hat{\sigma}_{j}\rangle_{\rho}|^{2}, \tag{10}\]
where \(\langle\hat{\sigma}_{j}\rangle_{\eta}={\rm Tr}\{\hat{\sigma}_{j}\hat{\eta}\}\) and \(\langle\hat{\sigma}_{j}\rangle_{\rho}={\rm Tr}\{\hat{\sigma}_{j}\hat{\rho}_{\rm p}^{\rm(s)}\}\). Since \(\langle\hat{\sigma}_{x}\rangle_{\eta}=\eta_{LH}+\eta_{HL}\), \(\langle\hat{\sigma}_{y}\rangle_{\eta}=i(\eta_{LH}-\eta_{HL})\), \(\langle\hat{\sigma}_{z}\rangle_{\eta}=\eta_{LL}-\eta_{HH}\), \(\langle\hat{\sigma}_{x}\rangle_{\rho}=c_{x}\), \(\langle\hat{\sigma}_{y}\rangle_{\rho}=c_{y}\), and \(\langle\hat{\sigma}_{z}\rangle_{\rho}=c_{z}\), Eq. (10) is rewritten as \(L(c_{x},c_{y},c_{z})=(c_{x}-\overline{c}_{x})^{2}+(c_{y}-\overline{c}_{y})^{2}+(c_{z}-\overline{c}_{z})^{2}+\cdots\), where
\[\overline{c}_{x} = {\rm Re}(\eta_{LH}+\eta_{HL}), \tag{11}\] \[\overline{c}_{y} = {\rm Im}(\eta_{HL}-\eta_{LH}),\] (12) \[\overline{c}_{z} = {\rm Re}(\eta_{LL}-\eta_{HH}). \tag{13}\]
Therefore, if the point \({\rm R}(\overline{c}_{x},\overline{c}_{y},\overline{c}_{z})\) is inside of the unit sphere, \(L\) is minimized at this point. On the other hand, if the point \({\rm R}\) is out of the unit sphere, \(L\) is minimized at the projection of point \({\rm R}\) to the unit sphere. Therefore,
\[(c_{x},c_{y},c_{z})\ =\ \frac{(\overline{c}_{x},\overline{c}_{y},\overline{c}_ {z})}{\max\left(1,\sqrt{\overline{c}_{x}^{2}+\overline{c}_{y}^{2}+\overline{c }_{z}^{2}}\right)}. \tag{14}\]
In Table 2, we present the matrix elements of \(\hat{\eta}\) and \(\hat{\rho}_{\rm p}^{\rm(s)}\) for various input photon numbers \(|\alpha|^{2}\). The initial atom-qubit state is chosen as \(|\psi_{\rm a}\rangle=(|g\rangle-i|e\rangle)/\sqrt{2}\) as an example, and the fidelity is that between the initial atom and final photon qubits, \(F=\langle\psi_{\rm a}|\hat{\rho}_{\rm p}^{\rm(s)}|\psi_{\rm a}\rangle\). We observe that the estimated density matrix is mostly insensitive to the input photon number \(|\alpha|^{2}\). In Fig. 4(d), we employ the density matrix averaged over the four cases as \(\hat{\rho}_{\rm p}^{\rm(s)}\).
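The projection above is easy to implement; the sketch below (ours) converts a hypothetical non-Hermitian \(\hat{\eta}\), already normalized to unit trace, into the closest physical qubit density matrix.

```python
import numpy as np

def proper_density_matrix(eta):
    """Physical qubit state from a (possibly non-Hermitian) unit-trace eta:
    extract (c_x, c_y, c_z) as above, then project into the Bloch ball."""
    c = np.array([(eta[0, 1] + eta[1, 0]).real,    # c_x = Re(eta_LH + eta_HL)
                  (eta[1, 0] - eta[0, 1]).imag,    # c_y = Im(eta_HL - eta_LH)
                  (eta[0, 0] - eta[1, 1]).real])   # c_z = Re(eta_LL - eta_HH)
    c /= max(1.0, np.linalg.norm(c))
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1.0 + 0j, -1.0])
    return (np.eye(2) + c[0]*sx + c[1]*sy + c[2]*sz) / 2

eta = np.array([[0.51, 0.02 + 0.44j],
                [0.05 - 0.48j, 0.49]])             # hypothetical measured eta
rho = proper_density_matrix(eta)
print(np.round(rho, 3), np.linalg.eigvalsh(rho))   # Hermitian, eigenvalues >= 0
```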
## Appendix C Experimental information
### Experimental setup
Figure 5 shows a schematic of the measurement setup composed of room-temperature microwave instruments and low-temperature wirings with microwave components in a cryogen-free \({}^{3}\)He/\({}^{4}\)He dilution refrigerator.
The photon-qubit (drive) pulses applied to Port 1 (2) are generated by mixing the continuous microwave from the RF source 1 (2) with pulses at an intermediate frequency (IF) generated by a DAC (digital-to-analog converter) with a sampling rate of 1 GHz. Gaussian pulses are used for the photon-qubit pulses, while flat-top pulses, whose rising and falling edges are smoothed in voltage amplitude by a Gaussian function with a full width at half maximum of 40 ns, are employed for the drive pulses.
The signal pulses are heavily attenuated by a series of attenuators implemented in the input microwave semi-rigid cable, with a total attenuation of 68 dB, and applied to the \(\lambda/2\) resonator through a circulator that separates the input and reflected signals. The reflected signal is led to a cryogenic HEMT amplifier mounted at the 4 K stage of the dilution refrigerator via low-pass and band-pass filters and three circulators with 50 \(\Omega\) terminations, amplified by \(\sim 38\) dB, and further amplified by a room-temperature low-noise amplifier whose gain is \(\sim 33\) dB. The output signal is down-converted to the IF at an IQ mixer using a continuous microwave split from the same signal used for the photon-qubit pulse generation. The I component of the reflected signals is sampled at 1.6 GS/s by an ADC (analog-to-digital converter).
The drive pulses are applied through the input microwave semi-rigid cable with a total attenuation of 48 dB to control the states of an atom.
### Device and parameters
The device employed in our experiments is composed of a \(\lambda/2\) superconducting coplanar waveguide resonator and a superconducting flux qubit containing three Josephson junctions [Fig. 1(b)]. They are coupled capacitively and operated in the dispersive regime. We adopted the same design and fabrication processes as for the device described in Refs. [17; 34].
In Table 3, we summarize the system parameters for the SWAP experiments described in the main text. Since each SWAP experiment was performed in a different cooldown of our dilution refrigerator, \(\omega_{\rm ge}\) and the related parameters (\(\chi\), \(\omega_{\rm d}\), \(\omega_{L}\) and \(\omega_{H}\)) changed slightly. The other parameters (\(\omega_{\rm r}\) and \(\kappa\)) are independent of \(\omega_{\rm ge}\) and remain unchanged.
Figure 5: Schematic of the experimental setup. |
2310.04759 | Shadow, quasinormal modes, greybody bounds, and Hawking sparsity of Loop
Quantum Gravity motivated non-rotating black hole | We consider Loop Quantum Gravity(LQG) motivated $4D$ polymerized black hole
and study shadow, quasinormal modes, and Hawking radiation. We obtain
analytical expressions of photonsphere radius and shadow radius and study their
qualitative and quantitative nature of variation with respect to the LQG
parameter $\alpha$. We also show shadows of the black hole for various values
of $\alpha$. Our study reveals that both radii increase with an increase in the
parameter value. We, then, study quasinormal modes for scalar and
electromagnetic perturbations using the $6th$ order WKB method. Our study
reveals that the LQG parameter impacts quasinormal modes. We observe that the
oscillation of gravitational wave(GW) and decay rate decrease as $\alpha$
increases. At the same time, the error associated with the $6th$ order WKB
method increases with an increase in $\alpha$. The ringdown waveform for
electromagnetic and scalar perturbations is shown. We also study greybody
bounds, power spectrum, and sparsity of Hawking radiation. Greybody bounds for
electromagnetic perturbations do not depend on $\alpha$. For scalar
perturbation, greybody bounds increase as the LQG parameter increases, but the
variation with $\alpha$ is very small. The peak of the power spectrum as well
as total power emitted decrease as we increase the value of $\alpha$. Also, the
sparsity of Hawking radiation gets significantly impacted by quantum
correction. Finally, we obtain the area spectrum of the black hole. It is found
to be significantly different than that for the Schwarzschild black hole. | Sohan Kumar Jha | 2023-10-07T09:42:28Z | http://arxiv.org/abs/2310.04759v1 | Shadow, quasinormal modes, greybody bounds, and Hawking sparsity of Loop Quantum Gravity motivated non-rotating black hole
###### Abstract
We consider Loop Quantum Gravity(LQG) motivated \(4D\) polymerized black hole and study shadow, quasinormal modes, and Hawking radiation. We obtain analytical expressions of photonsphere radius and shadow radius and study their qualitative and quantitative nature of variation with respect to the LQG parameter \(\alpha\). We also show shadows of the black hole for various values of \(\alpha\). Our study reveals that both radii increase with an increase in the parameter value. We, then, study quasinormal modes for scalar and electromagnetic perturbations using the \(6th\) order WKB method. Our study reveals that the LQG parameter impacts quasinormal modes. We observe that the oscillation of gravitational wave(GW) and decay rate decrease as \(\alpha\) increases. At the same time, the error associated with the \(6th\) order WKB method increases with an increase in \(\alpha\). The ringdown waveform for electromagnetic and scalar perturbations is shown. We also study greybody bounds, power spectrum, and sparsity of Hawking radiation. Greybody bounds for electromagnetic perturbations do not depend on \(\alpha\). For scalar perturbation, greybody bounds increase as the LQG parameter increases, but the variation with \(\alpha\) is very small. The peak of the power spectrum as well as total power emitted decrease as we increase the value of \(\alpha\). Also, the sparsity of Hawking radiation gets significantly impacted by quantum correction. Finally, we obtain the area spectrum of the black hole. It is found to be significantly different than that for the Schwarzschild black hole.
**Keywords**: Quantum gravity, Shadow, Quasinormal modes, Ringdown waveform, Hawking radiation, Sparsity of radiation, Area spectrum.
## I Introduction
The observation of the shadows of the supermassive black holes(BHs) \(M87^{*}\) and \(SgrA^{*}\) by the Event Horizon Telescope(EHT) [1; 2] has validated the remarkable accuracy of General Relativity(GR) given by Einstein [3]. The existence of BHs was predicted by GR. It was shown by Hawking and Penrose that BHs formed by the gravitational collapse of massive objects would eventually have a spacetime singularity [4]. The existence of such a singularity leads to the breakdown of physical laws and the divergence of scalar invariants. As a result, geodesics are incomplete. It is generally believed that no such singularity exists in nature; such singularities are rather unavoidable features of classical GR. Wheeler suggested that quantum gravity can help us resolve the spacetime singularity [5]. The first regular black hole(RBH) without a singularity was proposed by Bardeen [6]. In Bardeen's black hole, we have a de Sitter-like region, resulting in a regular center. A significant number of studies have been devoted to studying various models of RBHs. Loop Quantum Gravity(LQG) is considered to be one of the viable models of the quantum theory of gravity [7; 8; 9]. LQG uses a non-perturbative technique and employs area and volume quantization to resolve the singularity [10; 11; 12]. Research has till now been confined to spherically symmetric BHs due to the complexity involved in solving the entire system [10; 11; 15]. Discreteness of spacetime, suggested by LQG, is preserved by an elegant technique called phase space quantization or polymerization [13; 14; 15]. Based on works that have been done in this field [10-18], Peltola and Kunstatter [19] reported a static, spherically symmetric, single-horizon, regular black hole. Unlike other RBHs, it has one horizon, whereby the problem of mass inflation at the inner horizon is removed. At the same time, the spacetime is globally hyperbolic and the geodesics are complete. Several studies have been made of non-rotating RBHs [20-22] and rotating RBHs [22-23]. The shadow of a black hole is a manifestation of its strong gravitational field. The black hole shadow has been a topic of intense research for quite some time now. The first image of the shadow of the supermassive black hole \(M87^{*}\) has only increased the interest of the research community in studying various aspects of the shadow. Bardeen, Press, and Teukolsky studied the shadow of a Kerr BH in [24]. The shadow of the Schwarzschild black hole was studied by Synge in
[25]. The shadow of a BH surrounded by a bright accretion disk was studied by Luminet [26]. Narayan, in his article [27], studied the shadow of a spherically accreting black hole. In article [28], the authors studied the shadow and photon rings of the Reissner-Nordstrom(RN) black hole. Several studies [29-38] have explored using the shadow for the detection of dark matter.
Quasinormal modes are one of the significant aspects of black hole physics. They are related to the emission of GWs from perturbed BHs, which eventually die down due to dissipation [39-41]. These modes are called quasinormal as they are transient. Quasinormal modes are complex numbers whose real part corresponds to the frequency of the GW and whose imaginary part gives the decay rate. There are three phases that BHs experience after perturbation: inspiral, merger, and ringdown. Quasinormal modes, for remnant BHs, are related to the ringdown phase. Quasinormal modes bear the signature of the underlying spacetime. Thus, it is important to study quasinormal modes to gauge the impact of quantum correction. Several articles have been devoted to studying the quasinormal modes of various BHs [42-73]. Another important phenomenon related to a BH is Hawking radiation. It was Hawking who showed that BHs emit radiation [74]; he took quantum effects into account to show this. When a pair production occurs close to the event horizon, one of the particles enters the BH and the second particle moves away from it. It is the second particle that forms the Hawking radiation [75-77]. A number of different methods can be employed to obtain the Hawking temperature [78-80]. The greybody factor gives the probability of Hawking radiation reaching an asymptotic observer; thus, it is an important quantity. The matching method [81-83] or the WKB method [84; 85] can be used to calculate greybody bounds. Visser [99] gave an elegant method to find greybody bounds. This method has been used in [88; 100].
This manuscript is organized as follows. In Sec. (II), we introduce the LQG motivated \(4D\) polymerized black hole metric and obtain analytical expressions of radii of photonsphere and shadow. We also study the qualitative and quantitative variation of those radii with respect to the LQG parameter \(\alpha\). In the next section, we study the quasinormal modes of the black hole for scalar and electromagnetic perturbations and probe the effect of the LQG parameter on quasinormal modes. In Sec. (IV), we obtain the analytical expressions of Hawking temperature and greybody bounds and investigate the effect of quantum correction on them. In Sec. (V), we study the power spectrum and sparsity of Hawking radiation. In Sec. (VI), we obtain the area spectrum of the LQG-motivated black hole. We conclude our article in Sec. (VII) with a brief discussion of our results. Throughout the paper, we use \(G=c=M=\hbar=1\).
## II LQG motivated non-rotating black hole and its shadow
Peltola and Kunstatter, with the help of effective field theory technique, derived the following LQG motivated \(4D\) polymerized static and spherically symmetric black hole metric [19]
\[ds^{2}=-\left(\sqrt{1-\frac{\alpha^{2}}{z^{2}}}-\frac{2M}{z}\right)dt^{2}+ \frac{\left(1-\frac{\alpha^{2}}{z^{2}}\right)^{-1}}{\left(\sqrt{1-\frac{ \alpha^{2}}{z^{2}}}-\frac{2M}{z}\right)}dz^{2}+z^{2}(d\theta^{2}+\sin^{2} \theta d\phi^{2}). \tag{1}\]
The above metric is singular at \(z=\alpha\). This singularity can be removed by using the transformation \(z=\sqrt{r^{2}+\alpha^{2}}\). With this transformation, the metric (1) becomes
\[ds^{2}=-\left(\frac{r-2M}{\sqrt{r^{2}+\alpha^{2}}}\right)dt^{2}+\frac{1}{ \left(\frac{r-2M}{\sqrt{r^{2}+\alpha^{2}}}\right)}dr^{2}+(r^{2}+\alpha^{2})( d\theta^{2}+\sin^{2}\theta d\phi^{2}). \tag{2}\]
The above metric reduces to the Schwarzschild metric when we put \(\alpha=0\). Here, the range of the radial coordinate \(r\) is \(0\leq r\leq\infty\). The event horizon of the black hole represented by the metric (2) is located at \(r=2M\), irrespective of the value of \(\alpha\). Ricci scalar R and Kretschmann scalar K for the metric (2) are given by
\[R = \frac{2\alpha^{2}\left(3M+\sqrt{\alpha^{2}+r^{2}}\right)-2r^{3}+2r^{2}\sqrt{\alpha^{2}+r^{2}}-5\alpha^{2}r}{\left(\alpha^{2}+r^{2}\right)^{5/2}},\]
\[K = \frac{4\alpha^{6}+12M^{2}\left(3\alpha^{4}+4r^{4}-4\alpha^{2}r^{2}\right)+4Mr\left(-15\alpha^{4}-4r^{4}+14\alpha^{2}r^{2}+4\alpha^{2}r\sqrt{\alpha^{2}+r^{2}}+4r^{3}\sqrt{\alpha^{2}+r^{2}}\right)+8r^{6}+12\alpha^{2}r^{4}+41\alpha^{4}r^{2}-8r^{5}\sqrt{\alpha^{2}+r^{2}}-8\alpha^{2}r^{3}\sqrt{\alpha^{2}+r^{2}}}{\left(\alpha^{2}+r^{2}\right)^{5}}.\]
These expressions reveal that the scalar invariants are finite everywhere. This implies that the metric (2) represents a globally regular spacetime and that geodesics are complete.
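This can be verified symbolically; the short SymPy sketch below (ours) checks the Ricci scalar at the would-be singular point \(r=0\) and its asymptotic falloff, and the same check applies verbatim to \(K\).

```python
import sympy as sp

r, alpha, M = sp.symbols('r alpha M', positive=True)
s = sp.sqrt(alpha**2 + r**2)

# Ricci scalar of the metric (2), as quoted above
R = (2*alpha**2*(3*M + s) - 2*r**3 + 2*r**2*s - 5*alpha**2*r) / s**5

print(sp.simplify(R.subs(r, 0)))                     # 2*(3M + alpha)/alpha**3: finite
print(sp.limit(R.subs({alpha: 1, M: 1}), r, sp.oo))  # 0: asymptotically flat
```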
We next study null geodesics in the background of LQG motivated black hole given by the metric (2). As the black hole we are considering is spherically symmetric, we, without loss of generality, can consider the equatorial plane with \(\theta=\frac{\pi}{2}\). For equatorial plane, the ansatz (2) reduces to
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+h(r)d\phi^{2}, \tag{3}\]
where \(f(r)=\frac{r-2M}{\sqrt{r^{2}+\alpha^{2}}}\) is the lapse function and \(h(r)=r^{2}+\alpha^{2}\). As the polymerized black hole is static and spherically symmetric, the energy \(\mathcal{E}=-p_{\mu}\xi^{\mu}_{(t)}\) and the angular momentum \(\mathcal{L}=p_{\mu}\xi^{\mu}_{(\phi)}\) along the geodesics are conserved. Here, \(\xi^{\mu}_{(t)}\) and \(\xi^{\mu}_{(\phi)}\) are the Killing vectors due to time-translational and rotational invariance respectively [90]. Thus, \(\mathcal{E}=-p_{t}\) is the energy of a photon and \(\mathcal{L}=p_{\phi}\) is the angular momentum. The expressions of \(p_{t}\) and \(p_{\phi}\) are obtained from the Lagrangian corresponding to the metric (3). The Lagrangian is given by
\[\mathscr{L}=\frac{1}{2}\left[-f(r)\dot{t}^{2}+\frac{\dot{r}^{2}}{f(r)}+h(r)\dot{\phi}^{2}\right]. \tag{4}\]
Now, we have \(p_{q}=\frac{\partial\mathscr{L}}{\partial\dot{q}}\), where \(p_{q}\) is the momentum conjugate to the coordinate \(q\). With the help of this definition, we obtain
\[p_{t} = \frac{\partial\mathscr{L}}{\partial\dot{t}}=-f(r)\dot{t},\] \[p_{r} = \frac{\partial\mathscr{L}}{\partial\dot{r}}=\frac{\dot{r}}{f(r)},\] \[p_{\phi} = \frac{\partial\mathscr{L}}{\partial\dot{\phi}}=h(r)\dot{\phi}. \tag{5}\]
Here, the dot denotes differentiation with respect to an affine parameter \(\tau\). The equations of motion are
\[\frac{dt}{d\tau}=\frac{\mathcal{E}}{f(r)}\quad\text{and}\quad\frac{d\phi}{d \tau}=\frac{\mathcal{L}}{h(r)}. \tag{6}\]
Using Eq.(6) and Eq.(3), we obtain the differential equation for the null geodesics as
\[\left(\frac{dr}{d\tau}\right)^{2}\equiv\dot{r}^{2}=\mathcal{E}^{2}-V(r), \tag{7}\]
where \(V(r)\) is the potential given by
\[V(r)=\frac{\mathcal{L}^{2}f(r)}{h(r)}. \tag{8}\]
The unstable circular photon orbits are located at the peak of the above potential. In Fig. (1), we plot the potential with respect to \(r\) for various values of \(\alpha\). We observe that the peak of the potential shifts towards the right as we increase the value of the parameter \(\alpha\), implying that the photon radius increases with \(\alpha\). This finding is confirmed by the results below.
Figure 1: Potential for various values of \(\alpha\). Here, we have taken \(\mathcal{L}\)=1.
For circular photon orbits of radius \(r_{p}\), we must have
\[V(r_{p})=\mathcal{E}^{2},\quad\left.\frac{dV}{dr}\right|_{r=r_{p}}=0,\quad\text{and}\quad\left.\frac{\partial^{2}V}{\partial r^{2}}\right|_{r=r_{p}}<0. \tag{9}\]
The middle equation yields
\[\frac{f^{\prime}(r_{p})}{f(r_{p})}=\frac{h^{\prime}(r_{p})}{h(r_{p})}. \tag{10}\]
The above equation, on simplification, produces
\[2r^{2}-6Mr-\alpha^{2}=0. \tag{11}\]
We obtain two roots from the above equation; since their product is negative, one root is negative and is discarded, while the remaining conditions in Eq. (9) ensure that the retained orbit is the unstable one. The radius of the photonsphere is thus \(r_{p}=\frac{1}{2}\left(\sqrt{2\alpha^{2}+9M^{2}}+3M\right)\). This analytical expression shows that the LQG parameter \(\alpha\) has a significant impact on the radius of the photonsphere. For distant observers, the shadow radius is equal to the critical impact parameter. Thus, the shadow radius is given by
\[b_{p}=R_{s}=\frac{\mathcal{L}}{\mathcal{E}}=\sqrt{\frac{h(r_{p})}{f(r_{p})}}=\frac{3^{3/4}}{\sqrt[4]{2}}\sqrt{\frac{\left(\alpha^{2}+M\left(\sqrt{2\alpha^{2}+9M^{2}}+3M\right)\right)^{3/2}}{\sqrt{2\alpha^{2}+9M^{2}}-M}}. \tag{12}\]
In the limit \(\alpha\to 0\), we recover the Schwarzschild values \(r_{p}=3M\) and \(R_{s}=3\sqrt{3}M\). To illustrate how the photon and shadow radii vary with the LQG parameter, we plot \(r_{p}\) and \(R_{s}\) against \(\alpha\) in Fig. (2). Quantitative values of the photon radius \(r_{p}\) and shadow radius \(R_{s}\) are given in Table (1) for different values of the black hole parameter.
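The closed-form photon-sphere radius can be cross-checked against a direct numerical maximization of the potential (8). A minimal sketch (Python with numpy/scipy assumed, \(M=1\)); the printed values reproduce the corresponding rows of Table (1):

```python
import numpy as np
from scipy.optimize import minimize_scalar

M = 1.0
for alpha in (0.0, 1.0, 2.0):
    f = lambda r: (r - 2*M)/np.sqrt(r**2 + alpha**2)
    h = lambda r: r**2 + alpha**2
    V = lambda r: f(r)/h(r)                             # Eq. (8) with L = 1
    rp_num = minimize_scalar(lambda r: -V(r), bounds=(2*M + 1e-9, 10*M),
                             method="bounded").x        # numerical peak of V
    rp = 0.5*(np.sqrt(2*alpha**2 + 9*M**2) + 3*M)       # closed form above
    Rs = np.sqrt(h(rp)/f(rp))                           # shadow radius, Eq. (12)
    print(f"alpha={alpha}: r_p(numeric)={rp_num:.5f}  r_p={rp:.5f}  R_s={Rs:.5f}")
```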
One observation we can make from Fig. (2) and Table (1) is that both the radii increase as we increase the value of the parameter \(\alpha\). The variation of photon radius and shadow radius is significant with respect to the parameter \(\alpha\). To study the shadow of the black hole given by (2), we use two celestial coordinates:
\[x=\lim_{r_{o}\rightarrow\infty}\left[-r_{o}^{2}\sin\theta_{o}\left.\frac{d\phi}{dr}\right|_{\theta=\theta_{o}}\right],\qquad y=\lim_{r_{o}\rightarrow\infty}\left[r_{o}^{2}\left.\frac{d\theta}{dr}\right|_{\theta=\theta_{o}}\right],\]
| \(\alpha\) | \(r_{p}\) | \(R_{s}\) |
| --- | --- | --- |
| 0 | 3.0 | 5.19615 |
| 0.2 | 3.00665 | 5.21343 |
| 0.4 | 3.02643 | 5.26468 |
| 0.6 | 3.05885 | 5.34832 |
| 0.8 | 3.10312 | 5.46193 |
| 1.0 | 3.15831 | 5.60259 |
| 1.2 | 3.22337 | 5.76717 |
| 1.4 | 3.29722 | 5.95258 |
| 1.6 | 3.37883 | 6.15592 |
| 1.8 | 3.46723 | 6.3746 |
| 2.0 | 3.56155 | 6.60632 |

Table 1: Various values of photon radius and shadow radius for different values of \(\alpha\).
where \((r_{o},\theta_{o})\) is the observer's position at infinity. For an observer in the equatorial plane, i.e., \(\theta_{o}=\pi/2\), we have
\[R_{s}\equiv\sqrt{x^{2}+y^{2}}.\]
Shadows of the black hole are shown in Fig. (3) for various values of \(\alpha\). For \(\alpha=0\), we obtain the shadow of the Schwarzschild black hole, while for non-zero values of the parameter we obtain the shadow of the quantum-corrected black hole. Fig. (3) shows that the quantum correction has an observable impact on the shadow; we also observe that the shadow radius increases with the parameter \(\alpha\).
This concludes our discussion of photonsphere and shadow for the LQG-motivated \(4D\) polymerized black hole.
## III Quasinormal modes of the LQG-motivated \(4D\) polymerized black hole
In this section, we investigate quasinormal modes of the LQG-motivated \(4D\) polymerized black hole under scalar and electromagnetic perturbations. The back-reaction of the scalar or electromagnetic field on the background spacetime is considered negligible. To study quasinormal modes, we first consider the equation for the relevant field and then reduce it to a Schrödinger-like equation. For the scalar field we use the Klein-Gordon equation, and for the electromagnetic field we use the Maxwell equations. For the massless scalar field, we have
\[\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\psi)=0, \tag{13}\]
and for the electromagnetic field, we have
\[\frac{1}{\sqrt{-g}}\partial_{\nu}(F_{\rho\sigma}g^{\rho\mu}g^{\sigma\nu}\sqrt{-g})=0, \tag{14}\]
where \(F_{\rho\sigma}=\partial_{\rho}A_{\sigma}-\partial_{\sigma}A_{\rho}\), with \(A_{\nu}\) the electromagnetic four-potential. We now introduce the tortoise coordinate:
\[\mathrm{d}r_{*}=\frac{\mathrm{d}r}{f(r)}. \tag{15}\]
With the help of the tortoise coordinate, Eqs. (13) and (14) reduce to the following Schrödinger-like form
\[-\frac{\mathrm{d}^{2}\phi}{\mathrm{d}r_{*}^{2}}+V_{\mathrm{eff}}(r)\phi=\omega ^{2}\phi, \tag{16}\]
Figure 3: Shadow for various values of \(\alpha\).
where the effective potential is given by
\[V_{\rm eff}(r) = \frac{(1-s^{2})f(r)}{r}\frac{{\rm d}f(r)}{{\rm d}r}+\frac{f(r)\ell(\ell+1)}{r^{2}} = \frac{(r-2M)\left(r^{2}\left(\ell(\ell+1)\sqrt{\alpha^{2}+r^{2}}-2M\left(s^{2}-1\right)\right)+\alpha^{2}\left(\ell(\ell+1)\sqrt{\alpha^{2}+r^{2}}-rs^{2}+r\right)\right)}{r^{2}\left(\alpha^{2}+r^{2}\right)^{2}}. \tag{17}\]
Here, \(\ell\) is the multipole (angular momentum) number and \(s\) is the spin of the perturbing field. For \(s=0\), we obtain the effective potential for scalar perturbations, and for \(s=1\), we obtain the effective potential for electromagnetic perturbations. Since the effective potential governs the quasinormal modes, we briefly study its variation in various scenarios.
From Fig. (4) we observe that the peak of the potential increases as we increase the angular momentum but decreases with the increase in the parameter \(\alpha\). We also see that the position of the peak shifts towards the right as we increase the angular momentum \(\ell\) or decrease the parameter \(\alpha\).
Next, with the help of the \(6th\) order WKB method, we obtain the quasinormal modes. The WKB method for calculating quasinormal modes was first developed by Schutz and Will [91] and was later extended to higher orders [92; 93; 94]. The \(6th\) order WKB method yields the following expression for the quasinormal frequencies:
\[\frac{{\rm i}(\omega^{2}-V_{0})}{\sqrt{-2V_{0}^{\prime\prime}}}-\sum_{{\rm i} =2}^{6}\Omega_{{\rm i}}=n+\frac{1}{2}, \tag{18}\]
where \(V_{0}\) and \(V_{0}^{\prime\prime}\) represent the height of the effective potential and its second derivative with respect to the tortoise coordinate at the maximum, respectively, and the \(\Omega_{\rm i}\) are the correction terms given in [91; 92; 93; 94]. With the help of Eq. (18), we calculate quasinormal frequencies of scalar and electromagnetic perturbations for various values of the angular momentum \(\ell\) and the parameter \(\alpha\). In Table 2, we show numerical values of the quasinormal modes of scalar perturbation for different values of \(\ell\) and \(\alpha\), keeping the overtone number \(n=0\).
Figure 4: Variation of effective potential with respect to tortoise coordinate \(r_{*}\). The upper ones are for various values of \(\alpha\) with \(\ell=1\) and the lower ones are for various values of angular momentum with \(\alpha=0.8M\). The left ones are for scalar perturbations and the right ones are for electromagnetic perturbations.
In Table 3, we show quasinormal modes of electromagnetic perturbation for different values of angular momentum and LQG parameter keeping overtone number \(n=0\). We also calculate the error associated with the \(6th\) order WKB method defined by
\[\Delta_{6}=\frac{|\omega_{7}-\omega_{5}|}{2}, \tag{19}\]
where \(\omega_{5}\) and \(\omega_{7}\) are quasinormal frequencies obtained using \(5th\) order and \(7th\) order WKB method respectively.
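For orientation, the lowest-order WKB approximation of Schutz and Will [91], \(\omega^{2}\approx V_{0}-{\rm i}(n+\frac{1}{2})\sqrt{-2V_{0}^{\prime\prime}}\), can be coded in a few lines. It is far less accurate than the \(6th\) order values reported below, but it reproduces the qualitative trends. A sketch with numerical derivatives (\(M=1\); the search window for the potential peak is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import minimize_scalar

M, alpha, ell, s, n = 1.0, 0.5, 2, 0, 0      # scalar field, fundamental mode

f = lambda r: (r - 2*M)/np.sqrt(r**2 + alpha**2)

def V(r, dr=1e-6):
    # effective potential of Eq. (17), via a numerical derivative of f
    fp = (f(r + dr) - f(r - dr))/(2*dr)
    return (1 - s**2)*f(r)*fp/r + f(r)*ell*(ell + 1)/r**2

r0 = minimize_scalar(lambda r: -V(r), bounds=(2*M + 1e-3, 12*M),
                     method="bounded").x
dr = 1e-4
Vpp_r = (V(r0 + dr) - 2*V(r0) + V(r0 - dr))/dr**2
# at the peak dV/dr = 0, so d^2V/dr_*^2 = f^2 d^2V/dr^2 by the chain rule
Vpp_star = f(r0)**2*Vpp_r
omega = np.sqrt(V(r0) - 1j*(n + 0.5)*np.sqrt(-2*Vpp_star + 0j))
print(omega)   # rough first-order estimate of the n = 0 quasinormal frequency
```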
From Tables (2, 3), we can infer that the real part of the quasinormal frequencies decreases as the parameter \(\alpha\) increases for a fixed value of \(\ell\). For both perturbations, the real part increases with the angular momentum \(\ell\). We can also observe from Tables (2, 3) that the decay (damping) rate increases as we decrease \(\alpha\) or increase the angular momentum. Moreover, the error associated with the \(6th\) order WKB method increases with the LQG parameter \(\alpha\). Next, we investigate the qualitative behavior of the quasinormal frequencies in various respects.
| \(\alpha\) | \(\ell=1\) | \(\Delta_{6}\) | \(\ell=2\) | \(\Delta_{6}\) | \(\ell=3\) | \(\Delta_{6}\) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.367984 - 0.0944086 i | 0.0000230659 | 0.532369 - 0.0953138 i | \(3.28709\times 10^{-6}\) | 0.711038 - 0.0957039 i | \(7.63696\times 10^{-7}\) |
| 0.25 | 0.367427 - 0.0940281 i | 0.0000260222 | 0.531502 - 0.0949261 i | \(4.15212\times 10^{-6}\) | 0.709848 - 0.0953155 i | \(9.94250\times 10^{-7}\) |
| 0.5 | 0.365787 - 0.0929174 i | 0.0000427638 | 0.528953 - 0.0937928 i | \(6.94819\times 10^{-6}\) | 0.706351 - 0.0941796 i | \(1.64054\times 10^{-6}\) |
| 0.75 | 0.363152 - 0.0911605 i | 0.0000688479 | 0.524868 - 0.0919963 i | 0.0000112364 | 0.700755 - 0.0923778 i | \(2.63228\times 10^{-6}\) |
| 1.0 | 0.359653 - 0.0888735 i | 0.0000987704 | 0.519462 - 0.0896536 i | 0.0000155716 | 0.693357 - 0.0900269 i | \(3.58075\times 10^{-6}\) |
| 1.25 | 0.355447 - 0.0861843 i | 0.000124503 | 0.512981 - 0.0868957 i | 0.0000178107 | 0.684502 - 0.087259 i | \(3.94790\times 10^{-6}\) |

Table 2: Quasinormal frequencies for scalar field with \(n=0\).
Figure 5: It gives the variation of the imaginary part of the quasinormal frequency with respect to \(\alpha\) for various values of \(\ell\). The left one is for the scalar field and the right one is for the electromagnetic field.
| \(\alpha\) | \(\ell=1\) | \(\Delta_{6}\) | \(\ell=2\) | \(\Delta_{6}\) | \(\ell=3\) | \(\Delta_{6}\) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.248191 - 0.092637 i | 0.000142597 | 0.457593 - 0.0950112 i | \(7.04358\times 10^{-6}\) | 0.656898 - 0.0956171 i | \(1.13864\times 10^{-6}\) |
| 0.25 | 0.247877 - 0.0922998 i | 0.000170868 | 0.456865 - 0.094625 i | \(8.85698\times 10^{-6}\) | 0.655805 - 0.0952289 i | \(1.45442\times 10^{-6}\) |
| 0.5 | 0.246938 - 0.0913156 i | 0.000284307 | 0.454725 - 0.0934984 i | 0.0000147938 | 0.652596 - 0.0940947 i | \(2.4338\times 10^{-6}\) |
| 0.75 | 0.245407 - 0.0897485 i | 0.000462204 | 0.4513 - 0.0917176 i | 0.0000236744 | 0.647466 - 0.0922994 i | \(3.88534\times 10^{-6}\) |
| 1.0 | 0.24337 - 0.087629 i | 0.00064242 | 0.446776 - 0.0894034 i | 0.0000323796 | 0.640697 - 0.0899635 i | \(5.2812\times 10^{-6}\) |
| 1.25 | 0.240962 - 0.0851176 i | 0.000710422 | 0.441366 - 0.0866884 i | 0.0000357733 | 0.632612 - 0.087222 i | \(5.83228\times 10^{-6}\) |

Table 3: Quasinormal frequencies for electromagnetic field with \(n=0\).
Fig. (5) and Fig. (6) reinforce the findings drawn from Tables (2, 3). We can also observe from Fig. (7) that the real part of the quasinormal modes is larger for scalar perturbation, whereas the imaginary part is larger (i.e., smaller in magnitude) for electromagnetic perturbation. This implies that the damping or decay rate is larger for scalar perturbation. We next study the convergence of the WKB method for various values of the \((n,\ell)\) pair.
From Fig. (8) we observe that the quasinormal frequencies fluctuate even at higher WKB orders for the pair \((n,\ell)=(3,0)\). This confirms the finding of [96], where it is observed that the WKB approximation is reliable when the angular momentum is high and the overtone number is low.
Figure 8: Variation of the real and imaginary part of quasinormal frequencies with respect to WKB order for various values of \((n,\ell)\) pair. The left one is for \((3,0)\) pair and the right one is for \((2,4)\) pair. In each plot, the blue line is for the real part and the orange line is for the imaginary part of the quasinormal mode. Here, we have taken \(\alpha=0.4M\).
Figure 6: It gives the variation of the real part of the quasinormal frequency with respect to \(\alpha\) for various values of \(\ell\). The left one is for the scalar field and the right one is for the electromagnetic field.
Figure 7: Left one gives the variation of the imaginary part of the quasinormal frequency with respect to \(\alpha\) for scalar and electromagnetic fields and the right one gives that for the real part. Here, we have taken \(\ell=1\).
## IV Ringdown waveform
In this section, we study the time evolution of the scalar and electromagnetic perturbations. For this purpose, we numerically solve the time-dependent wave equation using the time domain integration method given by Gundlach et al. in their article [95], using the initial conditions \(\psi(r_{*},t)=\exp\left[-\dfrac{(r_{*}-\hat{r}_{*})^{2}}{2\sigma^{2}}\right]\) and \(\psi(r_{*},t)|_{t<0}=0\), where we have taken \(r_{*}=5\) and \(\hat{r}_{*}=0.4\). The values of \(\Delta t\) and \(\Delta r_{*}\) are chosen such that the von Neumann stability condition, \(\frac{\Delta t}{\Delta r_{*}}<1\), is satisfied.
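The characteristic integration scheme of [95] can be emulated with an even simpler explicit leapfrog discretization of Eq. (16). The following is a minimal sketch, not the authors' exact scheme: the Gaussian width \(\sigma=0.5\) and the placement of the \(r_{*}\) origin at \(r=3M\) are assumptions (the text does not fix them), and the pulse is given zero initial velocity.

```python
import numpy as np

M, alpha, ell, s = 1.0, 0.8, 2, 0

f = lambda r: (r - 2*M)/np.sqrt(r**2 + alpha**2)

def V_of_r(r, dr=1e-6):
    fp = (f(r + dr) - f(r - dr))/(2*dr)
    return (1 - s**2)*f(r)*fp/r + f(r)*ell*(ell + 1)/r**2   # Eq. (17)

# map an r-grid to the tortoise coordinate r_* (Eq. (15)); the additive
# constant is fixed so that r_* = 0 at r = 3M (a pure convention)
r = np.linspace(2*M + 0.01, 150.0, 60000)
rstar = np.concatenate(([0.0],
        np.cumsum(0.5*(1/f(r[1:]) + 1/f(r[:-1]))*np.diff(r))))
rstar -= np.interp(3*M, r, rstar)

rs = np.linspace(rstar[0], rstar[-1], 4000)     # uniform r_* grid
V = np.interp(rs, rstar, V_of_r(r))
drs = rs[1] - rs[0]
dt = 0.5*drs                                    # dt/dr_* < 1 (von Neumann)

sigma, rhat = 0.5, 0.4                          # pulse width assumed
psi_old = np.exp(-(rs - rhat)**2/(2*sigma**2))
psi = psi_old.copy()                            # zero initial velocity
obs = np.argmin(np.abs(rs - 5.0))               # observer at r_* = 5
signal = []
for _ in range(int(150/dt)):
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2*psi[1:-1] + psi[:-2])/drs**2
    psi_new = 2*psi - psi_old + dt**2*(lap - V*psi)
    psi_new[0] = psi_new[-1] = 0.0              # crude absorbing boundaries
    psi_old, psi = psi, psi_new
    signal.append(abs(psi[obs]))
# plotting log(signal) against time reproduces the qualitative ringdown
```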
In Fig. (9) we provide the ringdown waveform for various values of the parameter \(\alpha\) keeping \(\ell=2\) and in Fig. (10), we provide the waveform for various values of \(\ell\) keeping \(\alpha=0.8M\). From Fig. (9) we can clearly conclude that the frequency decreases as we increase the parameter \(\alpha\). It can also be inferred from the figure that the decay rate, given by the magnitude of the slope of the maxima in the log graph, decreases as we increase \(\alpha\). From Fig. (10) we can conclude that the frequency as well as the decay rate increases as we increase \(\ell\). These are consistent with the conclusions drawn from Tables (2, 3).
Figure 10: Time domain profile for various values of \(\ell\). Left one is for scalar perturbation and the right one is for electromagnetic perturbation. Here, we have taken \(\alpha=0.8M\).
Figure 9: Time domain profile for various values of \(\alpha\). Left one is for scalar perturbation and the right one is for electromagnetic perturbation. Here, we have taken \(\ell=2\).
## V Hawking temperature and bounds of the greybody factor
In this section, we intend to calculate the Hawking temperature and greybody bounds for the black hole under consideration. Hawking showed in [74] that black holes emit radiation, now known as Hawking radiation. Bekenstein [97] and Kiefer [98] showed that it is necessary to associate a temperature with the horizon for consistency with thermodynamics. The Hawking temperature is given by
\[T_{H}=\frac{1}{4\pi\sqrt{-g_{tt}g_{rr}}}\frac{dg_{tt}}{dr}|_{r=r_{h}}. \tag{20}\]
For the metric in consideration, we have \(g_{tt}=-f(r)\) and \(g_{rr}=\frac{1}{f(r)}\). Putting these values in the above equation, we get
\[T_{H}=\frac{1}{4\pi\sqrt{\alpha^{2}+4M^{2}}}. \tag{21}\]
The dependence of the Hawking temperature on the parameter \(\alpha\) is evident from the above equation. We recover the value of the Hawking temperature for the Schwarzschild black hole if we put \(\alpha=0\) in the above equation. To show the dependence graphically, we plot the Hawking temperature against \(\alpha\).
We can observe that the Hawking temperature decreases as we increase the value of the parameter \(\alpha\). The Hawking radiation observed by an asymptotic observer differs from the original radiation near the horizon of the black hole due to the redshift factor; the greybody distribution describes the Hawking radiation that is observed by an asymptotic observer. Here, we obtain the lower bound of the greybody factor for the LQG-motivated \(4D\) polymerized black hole. Much research has been dedicated to bounding the greybody factor; Visser and Boonserm gave an elegant way to bound it from below in [99; 100; 101]. A rigorous bound on the transmission probability, which is the same as the greybody factor, is given by
\[T\geq sech^{2}(\frac{1}{2\omega}\int_{-\infty}^{\infty}|V_{\text{eff}}(r_{*})| dr_{*}), \tag{22}\]
where \(r_{*}\) is the tortoise coordinate defined in Eq. (15) and \(V_{\text{eff}}(r_{*})\) is the potential given in Eq. (17). In terms of the ordinary coordinate \(r\), the above equation becomes
\[T\geq sech^{2}(\frac{1}{2\omega}\int_{r_{h}}^{\infty}|V_{\text{eff}}(r)|\frac{ dr}{f(r)}). \tag{23}\]
If we use Eq. (17), then the above equation reduces to
\[T\geq sech^{2}\left(\frac{-\frac{4M\left(s^{2}-1\right)\left(1-\frac{2M}{ \sqrt{\alpha^{2}+4M^{2}}}\right)}{\alpha^{2}}+\frac{2\left(s^{2}-1\right)\left( \alpha\sqrt{\alpha^{2}+4M^{2}}-\left(\alpha^{2}+4M^{2}\right)\sinh^{-1}\left( \frac{\alpha}{2M}\right)\right)}{\alpha(\alpha^{2}+4M^{2})}+\frac{\ell^{2}}{M}+ \frac{\ell}{M}}{4\omega}\right). \tag{24}\]
Figure 11: Variation of Hawking temperature with respect to \(\alpha\).
The greybody bound for the scalar perturbation, \(T_{s}\), is obtained when we put \(s=0\) and the bound for the electromagnetic perturbation, \(T_{em}\), is obtained by taking \(s=1\). From Eq. (24) we see that \(T_{s}\) depends on the LQG parameter \(\alpha\) but \(T_{em}\) is independent of it. The qualitative nature of variation of \(T_{s}\) and \(T_{em}\) are shown in Fig. (12) and Fig. (13). In Fig. (12), we observe that the greybody bound decreases as we increase the angular momentum. It signifies that the probability of detecting Hawking radiation by an asymptotic observer decreases with \(\ell\). Fig. (13) shows that the greybody bound for scalar perturbation increases with the LQG parameter, but the amount of variation is small. It is also observed that the greybody bound approaches its maximum value of 1 faster for smaller values of angular momentum.
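Since Eqs. (21) and (24) are in closed form, they can be evaluated directly. A small numerical sketch (\(M=1\); note that numpy's `arcsinh` implements \(\sinh^{-1}\)):

```python
import numpy as np

M, alpha = 1.0, 0.5
T_H = 1/(4*np.pi*np.sqrt(alpha**2 + 4*M**2))        # Eq. (21)
print("T_H =", T_H)

def bound(omega, ell, s):
    # right-hand side of Eq. (24); the (s^2 - 1) pieces vanish for s = 1
    t1 = -4*M*(s**2 - 1)*(1 - 2*M/np.sqrt(alpha**2 + 4*M**2))/alpha**2
    t2 = 2*(s**2 - 1)*(alpha*np.sqrt(alpha**2 + 4*M**2)
         - (alpha**2 + 4*M**2)*np.arcsinh(alpha/(2*M)))/(alpha*(alpha**2 + 4*M**2))
    t3 = (ell**2 + ell)/M
    return 1/np.cosh((t1 + t2 + t3)/(4*omega))**2

for omega in (0.2, 0.5, 1.0):
    print(omega, bound(omega, ell=1, s=0), bound(omega, ell=1, s=1))
```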
## VI Power spectrum and sparsity of Hawking radiation

In this section, we study the power spectrum and the sparsity of Hawking radiation. The total emitted power is obtained by integrating the flux over the surface element \(dA\), with \(\hat{n}\) the unit normal to \(dA\) and \(T\) the greybody factor given by Eq. (24). Since for massless particles we have \(\left|k\right|=\omega\), the total power for massless particles becomes
\[P_{tot}=\sum_{\ell}\int_{0}^{\infty}P_{\ell}\left(\omega\right)d\omega. \tag{26}\]
Here, \(P_{\ell}\) is power spectrum in the \(\ell th\) mode given by
\[P_{\ell}\left(\omega\right)=\frac{A}{8\pi^{2}}T(\omega)\frac{\omega^{3}}{e^{ \omega/T_{H}}-1}. \tag{27}\]
Although \(A\) is a multiple of the horizon area, here we take \(A\) to be the horizon area, as this does not affect the qualitative result [102]. The power spectrum \(P_{\ell}(\omega)\) is important for studying the sparsity of Hawking radiation. In Fig. (14), we study the qualitative variation of the power spectrum \(P_{\ell}\) for different parameter values of the black hole, plotting \(P_{\ell}\) with respect to \(\omega\) for various values of the LQG parameter \(\alpha\).
We observe from Fig. (14) that, for both perturbations, the maximum value of the power spectrum diminishes as we increase the value of \(\alpha\), and the frequency \(\omega_{max}\) at which \(P_{\ell}\) attains its maximum also decreases with \(\alpha\).
To have a better understanding of Hawking radiation emitted by black holes, we introduce a dimensionless parameter, \(\eta\), that defines the sparsity of Hawking radiation as [102; 103; 104; 105; 106]
\[\eta=\frac{\tau_{gap}}{\tau_{emission}}. \tag{28}\]
Here, \(\tau_{gap}\) is the average time gap between two successive radiation quanta. It is defined by
\[\tau_{gap}=\frac{\omega_{max}}{P_{tot}}. \tag{29}\]
The time that is taken by a radiation quantum for emission, \(\tau_{emission}\), is defined by
\[\tau_{emission}\geq\tau_{localisation}=\frac{2\pi}{\omega_{max}}, \tag{30}\]
where \(\tau_{localisation}\) is the time period of the emitted wave of frequency \(\omega_{max}\). Thus, we have a continuous flow of Hawking radiation when \(\eta\ll 1\). A large value of \(\eta\) signifies that the emission of radiation quanta is not continuous and that the time gap between two successive quanta is larger than the time required for the emission of a quantum. The quantitative values of \(\omega_{max}\), \(P_{max}\), \(P_{tot}\), and \(\eta\) are given in Table (4) for scalar perturbations and in Table (5) for electromagnetic perturbations. From these tables, we observe that the peak of the power spectrum and the total power emitted decrease as we increase the value of the LQG parameter for both types of perturbations. On the other hand, the sparsity increases as we increase the value of \(\alpha\), meaning that the time interval between two radiation quanta grows with the LQG parameter. Comparing Table (4) and Table (5), we infer that the sparsity of the black hole is larger for electromagnetic perturbation. Since the variation of the sparsity is significant for both perturbations, Hawking radiation may be used in the future to test LQG.
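A rough numerical sketch of the sparsity pipeline, with two loudly flagged assumptions: the lower bound (24) is used as a stand-in for the exact greybody factor, and the horizon area is taken as \(A=4\pi(4M^{2}+\alpha^{2})\), the area of the surface \(r=2M\) in the metric (2). The resulting numbers therefore differ from those in Tables (4, 5), but the trends in \(\omega_{max}\), \(P_{tot}\), and \(\eta\) can be explored this way:

```python
import numpy as np
from scipy.integrate import quad

M, alpha, ell, s = 1.0, 0.5, 1, 1                  # electromagnetic, l = 1 mode
T_H = 1/(4*np.pi*np.sqrt(alpha**2 + 4*M**2))       # Eq. (21)
A = 4*np.pi*(4*M**2 + alpha**2)                    # assumed horizon area of (2)

def bound(omega):
    # lower bound (24); not the exact greybody factor
    t1 = -4*M*(s**2 - 1)*(1 - 2*M/np.sqrt(alpha**2 + 4*M**2))/alpha**2
    t2 = 2*(s**2 - 1)*(alpha*np.sqrt(alpha**2 + 4*M**2)
         - (alpha**2 + 4*M**2)*np.arcsinh(alpha/(2*M)))/(alpha*(alpha**2 + 4*M**2))
    t3 = (ell**2 + ell)/M
    return 1/np.cosh((t1 + t2 + t3)/(4*omega))**2

P_ell = lambda w: A/(8*np.pi**2)*bound(w)*w**3/np.expm1(w/T_H)   # Eq. (27)

P_tot, _ = quad(P_ell, 1e-3, 40*T_H, limit=400)    # Eq. (26), single l mode
grid = np.linspace(1e-3, 40*T_H, 8000)
w_max = grid[np.argmax([P_ell(w) for w in grid])]
# eta = tau_gap/tau_emission = (w_max/P_tot)/(2*pi/w_max), Eqs. (28)-(30)
eta = w_max**2/(2*np.pi*P_tot)
print(w_max, P_tot, eta)
```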
Figure 14: Power spectrum of the black hole for various values of \(\alpha\). Left one is for scalar perturbation and the right one is for electromagnetic perturbation. Here, we have taken \(\ell=2\).
## VII Area spectrum for the LQG motivated black hole from adiabatic invariance
In this section, we obtain the area spectrum of the LQG-motivated \(4D\) polymerized black hole from adiabatic invariance, following [107; 108]. First, we Euclideanize the metric (2) using the transformation \(t\rightarrow-i\tau\), which produces
\[ds^{2}=\left(\frac{r-2M}{\sqrt{r^{2}+\alpha^{2}}}\right)d\tau^{2}+\frac{1}{ \left(\frac{r-2M}{\sqrt{r^{2}+\alpha^{2}}}\right)}dr^{2}+(r^{2}+\alpha^{2})(d \theta^{2}+\sin^{2}\theta d\phi^{2}). \tag{31}\]
The radial null geodesics for the above metric are given by
\[\dot{r}=\pm i\frac{r-2M}{\sqrt{r^{2}+\alpha^{2}}}. \tag{32}\]
Using the adiabatic invariant quantity given in [107] and following the same procedure thereof, we have
\[I=\int p_{j}dq_{j}=\int_{0}^{M}\frac{dM}{T_{H}}. \tag{33}\]
Here, \(p_{j}\) is the momentum conjugate to the coordinate \(q_{j}\), where \(j\) takes the two values \(0\) and \(1\), with \(q_{0}=\tau\) and \(q_{1}=r\), and \(T_{H}\) is the Hawking temperature given by Eq. (21). Using the above equation together with Eq. (21), we obtain
\[I = 4\pi\left(\frac{1}{2}M\sqrt{\alpha^{2}+4M^{2}}+\frac{1}{4} \alpha^{2}\log\left(\sqrt{\alpha^{2}+4M^{2}}+2M\right)\right) \tag{34}\] \[\approx 4\pi\left(M^{2}+\frac{1}{8}\alpha^{2}(2\log(4M)+1)\right)\] \[= 4\pi\left(\frac{1}{8}\alpha^{2}\left(\log\left(\frac{A}{\pi} \right)+1\right)+\frac{A}{16\pi}\right).\]
Using the Bohr-Sommerfeld quantization rule \(I=2\pi n\), with \(n=0,1,2,\ldots\), we obtain
\[A_{n}=2\pi\alpha^{2}W\left(\frac{e^{\frac{4n}{\alpha^{2}}-1}}{2\alpha^{2}} \right), \tag{35}\]
where \(W(z)\) is the Lambert W function. The area spectrum is given by
\[\Delta A=A_{n}-A_{n-1}. \tag{36}\]
We observe that the area spectrum of the black hole is significantly different from that of the Schwarzschild black hole, although the above equation reduces to the Schwarzschild case in the limit \(\alpha\to 0\). Equations (35) and (36) show that the area spectrum of the LQG-motivated black hole is quantized, with a quantization rule different from the Schwarzschild one.
| \(\alpha\) | \(\omega_{max}\) | \(P_{max}\) | \(P_{tot}\) | \(\eta\) |
| --- | --- | --- | --- | --- |
| 0.2 | 0.287926 | \(5.36194\times 10^{-7}\) | \(1.13948\times 10^{-7}\) | 115791.41 |
| 0.4 | 0.285118 | \(1.83117\times 10^{-7}\) | \(1.01417\times 10^{-7}\) | 127573.91 |
| 0.6 | 0.280054 | \(1.08259\times 10^{-7}\) | \(8.40331\times 10^{-8}\) | 148544.85 |
| 0.8 | 0.274884 | \(2.25856\times 10^{-7}\) | \(6.53514\times 10^{-8}\) | 184019.17 |
| 1.0 | 0.268825 | \(2.47758\times 10^{-7}\) | \(1.81631\times 10^{-8}\) | 238805.64 |
| 1.2 | 0.262115 | \(1.81044\times 10^{-7}\) | \(3.39774\times 10^{-8}\) | 321821.27 |
| 1.4 | 0.255269 | \(2.28218\times 10^{-7}\) | \(8.31655\times 10^{-8}\) | 447686.27 |
| 1.6 | 0.241694 | \(8.4317\times 10^{-8}\) | \(1.53955\times 10^{-8}\) | 603892.31 |
| 1.8 | 0.234996 | \(8.01674\times 10^{-8}\) | \(1.00469\times 10^{-8}\) | 874806.76 |

Table 5: Values of \(\omega_{max}\), \(P_{max}\), \(P_{tot}\), and \(\eta\) for electromagnetic perturbation for various values of \(\alpha\) for \(\ell=1\) mode.
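Taking Eq. (35) at face value, the spectrum and its spacing (36) can be evaluated with arbitrary-precision arithmetic (mpmath assumed), since the argument of the Lambert W function grows like \(e^{4n/\alpha^{2}}\):

```python
from mpmath import mp, mpf, exp, lambertw, pi

mp.dps = 40
alpha = mpf("0.5")                     # an arbitrary test value of the parameter

def A_n(n):                            # Eq. (35) as printed
    z = exp(4*n/alpha**2 - 1)/(2*alpha**2)
    return 2*pi*alpha**2*lambertw(z)

for n in range(1, 6):
    print(n, A_n(n), A_n(n) - A_n(n - 1))   # area levels and spacing, Eq. (36)
```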
## VIII Conclusions
In this article, we have used the LQG-motivated \(4D\) polymerized black hole to study the shadow, quasinormal modes, greybody bounds, and Hawking sparsity in the background spacetime. Due to the quantum gravity correction, the black hole becomes regular, as confirmed by the finite values of the scalar invariants everywhere. To calculate the shadow radius of the black hole, we first write down the corresponding Lagrangian for the metric and derive the equations of motion, which yield the potential that dictates the motion of a particle. Imposing conditions on the potential and on its first and second derivatives, we obtain analytical expressions for the radius of the photon sphere \(r_{p}\) and the radius of the black hole shadow \(R_{s}\). These expressions clearly show that the quantum correction impacts the photon and shadow radii. To get a qualitative as well as quantitative idea of this impact, we plot \(r_{p}\) and \(R_{s}\) against \(\alpha\) in Fig. (2) and list numerical values of \(r_{p}\) and \(R_{s}\) for different values of the LQG parameter in Table (1). Fig. (2) and Table (1) show that both radii increase as we increase the value of \(\alpha\), with a similar pattern of variation. We then plot shadows of the quantum-corrected black hole for various values of \(\alpha\) in Fig. (3). Fig. (2), Fig. (3), and Table (1) conclusively show that the quantum correction has a significant impact on the shadow of the black hole.
Next, we study the quasinormal modes of the black hole for two types of perturbations, scalar and electromagnetic, using the \(6th\) order WKB method. We plot the effective potential in Fig. (4) with respect to the tortoise coordinate \(r_{*}\) and briefly discuss its qualitative behavior. Quantitative values of the quasinormal modes for scalar and electromagnetic perturbations are given in Table (2) and Table (3). In Fig. (6), we plot the real part of the quasinormal frequency against \(\alpha\) for various values of the angular momentum \(\ell\); the variation of the oscillation frequency with respect to \(\alpha\) is small. In Fig. (5), we show the variation of the imaginary part of the quasinormal modes with respect to \(\alpha\) for different \(\ell\). We infer that the oscillation frequency of GWs decreases as we increase the value of \(\alpha\), but increases with the angular momentum. We also observe that the damping rate decreases with \(\alpha\), but increases with \(\ell\). In Fig. (7), we compare the real and imaginary parts of the quasinormal modes for both perturbations; the oscillation frequency as well as the damping rate is larger for scalar perturbation than for electromagnetic perturbation. In Fig. (8), we show the convergence of the WKB method for various \((n,\ell)\) pairs: when \(n>\ell\), the quasinormal frequency fluctuates even at higher orders. In the next section, we show the ringdown waveform for both perturbations using the time domain integration method in Figs. (9, 10). We observe from Fig. (9) that the frequency, as well as the decay rate, decreases as we increase the parameter \(\alpha\), and we conclude from Fig. (10) that the frequency as well as the decay rate increases as we increase \(\ell\). These conclusions are consistent with those drawn from Tables (2, 3).
Then, we calculate Hawking temperature and greybody bounds for the quantum-corrected black hole. We observe that the Hawking temperature decreases with an increase in \(\alpha\). We calculate analytical expressions of greybody bounds. It shows that greybody bounds for electromagnetic perturbations do not depend on \(\alpha\). Fig. (13) shows the dependence of greybody bounds for scalar perturbation on \(\alpha\). We observe that though the probability of detecting Hawking radiation at spatial infinity increases with the LQG parameter, the variation of probability is very small with respect to \(\alpha\). In Fig. (12), we have plotted greybody factors for both perturbations with different angular momentum. We infer that the transmission probability of Hawking radiation decreases with \(\ell\).
Next, we study the power spectrum and the sparsity of Hawking radiation. Our study finds that the maximum value of the power spectrum decreases, and the frequency at which the power spectrum is maximum shifts towards the left, as we increase the value of \(\alpha\). It is also observed that the total power emitted decreases as we increase the value of the LQG parameter. We then study the sparsity of Hawking radiation using the dimensionless quantity \(\eta\). Quantitative values of \(\eta\) are given in Table (4) and Table (5). Our study shows that the radiation becomes more sparse, i.e., the time gap between successive radiation quanta becomes larger, as we increase the LQG parameter. At the same time, we observe that Hawking radiation for electromagnetic perturbation is more sparse than for scalar perturbation. Finally, with the help of the Bohr-Sommerfeld quantization rule, we obtain the area spectrum of the quantum-corrected black hole; the LQG parameter \(\alpha\) significantly impacts the area spectrum. We hope that in the future, we will have sufficient experimental results to help decide the fate of quantum gravity.
**Declaration of competing interests**
The author declares that the work is not influenced by any competing financial interest or personal relationship.
|
2308.06304 | Electrostatic models for zeros of Laguerre-Sobolev polynomials | Let $\{S_n\}_{n\geqslant 0}$ be the sequence of orthogonal polynomials with respect to the Laguerre-Sobolev inner product $$\langle f,g\rangle_S=\int_{0}^{+\infty} f(x)g(x)x^{\alpha}e^{-x}dx+\sum_{j=1}^{N}\sum_{k=0}^{d_j}\lambda_{j,k}f^{(k)}(c_j)g^{(k)}(c_j),$$ where $\lambda_{j,k}\geqslant 0$, $\alpha>-1$ and $c_i \in (-\infty, 0)$ for $i=1,2,\dots,N$. We provide a formula that relates the Laguerre-Sobolev polynomials $S_n$ to the standard Laguerre orthogonal polynomials. We find the ladder operators for the polynomial sequence $\{S_n\}_{n\geqslant 0}$ and a second-order differential equation with polynomial coefficients for $\{S_n\}_{n\geqslant 0}$. We establish a sufficient condition for an electrostatic model of the zeros of orthogonal Laguerre-Sobolev polynomials. Some examples are given where this condition is either satisfied or not. | Abel Díaz-González, Héctor Pijeira-Cabrera, Javier Quintero-Roba | 2023-08-11T15:02:19Z | http://arxiv.org/abs/2308.06304v1
###### Abstract
Let \(\{S_{n}\}_{n\geqslant 0}\) be the sequence of orthogonal polynomials with respect to the Laguerre-Sobolev inner product
\[\langle f,g\rangle_{S}=\!\int_{0}^{+\infty}\!f(x)g(x)x^{\alpha}e^{-x}dx+\sum_{ j=1}^{N}\sum_{k=0}^{d_{j}}\lambda_{j,k}f^{(k)}(c_{j})g^{(k)}(c_{j}),\]
where \(\lambda_{j,k}\geqslant 0\), \(\alpha>-1\) and \(c_{i}\in(-\infty,0)\) for \(i=1,2,\ldots,N\). We provide a formula that relates the Laguerre-Sobolev polynomials \(S_{n}\) to the standard Laguerre orthogonal polynomials. We find the ladder operators for the polynomial sequence \(\{S_{n}\}_{n\geqslant 0}\) and a second-order differential equation with polynomial coefficients for \(\{S_{n}\}_{n\geqslant 0}\). We establish a sufficient condition for an electrostatic model of the zeros of orthogonal Laguerre-Sobolev polynomials. Some examples are given where this condition is either satisfied or not.
**Mathematics Subject Classification.** 30C15 \(\cdot\) 42C05 \(\cdot\) 33C45 \(\cdot\) 33C47 \(\cdot\) 82B23
**Keywords.** Laguerre polynomials \(\cdot\) Sobolev orthogonality \(\cdot\) Second-order differential equation \(\cdot\) Electrostatic model.
## 1 Introduction
The monic Laguerre polynomials \(\{L_{n}^{\alpha}\}_{n\geqslant 0}\), \(\alpha>-1\) (see [19, Ch. 5]), form a family of classical orthogonal polynomials defined by the orthogonality relations
\[\langle L_{n}^{\alpha},x^{k}\rangle_{\alpha}:=\int_{0}^{+\infty}L_{n}^{\alpha} (x)x^{k}d\mu^{\alpha}(x)=0,\quad\text{ for }k=0,1,\ldots,n-1,\]
where \(d\mu^{\alpha}(x)=x^{\alpha}e^{-x}dx\), \(\alpha>-1\). For the reader's convenience, we review some relevant notions and properties without proofs, which makes our exposition self-contained. From [19, (5.1.8), (5.1.6), (5.1.1), (5.1.7), (5.1.14) and (5.1.10)], we have
\[L_{n}^{\alpha}(x) =(-1)^{n}n!\sum_{k=0}^{n}\binom{n+\alpha}{n-k}\frac{(-x)^{k}}{k!},\] \[(x-\beta_{n})L_{n}^{\alpha}(x) =L_{n+1}^{\alpha}(x)+\gamma_{n}L_{n-1}^{\alpha}(x),\quad n\geq 1,\] \[h_{n}^{\alpha} =\|L_{n}^{\alpha}\|_{\alpha}^{2}=\int_{0}^{+\infty}\left(L_{n}^{ \alpha}(x)\right)^{2}d\mu^{\alpha}(x)=n!\Gamma(n+\alpha+1), \tag{1}\]
where \(L_{0}^{\alpha}(x)=1\), \(L_{1}^{\alpha}(x)=x-(\alpha+1)\), \(\beta_{n}=2n+\alpha+1\), \(\gamma_{n}=n\,(n+\alpha)\), and \(\Gamma\) denotes the Gamma function
\[\Gamma(z)=\int_{0}^{\infty}t^{z-1}e^{-t}dt,\quad\Re(z)>0.\]
Let \(\mathcal{I}\) be the identity operator, and define the two ladder Laguerre differential operators on the linear space of polynomials as
\[\begin{split}\widetilde{\mathcal{L}}_{n}^{\downarrow}&:=\frac{x}{\gamma_{n}}\,\frac{d}{dx}-\frac{n}{\gamma_{n}}\,\mathcal{I}\quad\text{(Lowering Laguerre differential operator)},\\ \widetilde{\mathcal{L}}_{n}^{\uparrow}&:=-x\,\frac{d}{dx}+(x-n-\alpha)\,\mathcal{I}\quad\text{(Raising Laguerre differential operator)}.\end{split} \tag{2}\]
From [19, (5.1.14)], for all \(n\geq 1\), we have
\[\widetilde{\mathcal{L}}_{n}^{\downarrow}\left[L_{n}^{\alpha}(x)\right]=L_{n-1 }^{\alpha}(x)\quad\text{and}\quad\widetilde{\mathcal{L}}_{n}^{\uparrow}\left[ L_{n-1}^{\alpha}(x)\right]=L_{n}^{\alpha}(x). \tag{3}\]
Classical orthogonal polynomials (Jacobi, Laguerre, and Hermite) satisfy a second order differential equation with polynomial coefficients. Based on this fact, Stieltjes gave a very interesting interpretation of the zeros of the classical orthogonal polynomials, as a solution to an electrostatic equilibrium problem of \(n\) movable unit charges in the presence of a logarithmic potential (see [20, Sec. 3] and [21, Sec. 2]). This is one of the main reasons for the importance of such polynomials for applications to boundary value problems in classical physics and quantum mechanics.
In the Laguerre case, \(Y=L_{n}^{\alpha}(x)\) is a solution of the differential equation
\[xY^{\prime\prime}+(\alpha+1-x)Y^{\prime}+nY=0\qquad\text{[1, Ch. V, (2.20)]}. \tag{4}\]
We will first analyze the Stieltjes electrostatic interpretation for the zeros of the Laguerre polynomials. Let us consider a system of \(n\) unit charges distributed at points \(\omega_{1},\omega_{2},\ldots,\omega_{n}\) in \((0,+\infty)\) and add one fixed positive charge of mass \((\alpha+1)/2\) at \(0\). In addition, we also consider the following potential, which describes the interaction between the charges in the presence of an external field:
\[\mathbf{E}\left(\omega_{1},\omega_{2},\ldots,\omega_{n}\right)=\sum_{1\leq i<j \leq n}\log\frac{1}{\left|\omega_{i}-\omega_{j}\right|}+\frac{\alpha+1}{2} \sum_{j=1}^{n}\log\frac{1}{\omega_{j}}+\frac{1}{2}\sum_{j=1}^{n}\omega_{j}. \tag{5}\]
It is obvious that the minimum of (5) gives the electrostatic equilibrium of the system. The points \(x_{1},\ldots,x_{n}\) where this minimum is achieved are the places where the charges settle down. Therefore, all the \(x_{j}\) are different from each other, as well as from \(0\) and \(+\infty\).
From the necessary condition for a minimum, \(\frac{\partial\mathbf{E}}{\partial\omega_{j}}=0\) (\(1\leqslant j\leqslant n\)), it follows that the polynomial \(Y=P_{n}(x)=\prod_{j=1}^{n}(x-x_{j})\) satisfies the differential equation (4), which is the differential equation for the monic Laguerre polynomial, and hence \(P_{n}(x)=L_{n}^{\alpha}(x)\).
On the other hand, the Hessian matrix of this energy function at \(\overline{x}=(x_{1},x_{2},\ldots,x_{n})\) is given by
\[\nabla_{\overline{\omega}\overline{\omega}}^{2}\mathbf{E}( \overline{x})=\begin{cases}\frac{\partial^{2}\mathbf{E}}{\partial\omega_{k} \partial\omega_{j}}(\overline{x})=-\frac{1}{(x_{k}-x_{j})^{2}},&\text{if} \;\;k\neq j,\\ \frac{\partial^{2}\mathbf{E}}{\partial\omega_{k}^{2}}(\overline{x})=\sum_{ \genfrac{}{}{0.0pt}{}{i=1}{i\neq k}}^{n}\frac{1}{(x_{k}-x_{i})^{2}}+\frac{ \alpha+1}{2x_{k}^{2}},&\text{if}\;\;k=j.\end{cases} \tag{6}\]
Since (6) is a symmetric real matrix, its eigenvalues are real. Therefore, using Gershgorin's Theorem [7, Th. 6.1.1], the eigenvalues \(\lambda\) of the Hessian at \(\overline{x}\) satisfy
\[\left|\lambda-\sum_{\genfrac{}{}{0.0pt}{}{i=1}{i\neq k}}^{n}\frac{1}{(x_{k}-x_ {i})^{2}}-\frac{\alpha+1}{2x_{k}^{2}}\right|\leq\sum_{\genfrac{}{}{0.0pt}{}{ i=1}{i\neq k}}^{n}\frac{1}{(x_{k}-x_{i})^{2}},\]
for some \(k=1,2,\ldots,n\). Then, we have \(\lambda\geq\frac{\alpha+1}{2x_{k}^{2}}>0\).
Consequently, (6) is positive definite, which implies that (5) is a strictly convex function. Since \(\nabla\mathbf{E}(\overline{x})=0\), we conclude that \(\overline{x}\) is the only global minimum of (5). The methods used in this proof can be found in [9, SS2.3], [12] or [14]. In conclusion, the global minimum of (5) is reached when each of the \(n\) charges is located on a zero of the \(n\)th Laguerre polynomial \(L_{n}^{\alpha}(x)\).
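The equilibrium can also be located numerically: since (5) is strictly convex on the admissible region, any reasonable descent method started from positive, pairwise distinct charges converges to the unique minimizer. A sketch for \(\alpha=0\) (so the result can be compared against the Laguerre roots computed by numpy); the starting configuration and optimizer options are arbitrary choices:

```python
import numpy as np
from numpy.polynomial import laguerre
from scipy.optimize import minimize

n, alpha = 5, 0.0            # alpha = 0 so numpy's Laguerre roots are comparable

def E(w):
    # energy (5); +inf outside the admissible region keeps the search in (0, inf)
    if np.any(w <= 0):
        return np.inf
    pair = sum(-np.log(abs(w[i] - w[j])) for i in range(n) for j in range(i))
    if not np.isfinite(pair):
        return np.inf
    return pair - (alpha + 1)/2*np.sum(np.log(w)) + np.sum(w)/2

x0 = np.arange(1.0, n + 1.0)                  # distinct positive starting charges
res = minimize(E, x0, method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 50000})
print(np.sort(res.x))                         # equilibrium positions
print(laguerre.lagroots([0]*n + [1]))         # zeros of L_5, for comparison
```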
In [12] you will find a survey of the results achieved up to fifteen years ago, on the electrostatic interpretation of the zeros of some well-known families of polynomials. For more recent results, we refer the reader to the introductions of [6, 8, 16].
Let \(N,d_{j}\in\mathbb{Z}_{+}\), \(\lambda_{jk}\geqslant 0\), for \(j=1,\ldots,N\), and \(k=0,1,\ldots,d_{j}\), and let the set \(\{c_{1},\ldots,c_{N}\}\subset\mathds{R}\setminus[0,\infty)\), where \(c_{i}\neq c_{j}\) if \(i\neq j\) and \(I_{+}=\{(j,k):\lambda_{jk}>0\}\). Denote by \(\mathds{P}\) the linear space of all polynomials with real coefficients. On \(\mathds{P}\), we consider the following Sobolev-type inner product, which we will call _Laguerre-Sobolev inner product_
\[\langle f,g\rangle_{\mathbf{s}} =\langle f,g\rangle_{\alpha}+\sum_{j=1}^{N}\sum_{k=0}^{d_{j}} \lambda_{jk}f^{(k)}(c_{j})g^{(k)}(c_{j})\] \[=\!\int_{0}^{+\infty}\!\!f(x)g(x)d\mu_{\alpha}(x)+\!\!\!\!\!\!\!\! \sum_{(j,k)\in I_{+}}\!\!\!\!\!\!\!\!\lambda_{jk}f^{(k)}(c_{j})g^{(k)}(c_{j}), \tag{7}\]
where \(f^{(k)}\) denotes the \(k\)th derivative of the polynomial \(f\). Without loss of generality, we also assume \(\{(j,d_{j})\}_{j=1}^{N}\subset I_{+}\) and \(d_{1}\leqslant d_{2}\leqslant\cdots\leqslant d_{N}\). For \(n\in\mathbb{Z}_{+}\) we shall denote by \(S_{n}\) the monic polynomial of lowest degree satisfying
\[\langle S_{n},x^{k}\rangle_{\mathfrak{s}}=0,\quad\text{for}\ \ k=0,1,\ldots,n-1. \tag{8}\]
It is easy to see that for all \(n\geqslant 0\), there exists such a unique polynomial \(S_{n}\) of degree \(n\). In fact, it is deduced by solving a homogeneous linear system with \(n\) equations and \(n+1\) unknowns. Uniqueness follows from the minimality of the degree for the polynomial solution. The polynomial \(S_{n}\) is called the \(n\)th monic orthogonal polynomial with respect to the Sobolev-type inner product (7), or, for brevity, the _\(n\)th monic Laguerre-Sobolev orthogonal polynomial_.
It is known that, in general, the properties of classical Laguerre polynomials differ from those of Laguerre-Sobolev polynomials. In particular, unlike Laguerre polynomials, the zeros of Laguerre-Sobolev polynomials can be complex or, if they are real, they can lie outside \([0,\infty)\). For example, given the inner product
\[\langle f,g\rangle_{\mathfrak{s}}=\int_{0}^{+\infty}f(x)g(x)e^{-x}dx+f^{\prime }(-2)g^{\prime}(-2),\]
the corresponding second-degree monic Laguerre-Sobolev polynomial is \(S_{2}(z)=z^{2}-2\), whose zeros are \(\pm\sqrt{2}\). Note that \(-\sqrt{2}\notin[0,+\infty)\).
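This example is easy to confirm; a short sympy check that \(S_{2}(z)=z^{2}-2\) satisfies both orthogonality conditions (8) for the inner product above:

```python
import sympy as sp

x = sp.symbols("x")
S2 = x**2 - 2

def ip(f, g):
    # the example inner product: weight e^{-x} on (0, oo) plus f'(-2) g'(-2)
    std = sp.integrate(f*g*sp.exp(-x), (x, 0, sp.oo))
    disc = sp.diff(f, x).subs(x, -2)*sp.diff(g, x).subs(x, -2)
    return sp.simplify(std + disc)

print(ip(S2, sp.Integer(1)), ip(S2, x))    # 0 0, so S_2 satisfies (8)
print(sp.solve(S2, x))                     # [-sqrt(2), sqrt(2)]
```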
The aim of this paper is to provide an electrostatic model for the zeros of the Laguerre-Sobolev polynomials, following an approach based on the works [8, 16], the main result of M.E.H. Ismail in [9, 10], and the original ideas of Stieltjes in [17, 18]. Our results extend those achieved in [5, 8, 13] to a more general context and complement those obtained in [16] for the Jacobi-Sobolev case.
In the next section, we review a connection formula that allows us to express the polynomial \(S_{n}\), as a linear combination of the Laguerre polynomials, whose coefficients are rational functions. Sections 3 and 4 deal with the extension of the ladder (raising and lowering) differential operators (2), and the second-order differential equation with polynomial coefficients (4), for the Laguerre-Sobolev polynomials.
In the last section, we give sufficient conditions for an electrostatic model for the distribution of the zeros of \(\{S_{n}\}_{n\geqslant 0}\). Such model is expressed as the logarithmic potential interaction of unit positive charges in the presence of an external field. Some examples are given where this condition is either satisfied or not.
## 2 Auxiliary results
In this section, for the reader's convenience, we repeat some results from [3, (21)-(22)] and [16, Sec. 2] without proofs, which makes our exposition self-contained.
We first recall the well-known Christoffel-Darboux formula for \(K_{n}(x,y)\), the kernel polynomials associated with \(\{L_{n}^{\alpha}\}_{n\geq 0}\) (see [19, (5.1.8) and (5.1.11)]).
\[K_{n-1}(x,y)=\sum_{k=0}^{n-1}\frac{L_{k}^{\alpha}(x)L_{k}^{\alpha}(y)}{h_{k}^{\alpha}}=\begin{cases}\dfrac{L_{n}^{\alpha}(x)L_{n-1}^{\alpha}(y)-L_{n}^{\alpha}(y)L_{n-1}^{\alpha}(x)}{h_{n-1}^{\alpha}\left(x-y\right)},&\text{if }x\neq y;\\[2ex] \dfrac{\left(L_{n}^{\alpha}(x)\right)^{\prime}L_{n-1}^{\alpha}(x)-L_{n}^{\alpha}(x)\left(L_{n-1}^{\alpha}(x)\right)^{\prime}}{h_{n-1}^{\alpha}},&\text{if }x=y.\end{cases} \tag{9}\]
We will denote by \(K_{n}^{(j,k)}\left(x,y\right)=\frac{\partial^{j+k}K_{n}\left(x,y\right)}{ \partial x^{j}\partial y^{k}}\) the partial derivatives of (9). Then, from the Christoffel-Darboux formula and Leibniz's rule, it is not difficult to verify that
\[K_{n-1}^{(0,k)}(x,y)= \sum_{i=0}^{n-1}\frac{L_{i}^{\alpha}(x)\left(L_{i}^{\alpha}(y) \right)^{(k)}}{h_{i}^{\alpha}}\] \[= \frac{k!\left(T_{k}(x,y;L_{n-1}^{\alpha})L_{n}^{\alpha}(x)-T_{k}( x,y;L_{n}^{\alpha})L_{n-1}^{\alpha}(x)\right)}{h_{n-1}^{\alpha}\left(x-y\right)^{k+1}}, \tag{10}\]
where \(T_{k}(x,y;f)=\sum_{v=0}^{k}\frac{f^{(v)}(y)}{v!}(x-y)^{v}\) is the Taylor polynomial of degree \(k\) of \(f\) centered at \(y\). Note that (9) is a particular case of (10) when \(k=0\).
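Formula (10) can be spot-checked symbolically for particular values of \(n\), \(k\), and \(\alpha\); in the following sketch the values \(n=4\), \(k=2\), \(\alpha=3/10\) and the evaluation points are arbitrary test choices:

```python
import sympy as sp

x, y = sp.symbols("x y")
a = sp.Rational(3, 10)                       # alpha = 3/10

def L(n, t):                                 # monic Laguerre in variable t, via (1)
    p0, p1 = sp.Integer(1), t - (a + 1)
    if n == 0:
        return p0
    for m in range(1, n):
        p0, p1 = p1, sp.expand((t - (2*m + a + 1))*p1 - m*(m + a)*p0)
    return p1

h = lambda i: sp.factorial(i)*sp.gamma(i + a + 1)    # h_i^alpha from (1)

n, k = 4, 2
lhs = sum(L(i, x)*sp.diff(L(i, y), y, k)/h(i) for i in range(n))
T = lambda f: sum(sp.diff(f, y, v)/sp.factorial(v)*(x - y)**v for v in range(k + 1))
rhs = sp.factorial(k)*(T(L(n - 1, y))*L(n, x) - T(L(n, y))*L(n - 1, x)) \
      / (h(n - 1)*(x - y)**(k + 1))
d = (lhs - rhs).subs({x: sp.Rational(7, 3), y: sp.Rational(-5, 2)})
print(sp.N(d, 30))                           # ~ 0, confirming (10) for this case
```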
From (7), if \(i<n\)
\[\langle S_{n},L_{i}^{\alpha}\rangle_{\alpha}=\langle S_{n},L_{i}^{\alpha} \rangle_{\text{s}}-\sum_{(j,k)\in I_{+}}\lambda_{j,k}S_{n}^{\left(k\right)}(c_ {j})\left(L_{i}^{\alpha}\right)^{(k)}(c_{j})=-\sum_{(j,k)\in I_{+}}\lambda_{j, k}S_{n}^{\left(k\right)}(c_{j})\left(L_{i}^{\alpha}\right)^{(k)}(c_{j}). \tag{11}\]
Therefore, from the Fourier expansion of \(S_{n}\) in terms of the basis \(\{L_{n}^{\alpha}\}_{n\geq 0}\) and using (11), we get
\[S_{n}(x) =L_{n}^{\alpha}(x)+\sum_{i=0}^{n-1}\langle S_{n},L_{i}^{\alpha} \rangle_{\alpha}\frac{L_{i}^{\alpha}(x)}{h_{i}^{\alpha}}=L_{n}^{\alpha}(x)- \sum_{(j,k)\in I_{+}}\lambda_{j,k}S_{n}^{\left(k\right)}(c_{j})\sum_{i=0}^{n-1 }\frac{L_{i}^{\alpha}(x)\left(L_{i}^{\alpha}\right)^{(k)}(c_{j})}{h_{i}^{ \alpha}}\] \[=L_{n}^{\alpha}(x)-\sum_{(j,k)\in I_{+}}\lambda_{j,k}S_{n}^{\left( k\right)}(c_{j})K_{n-1}^{\left(0,k\right)}(x,c_{j}). \tag{12}\]
Now, replacing (10) in (12), we have the _connection formula_
\[S_{n}(x) =F_{1,n}(x)L_{n}^{\alpha}(x)+G_{1,n}(x)L_{n-1}^{\alpha}(x), \tag{13}\] \[\text{where}\quad F_{1,n}(x) =1-\sum_{(j,k)\in I_{+}}\frac{\lambda_{j,k}k!\,S_{n}^{\left(k \right)}(c_{j})}{h_{n-1}^{\alpha}}\frac{T_{k}(x,c_{j};L_{n-1}^{\alpha})}{(x-c_ {j})^{k+1}}\] \[\text{and}\quad G_{1,n}(x) =\sum_{(j,k)\in I_{+}}\frac{\lambda_{j,k}k!\,S_{n}^{\left(k \right)}(c_{j})}{h_{n-1}^{\alpha}}\frac{T_{k}(x,c_{j};L_{n}^{\alpha})}{(x-c_{j} )^{k+1}}.\]
Differentiating equation (12) \(\ell\) times and evaluating at \(x=c_{i}\) for each ordered pair \((i,\ell)\in I_{+}\), we obtain the following system of \(d^{*}=\#(I_{+})\) linear equations in the \(d^{*}\) unknowns \(S_{n}^{(k)}(c_{j})\), where the symbol \(\#(A)\) denotes the cardinality of a given set \(A\):
\[\big{(}L_{n}^{\alpha}(c_{i})\big{)}^{(\ell)}=\Big{(}1+\lambda_{i,\ell}K_{n-1}^{(\ell,\ell)}(c_{i},c_{i})\Big{)}\,S_{n}^{(\ell)}(c_{i})+\sum_{\begin{subarray}{c}(j,k)\in I_{+}\\ (j,k)\neq(i,\ell)\end{subarray}}\lambda_{j,k}\,K_{n-1}^{(\ell,k)}(c_{i},c_{j})\,S_{n}^{(k)}(c_{j}).\]
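For the example of Section 1 (\(\alpha=0\) and a single term \(\lambda f^{\prime}(-2)g^{\prime}(-2)\) with \(\lambda=1\)), the system above is \(1\times 1\), and the whole pipeline (solve for \(S_{n}^{(k)}(c_{j})\), then assemble \(S_{n}\) through (12)) fits in a few lines. A numpy sketch that reproduces \(S_{2}(x)=x^{2}-2\):

```python
import numpy as np
from numpy.polynomial import polynomial as P
from math import factorial, gamma

alpha = 0.0                       # Laguerre weight e^{-x}
c, lam, k, n = -2.0, 1.0, 1, 2    # single term lambda * f'(-2) g'(-2)

# monic Laguerre coefficient arrays from the recurrence (1)
Ls = [np.array([1.0]), np.array([-(alpha + 1.0), 1.0])]
for m in range(1, n):
    beta, gam = 2*m + alpha + 1, m*(m + alpha)
    Ls.append(P.polysub(P.polymul([-beta, 1.0], Ls[m]), gam*Ls[m - 1]))
h = [factorial(i)*gamma(i + alpha + 1) for i in range(n)]    # h_i^alpha

def dval(coef, x0, order=0):
    return P.polyval(x0, P.polyder(coef, order)) if order else P.polyval(x0, coef)

# 1x1 instance of the linear system above: solve for S_n^{(k)}(c)
K_kk = sum(dval(Ls[i], c, k)**2/h[i] for i in range(n))      # K_{n-1}^{(k,k)}(c,c)
Snk = dval(Ls[n], c, k)/(1 + lam*K_kk)

# assemble S_n by (12): S_n = L_n - lam * S_n^{(k)}(c) * K_{n-1}^{(0,k)}(x, c)
Kx = np.zeros(1)
for i in range(n):
    Kx = P.polyadd(Kx, Ls[i]*dval(Ls[i], c, k)/h[i])
S = P.polysub(Ls[n], lam*Snk*Kx)
print(S)    # [-2.  0.  1.], i.e. S_2(x) = x^2 - 2, as in the example above
```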
## 3 Differential operators
Let
\[\rho(x) :=\prod_{j=1}^{N}\left(x-c_{j}\right)^{d_{j}+1}\quad\text{ and } \tag{16}\] \[\rho_{jk}(x) :=\frac{\rho(x)}{(x-c_{j})^{k+1}}=(x-c_{j})^{d_{j}-k}\prod_{ \begin{subarray}{c}i=1\\ i\neq j\end{subarray}}^{N}(x-c_{i})^{d_{i}+1},\quad\text{for every }(j,k)\in I_{+}. \tag{17}\]
If \(p\) is a polynomial of degree \(n\) (i.e., \(p(x)=a_{n}x^{n}+\cdots+a_{0}\)), we denote by \(\text{DgCo}(p)\) the ordered pair whose first component is the degree of \(p\) and second component is its leading coefficient, i.e.,
\[\text{DgCo}(p)=\left(n,\underset{n}{Coef}(p)\right)\]
where \(\underset{k}{Coef}(p)=a_{k}\) is the coefficient of \(x^{k}\) in \(p\).
**Lemma 3.1**.: _The sequences of polynomials \(\{S_{n}\}_{n\geq 0}\) and \(\{L_{n}^{\alpha}\}_{n\geq 0}\), satisfy_
\[\rho(x)S_{n}(x) = F_{2,n}(x)\;L_{n}^{\alpha}(x)+G_{2,n}(x)\;L_{n-1}^{\alpha}(x), \tag{18}\] \[x\left(\rho(x)S_{n}(x)\right)^{\prime} = F_{3,n}(x)L_{n}^{\alpha}(x)+G_{3,n}(x)L_{n-1}^{\alpha}(x), \tag{19}\]
_where_
\[F_{2,n}(x)= \rho(x)F_{1,n}(x)=\rho(x)-\sum_{(j,k)\in I_{+}}\left(\frac{k! \lambda_{jk}S_{n}^{(k)}(c_{j})}{h_{n-1}^{\alpha}}\,T_{k}\left(x,c_{j};L_{n-1}^{ \alpha}\right)\right)\rho_{jk}(x),\] \[G_{2,n}(x)= \rho(x)G_{1,n}(x)=\sum_{(jk)\in I_{+}}\left(\frac{k!\lambda_{jk} S_{n}^{(k)}(c_{j})}{h_{n-1}^{\alpha}}\,T_{k}\left(x,c_{j};L_{n}^{\alpha} \right)\right)\rho_{jk}(x),\] \[F_{3,n}(x)= xF_{2,n}^{\prime}(x)+nF_{2,n}(x)-G_{2,n}(x),\] \[G_{3,n}(x)= xG_{2,n}^{\prime}(x)+\gamma_{n}F_{2,n}(x)-(n+\alpha-x)G_{2,n}(x),\]
_where \(F_{2,n}\), \(G_{2,n}\), \(F_{3,n}\), and \(G_{3,n}\) are polynomials with the following degrees and leading coefficients_
\[\text{DgCo}(F_{2,n}) =(d,1),\] \[\text{DgCo}(G_{2,n}) =(d-1,\sigma_{n}),\] \[\text{DgCo}(F_{3,n}) =(d,d+n),\] \[\text{DgCo}(G_{3,n}) =(d,\gamma_{n}+\sigma_{n});\]
_and_
\[\sigma_{n}=\frac{1}{h_{n-1}^{\alpha}}\sum_{(j,k)\in I_{+}}\lambda_{jk}S_{n}^{ (k)}(c_{j})\left(L_{n}^{\alpha}\right)^{(k)}(c_{j})>0.\]
Proof.: From (13)-(17), equation (18) is immediate. To prove (19), we can take derivatives with respect to \(x\) at both sides of (18) and then multiply by \(x\)
\[x\left(\rho(x)S_{n}(x)\right)^{\prime}= xF_{2,n}^{\prime}L_{n}^{\alpha}(x)+xF_{2,n}\left(L_{n}^{\alpha}(x) \right)^{\prime}\] \[+xG_{2,n}^{\prime}L_{n-1}^{\alpha}(x)+xG_{2,n}\left(L_{n-1}^{ \alpha}(x)\right)^{\prime}.\]
Using (3), we obtain (19) as follows:
\[x\left(\rho(x)S_{n}(x)\right)^{\prime}= \left[xF_{2,n}^{\prime}(x)+nF_{2,n}(x)-G_{2,n}(x)\right]L_{n}^{ \alpha}(x)\] \[+\left[xG_{2,n}^{\prime}(x)+\gamma_{n}F_{2,n}(x)-(n+\alpha-x)G_{ 2,n}(x)\right]L_{n-1}^{\alpha}(x).\]
From the expressions for \(F_{2,n}\), we directly get that \(F_{2,n}\) is monic with degree \(d\). It is also straightforward that \(deg(G_{2,n})\leq d-1\). Let us continue by finding the coefficient of \(G_{2,n}\) of the power \(x^{d-1}\)
\[G_{2,n}(x) =\sum_{(j,k)\in I_{+}}\left(\frac{k!\lambda_{j,k}S_{n}^{(k)}(c_{j})}{h_{n-1}^{\alpha}}\,T_{k}\left(x,c_{j};L_{n}^{\alpha}\right)\right)\rho_{j,k}(x)=\sum_{(j,k)\in I_{+}}\left(\frac{k!\lambda_{j,k}S_{n}^{(k)}(c_{j})}{h_{n-1}^{\alpha}}\,\sum_{v=0}^{k}\frac{\left(L_{n}^{\alpha}\right)^{(v)}(c_{j})}{v!}(x-c_{j})^{v}\right)\rho_{j,k}(x).\]

Since \(\deg(\rho_{j,k}(x))=d-1-k\), the coefficient of \(G_{2,n}\) of the power \(x^{d-1}\) is given by

\[\sum_{(j,k)\in I_{+}}\left(\frac{k!\lambda_{j,k}S_{n}^{(k)}(c_{j})}{h_{n-1}^{\alpha}}\,\frac{\left(L_{n}^{\alpha}\right)^{(k)}(c_{j})}{k!}\right)=\frac{1}{h_{n-1}^{\alpha}}\sum_{(j,k)\in I_{+}}\lambda_{j,k}S_{n}^{(k)}(c_{j})\left(L_{n}^{\alpha}\right)^{(k)}(c_{j})=\sigma_{n}.\]
Notice that \(\sigma_{n}\) is positive. Otherwise, we get

\[0 =\langle S_{n},S_{n}-L_{n}^{\alpha}\rangle_{\mathbf{s}}=\langle S_{n},S_{n}-L_{n}^{\alpha}\rangle_{\alpha}+\sum_{(j,k)\in I_{+}}\lambda_{j,k}S_{n}^{(k)}(c_{j})\left(S_{n}-L_{n}^{\alpha}\right)^{(k)}(c_{j})\] \[=\|S_{n}\|_{\alpha}^{2}-\|L_{n}^{\alpha}\|_{\alpha}^{2}+\sum_{(j,k)\in I_{+}}\lambda_{j,k}\left(S_{n}^{(k)}(c_{j})\right)^{2}-h_{n-1}^{\alpha}\sigma_{n}\geq\|S_{n}\|_{\alpha}^{2}-\|L_{n}^{\alpha}\|_{\alpha}^{2},\]

which contradicts the minimality of the norm of the Laguerre polynomials. Then, \(G_{2,n}\) has degree \(d-1\) and positive leading coefficient \(\sigma_{n}\). Finally, the degrees and leading coefficients of \(F_{3,n}\) and \(G_{3,n}\) follow directly from those of \(F_{2,n}\) and \(G_{2,n}\).
**Lemma 3.2**.: _The sequences of monic polynomials \(\{S_{n}\}_{n\geq 0}\) and \(\{L_{n}^{\alpha}\}_{n\geq 0}\) are also related by the equations_
\[\rho(x)S_{n-1}(x) =V_{2,n}(x)L_{n}^{\alpha}(x)+W_{2,n}(x)L_{n-1}^{\alpha}(x), \tag{20}\] \[x\left(\rho(x)S_{n-1}(x)\right)^{\prime} =V_{3,n}(x)L_{n}^{\alpha}(x)+W_{3,n}(x)L_{n-1}^{\alpha}(x), \tag{21}\]
_where_
\[V_{2,n}(x) =-\frac{G_{2,n-1}(x)}{\gamma_{n-1}},\qquad W_{2,n}(x) =F_{2,n-1}(x)+G_{2,n-1}(x)\left(\frac{x-\beta_{n-1}}{\gamma_{n-1}} \right),\] \[V_{3,n}(x) =-\frac{G_{3,n-1}(x)}{\gamma_{n-1}},\qquad W_{3,n}(x) =F_{3,n-1}(x)+G_{3,n-1}(x)\left(\frac{x-\beta_{n-1}}{\gamma_{n-1}} \right),\]
_where \(V_{2,n}\), \(W_{2,n}\), \(V_{3,n}\) and \(W_{3,n}\) are polynomials with the following degrees and leading coefficients_
\[\mathrm{DgCo}(V_{2,n}) =(d-1,-\sigma_{n-1}/\gamma_{n-1}),\] \[\mathrm{DgCo}(W_{2,n}) =(d,1+\sigma_{n-1}/\gamma_{n-1}),\] \[\mathrm{DgCo}(V_{3,n}) =(d,-(1+\sigma_{n-1}/\gamma_{n-1})),\] \[\mathrm{DgCo}(W_{3,n}) =(d+1,1+\sigma_{n-1}/\gamma_{n-1}).\]
Proof.: The proof of (20)-(21) is a straightforward consequence of Lemma 3.1 and the three term-recurrence relation (1).
**Lemma 3.3**.: _The monic orthogonal Laguerre polynomials \(\{L_{n}^{\alpha}\}_{n\geqslant 0}\) can be expressed in terms of the monic Sobolev-type polynomials \(\{S_{n}\}_{n\geqslant 0}\) in the following way:_
\[L_{n}^{\alpha}(x) =\frac{\rho(x)}{\Delta_{n}(x)}\left(W_{2,n}(x)S_{n}(x)-G_{2,n}(x) S_{n-1}(x)\right), \tag{22}\] \[L_{n-1}^{\alpha}(x) =\frac{\rho(x)}{\Delta_{n}(x)}\left(F_{2,n}(x)S_{n-1}(x)-V_{2,n}( x)S_{n}(x)\right); \tag{23}\]
_where_
\[\Delta_{n}(x)=\det\begin{pmatrix}F_{2,n}(x)&G_{2,n}(x)\\ V_{2,n}(x)&W_{2,n}(x)\end{pmatrix}=F_{2,n}(x)W_{2,n}(x)-V_{2,n}(x)G_{2,n}(x) \tag{24}\]
_is a polynomial with a degree and leading coefficient_
\[\mathrm{DgCo}(\Delta_{n})=(2d,1+\sigma_{n-1}/\gamma_{n-1}).\]
Proof.: Note that equations (18) and (20) can be seen as a system of two linear equations with two unknowns \(L_{n}^{\alpha}(x)\) and \(L_{n-1}^{\alpha}(x)\). Then, from Cramer's rule, we get (22) and (23).
The degree of \(\Delta_{n}\) can be computed easily from the degrees of \(F_{2,n}\), \(G_{2,n}\), \(V_{2,n}\), and \(W_{2,n}\) given in Lemmas 3.1 and 3.2.
The following ladder equations follow from the last three lemmas.
**Remark 3.1**.: _Since the zeros of \(L_{n}^{\alpha}\) lie on \((0,\infty)\), from (22) (or (23)) we have that \(\rho\) divides \(\Delta_{n}\); this is, there exists a polynomial \(\delta_{n}\) such that \(\Delta_{n}=\rho\delta_{n}\). Hence, from (24) and Lemma 3.1 we obtain_
\[\delta_{n}(x)=\frac{\Delta_{n}(x)}{\rho(x)}=\frac{F_{2,n}(x)W_{2,n}(x)-V_{2,n}( x)G_{2,n}(x)}{\rho(x)}=F_{1,n}(x)W_{2,n}(x)-G_{1,n}(x)V_{2,n}(x).\]
_In addition, from \(\Delta_{n}=\rho\delta_{n}\) we obtain_
\[\mathrm{DgCo}(\delta_{n})=(d,1+\sigma_{n-1}/\gamma_{n-1}).\]
**Theorem 1** (Ladder equations).: _Under the above assumptions, we have the following ladder equations._
\[F_{4,n}(x)S_{n}(x)+G_{4,n}(x)S^{\prime}_{n}(x) =S_{n-1}(x), \tag{25}\] \[V_{4,n}(x)S_{n-1}(x)+W_{4,n}(x)S^{\prime}_{n-1}(x) =S_{n}(x), \tag{26}\]
_where_
\[F_{4,n}(x) =\frac{q_{2,n}(x)}{q_{1,n}(x)},\ G_{4,n}(x)=\frac{q_{0,n}(x)}{q_{1,n}(x)}\ V_{4,n}(x)=\frac{q_{3,n}(x)}{q_{4,n}(x)},\ W_{4,n}(x)=\frac{q_{0,n}(x)} {q_{4,n}(x)};\] \[q_{0,n}(x) =x\Delta_{n}(x),\quad\mathrm{DgCo}(q_{0,n})=\left(2d+1,1+\frac{ \sigma_{n-1}}{\gamma_{n-1}}\right);\] \[q_{1,n}(x) =G_{3,n}(x)F_{2,n}(x)-F_{3,n}(x)G_{2,n}(x),\quad\mathrm{DgCo}(q_{ 1,n})=(2d,\gamma_{n}+\sigma_{n});\] \[q_{2,n}(x) =x\rho^{\prime}(x)\delta_{n}(x)+G_{3,n}(x)V_{2,n}(x)-F_{3,n}(x)W_ {2,n}(x),\quad\mathrm{DgCo}(q_{2,n})=\left(2d,-n\left[1+\frac{\sigma_{n-1}}{ \gamma_{n-1}}\right]\right);\] \[q_{3,n}(x) =x\rho^{\prime}(x)\delta_{n}(x)+V_{3,n}(x)G_{2,n}(x)-W_{3,n}(x)F_ {2,n}(x),\quad\mathrm{DgCo}(q_{3,n})=\left(2d+1,-\left[1+\frac{\sigma_{n-1}}{ \gamma_{n-1}}\right]\right);\] _and_ \[q_{4,n}(x) =V_{3,n}(x)W_{2,n}(x)-W_{3,n}(x)V_{2,n}(x),\quad\mathrm{DgCo}(q_{ 4,n})=\left(2d,-\left[1+\frac{\sigma_{n-1}}{\gamma_{n-1}}\right]\right).\]
Proof.: Replacing (22) and (23) in (19) and (21), the two ladder equations (25) and(26) follow. Notice that
\[F_{4,n}(x) =\frac{x\left(\rho(x)\right)^{-1}\rho^{\prime}(x)\Delta_{n}(x)- \left(F_{3,n}(x)W_{2,n}(x)-G_{3,n}(x)V_{2,n}(x)\right)}{G_{3,n}(x)F_{2,n}(x)- F_{3,n}(x)G_{2,n}(x)},\] \[G_{4,n}(x) =\frac{x\Delta_{n}(x)}{G_{3,n}(x)F_{2,n}(x)-F_{3,n}(x)G_{2,n}(x)},\] \[V_{4,n}(x) =\frac{x\left(\rho(x)\right)^{-1}\rho^{\prime}(x)\Delta_{n}(x)- \left(W_{3,n}(x)F_{2,n}(x)-V_{3,n}(x)G_{2,n}(x)\right)}{V_{3,n}(x)W_{2,n}(x)- W_{3,n}(x)V_{2,n}(x)},\] \[W_{4,n}(x) =\frac{x\Delta_{n}(x)}{V_{3,n}(x)W_{2,n}(x)-W_{3,n}(x)V_{2,n}(x)}.\]
The computations for the degrees and leading coefficients are a direct consequence of the expressions obtained, Remark 3.1, and Lemmas 3.1,3.2,3.3.
In the previous theorem, the polynomials \(q_{k,n}\) have been defined. Note that these polynomials are closely related to certain determinants. The following result summarizes some of its properties that will be of interest later. For brevity, we introduce the following notations
\[\Delta_{1,n}(x) =G_{3,n}(x)F_{2,n}(x)-F_{3,n}(x)G_{2,n}(x);\] \[\Delta_{2,n}(x) =G_{3,n}(x)V_{2,n}(x)-F_{3,n}(x)W_{2,n}(x);\] \[\Delta_{3,n}(x) =G_{2,n}(x)V_{3,n}(x)-F_{2,n}(x)W_{3,n}(x).\]
**Lemma 3.4**.: _Let \(\rho_{N}(x)=\prod_{j=1}^{N}\left(x-c_{j}\right)\) and \(\rho_{d-N}(x)=\prod_{j=1}^{N}\left(x-c_{j}\right)^{d_{j}}=\dfrac{\rho(x)}{\rho_ {N}(x)}\). Then, the above polynomial determinants admit the following decompositions_
\[\begin{split}&\Delta_{1,n}(x)=\rho_{d-N}(x)\ \varphi_{1,n}(x),\\ &\Delta_{2,n}(x)=\rho_{d-N}(x)\ \varphi_{2,n}(x),\\ &\Delta_{3,n}(x)=\rho_{d-N}(x)\ \varphi_{3,n}(x),\end{split}\]
_where_
\[\begin{split}\mathrm{DgCo}(\Delta_{1,n})&=(2d,\gamma _{n}+\sigma_{n}),\\ \mathrm{DgCo}(\Delta_{2,n})&=(2d,-(d+n)(1+\sigma_{n- 1}/\gamma_{n-1})),\\ \mathrm{DgCo}(\Delta_{3,n})&=(2d+1,-(1+\sigma_{n- 1}/\gamma_{n-1})),\\ \mathrm{DgCo}(\varphi_{1,n})&=(d+N,\gamma_{n}+ \sigma_{n}),\\ \mathrm{DgCo}(\varphi_{2,n})&=(d+N,-(d+n)(1+\sigma_ {n-1}/\gamma_{n-1})),\quad\text{and}\\ \mathrm{DgCo}(\varphi_{3,n})&=(d+N+1,-(1+\sigma_{n- 1}/\gamma_{n-1})).\end{split} \tag{27}\]
Proof.: Multiplying (18) by \(G_{3,n}\), (19) by \(G_{2,n}\) and taking their difference, we have
\[\begin{split}\Delta_{1,n}(x)L_{n}^{\alpha}(x)&=\rho (x)G_{3,n}(x)S_{n}(x)-xG_{2,n}(x)\left(\rho^{\prime}(x)S_{n}(x)+\rho(x)S_{n}^{ \prime}(x)\right)\\ &=\rho_{d-N}(x)\Big{(}\rho_{N}(x)[G_{3,n}(x)S_{n}(x)-xG_{2,n}(x)S _{n}^{\prime}(x)]\\ &\quad-xG_{2,n}(x)S_{n}(x)\sum_{j=1}^{N}(d_{j}+1)\ \prod_{i\neq j}(x-c_{i})\,\Big{)}.\end{split}\]
Since the zeros of \(L_{n}^{\alpha}\) lie on \((0,\infty)\), we have that \(\rho_{d-N}\) divides \(\Delta_{1,n}\), i.e., there exists a polynomial \(\varphi_{1,n}\) such that \(\Delta_{1,n}=\rho_{d-N}\varphi_{1,n}\). Using Lemma 3.1, we obtain
\[\begin{split}\mathrm{DgCo}(\Delta_{1,n})&=(2d,\gamma _{n}+\sigma_{n}),\\ \mathrm{DgCo}(\varphi_{1,n})&=(d+N,\gamma_{n}+ \sigma_{n}).\end{split}\]
For the decomposition of \(\Delta_{2,n}\) (\(\Delta_{3,n}\)), the procedure of the proof is analogous, using the linear system of (19)-(20) ((18)-(21)) and Lemmas 3.1, 3.2 for the degrees and leading coefficients.
**Definition 3.1** (Ladder Laguerre-Sobolev differential operators).: _Let \(\mathcal{I}\) be the identity operator. We define the two ladder differential operators on \(\mathds{P}\) as_
\[\begin{split}\mathcal{L}_{n}^{\downarrow}&:=F_{4,n} (x)\mathcal{I}+G_{4,n}(x)\dfrac{d}{dx}\quad\text{(Lowering Laguerre-Sobolev differential operator)},\\ \mathcal{L}_{n}^{\uparrow}&:=V_{4,n}(x)\mathcal{I}+W_ {4,n}(x)\dfrac{d}{dx}\quad\text{(Raising Laguerre-Sobolev differential operator)}.\end{split}\]
**Remark 3.2**.: _Assuming in (7) that \(\lambda_{j,k}\equiv 0\) for all pairs \((j,k)\), it is not difficult to verify that \(\mathcal{L}_{n}^{\downarrow}\) and \(\mathcal{L}_{n}^{\uparrow}\) simplify to the expressions given in (3)._
Now, we can rewrite the ladder equations (25) and (26) as
\[\mathcal{L}_{n}^{\downarrow}\left[S_{n}(x)\right]= \left(F_{4,n}(x)\mathcal{I}+G_{4,n}(x)\frac{d}{dx}\right)S_{n}(x)=S _{n-1}(x), \tag{28}\] \[\mathcal{L}_{n}^{\uparrow}\left[S_{n-1}(x)\right]= \left(V_{4,n}(x)\mathcal{I}+W_{4,n}(x)\frac{d}{dx}\right)S_{n-1}(x )=S_{n}(x). \tag{29}\]
## 4 Differential equation
In this section, we state several consequences of the equations (28) and (29), which generalize known results for classical Laguerre polynomials to the Laguerre-Sobolev case. First, we are going to obtain a second-order differential equation with polynomial coefficients for \(S_{n}\). The procedure is well known and consists of applying the raising operator \(\mathcal{L}_{n}^{\uparrow}\) to both sides of the formula \(\mathcal{L}_{n}^{\downarrow}\left[S_{n}\right]=S_{n-1}\). Thus, we have
\[0= \mathcal{L}_{n}^{\uparrow}\left[\mathcal{L}_{n}^{\downarrow} \left[S_{n}(x)\right]\right]-S_{n}(x)\] \[= G_{4,n}(x)W_{4,n}(x)S_{n}^{\prime\prime}(x)\] \[+\left(F_{4,n}(x)W_{4,n}(x)+G_{4,n}(x)V_{4,n}(x)+W_{4,n}(x)G_{4,n }^{\prime}(x)\right)S_{n}^{\prime}(x)\] \[+\left(F_{4,n}(x)V_{4,n}(x)+W_{4,n}(x)F_{4,n}^{\prime}(x)-1 \right)S_{n}(x)\] \[= \frac{q_{0,n}^{2}(x)}{q_{1,n}(x)q_{4,n}(x)}\;S_{n}^{\prime\prime} (x)\] \[+\frac{q_{0,n}(x)\left(q_{1,n}(x)q_{2,n}(x)+q_{1,n}(x)q_{3,n}(x)+ q_{0,n}^{\prime}(x)q_{1,n}(x)-q_{0,n}(x)q_{1,n}^{\prime}(x)\right)}{q_{4,n}(x)q_{1, n}^{2}(x)}\;S_{n}^{\prime}(x)\] \[+\left(\frac{q_{1,n}(x)q_{2,n}(x)q_{3,n}(x)+q_{0,n}(x)\left(q_{2, n}^{\prime}(x)q_{1,n}(x)-q_{2,n}(x)q_{1,n}^{\prime}(x)\right)}{q_{4,n}(x)q_{1, n}^{2}(x)}-1\right)S_{n}(x);\]
from where we conclude the following result.
**Theorem 2**.: _The \(n\)th monic orthogonal polynomial with respect to the inner product (7) is a polynomial solution of the second-order linear differential equation with polynomial coefficients_
\[\mathcal{P}_{2,n}(x)S_{n}^{\prime\prime}(x)+\mathcal{P}_{1,n}(x)S_{n}^{\prime} (x)+\mathcal{P}_{0,n}(x)S_{n}=0, \tag{30}\]
_where_
\[\mathcal{P}_{2,n}(x)= q_{1,n}(x)q_{0,n}^{2}(x), \tag{31}\] \[\mathcal{P}_{1,n}(x)= q_{0,n}(x)\left(q_{1,n}(x)q_{2,n}(x)+q_{1,n}(x)q_{3,n}(x)+q_{0,n}^{ \prime}(x)q_{1,n}(x)-q_{0,n}(x)q_{1,n}^{\prime}(x)\right),\] \[\mathcal{P}_{0,n}(x)= q_{1,n}(x)q_{2,n}(x)q_{3,n}(x)+q_{0,n}(x)\left(q_{2,n}^{ \prime}(x)q_{1,n}(x)-q_{2,n}(x)q_{1,n}^{\prime}(x)\right)\] \[-q_{4,n}(x)q_{1,n}^{2}(x),\]
_and_
\[\mathrm{DgCo}(\mathcal{P}_{2,n}) =\left(6d+2,(\gamma_{n}+\sigma_{n})\left(1+\frac{\sigma_{n-1}}{ \gamma_{n-1}}\right)^{2}\right).\] \[\mathrm{DgCo}(\mathcal{P}_{1,n}) =\left(6d+2,-(\gamma_{n}+\sigma_{n})\left(1+\frac{\sigma_{n-1}}{ \gamma_{n-1}}\right)^{2}\right).\] \[\mathrm{DgCo}(\mathcal{P}_{0,n}) =\left(6d+1,n(\gamma_{n}+\sigma_{n})\left(1+\frac{\sigma_{n-1}}{ \gamma_{n-1}}\right)^{2}\right).\]
**Remark 4.1** (The classical Laguerre differential equation).: _Note that here, \(F_{1,n}(x)\equiv 1\), \(G_{1,n}(x)=0\), and \(\rho(x)\equiv 1\). For the rest of the expressions involved in the coefficients of the differential equation (30), we have_
\[\rho(x) \equiv 1,\ F_{1,n}(x)\equiv F_{2,n}(x)\equiv W_{2,n}=1,\ G_{1,n}(x) \equiv G_{2,n}(x)\equiv V_{2,n}\equiv 0,\] \[\Delta_{n}(x) \equiv 1,\ F_{3,n}(x)=n,G_{3,n}(x)=n(n+\alpha),\ V_{3,n}(x)=-1\ \text{ and }\] \[W_{3,n}(x) =x-n-\alpha.\]
_Thus,_
\[q_{0,n}(x) =x,\ q_{1,n}(x)=n(n+\alpha),\ \ \ q_{2,n}(x)=-n,\] \[q_{3,n}(x) =n+\alpha-x\text{ and } \tag{32}\] \[q_{4,n}(x) =-1.\]
_Substituting (32) in (31), the reader can verify that the differential equation (30) becomes (4), that is_
\[\mathcal{P}_{2,n}(x)=\gamma_{n}x^{2},\ \mathcal{P}_{1,n}(x)=\gamma_{n}x( \alpha+1-x)\ \text{ and }\ \mathcal{P}_{0,n}(x)=n\gamma_{n}x.\]
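This reduction can also be confirmed symbolically. The following small script (our own check in Python/sympy, independent of the computations reported below) verifies that, after dividing (30) by the common factor \(\gamma_{n}x\), the monic Laguerre polynomials indeed satisfy \(xS_{n}^{\prime\prime}+(\alpha+1-x)S_{n}^{\prime}+nS_{n}=0\):

```python
# Our own sympy check of Remark 4.1 (not part of the original computations).
import sympy as sp

x, alpha = sp.symbols('x alpha')

def monic_laguerre(n):
    # L_n^alpha has leading coefficient (-1)^n / n!, so this rescaling is monic.
    return sp.expand((-1)**n * sp.factorial(n) * sp.assoc_laguerre(n, alpha, x))

for n in range(1, 6):
    S = monic_laguerre(n)
    residual = x*sp.diff(S, x, 2) + (alpha + 1 - x)*sp.diff(S, x) + n*S
    assert sp.expand(residual) == 0
print("classical reduction of (30) verified for n = 1,...,5")
```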
Secondly, we can obtain the \(n\)th degree polynomial of the sequence \(\{S_{n}\}_{n\geqslant 0}\) as the repeated action (\(n\) times) of the raising differential operator on the first Sobolev-type polynomial of the sequence (i.e., the polynomial of degree zero).
**Theorem 3**.: _The \(n\)th Laguerre-Sobolev polynomial of \(\{S_{n}\}_{n\geqslant 0}\) is given by_
\[S_{n}(x)=\left(\mathcal{L}_{n}^{\uparrow}\mathcal{L}_{n-1}^{ \uparrow}\mathcal{L}_{n-2}^{\uparrow}\cdots\mathcal{L}_{1}^{\uparrow}\right)S_ {0}(x),\quad\text{ where }S_{0}(x)=1.\]
Proof.: Using (29), the theorem follows for \(n=1\). Next, the expression for \(S_{n}\) is a straightforward consequence of the definition of the raising operator.
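As an illustration of the theorem in the classical limit of Remark 3.2 (a step we add for concreteness), the values (32) give \(V_{4,1}(x)=q_{3,1}(x)/q_{4,1}(x)=x-\alpha-1\), and since \(S_{0}^{\prime}\equiv 0\),
\[S_{1}(x)=\mathcal{L}_{1}^{\uparrow}\left[S_{0}(x)\right]=V_{4,1}(x)=x-(\alpha+1),\]
which is precisely the monic Laguerre polynomial of degree one.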
To conclude this section, we prove an interesting three-term recurrence relation with polynomial coefficients for the monic Laguerre-Sobolev polynomials.
**Theorem 4**.: _Under the assumptions of Theorem 2, we have the recurrence relation_
\[q_{4,n+1}(x)q_{0,n}(x)S_{n+1}(x)= \left[q_{3,n+1}(x)q_{0,n}(x)-q_{2,n}(x)q_{0,n+1}(x)\right]S_{n}(x) \tag{33}\] \[+q_{1,n}(x)q_{0,n+1}(x)S_{n-1}(x),\]
_where the explicit formulas of the coefficients are given in Theorem 1._
Proof.: From (25) and (26) for \(n+1\), we have
\[F_{4,n}(x)S_{n}(x)+G_{4,n}(x)S^{\prime}_{n}(x) =S_{n-1}(x),\] \[V_{4,n+1}(x)S_{n}(x)+W_{4,n+1}(x)S^{\prime}_{n}(x) =S_{n+1}(x).\]
Substituting the coefficients by their explicit expressions given in Theorem 1, we obtain
\[q_{2,n}(x)S_{n}(x)+q_{0,n}(x)S^{\prime}_{n}(x) =q_{1,n}(x)S_{n-1}(x);\] \[q_{3,n+1}(x)S_{n}(x)+q_{0,n+1}(x)S^{\prime}_{n}(x) =q_{4,n+1}(x)S_{n+1}(x).\]
Multiplying the equations by \(q_{0,n+1}(x)\) and \(q_{0,n}(x)\) respectively and subtracting the first equation from the second one to eliminate the derivative term, we get
\[\big{(}q_{3,n+1}(x)q_{0,n}(x)-q_{2,n}(x)q_{0,n+1}(x)\big{)}S_{n}(x)=q_{4,n+1}( x)q_{0,n}(x)S_{n+1}(x)-q_{1,n}(x)q_{0,n+1}(x)S_{n-1}(x),\]
which is the required formula.
**Remark 4.2** (The classical Laguerre three-term recurrence relation).: _Under the assumptions of Remark 3.2, substituting (32) in (33), the reader can verify that the three-term recurrence relation (33) becomes (1), that is_
\[\frac{q_{3,n+1}(x)q_{0,n}(x)-q_{2,n}(x)q_{0,n+1}(x)}{q_{4,n+1}(x) q_{0,n}(x)} =x-2n-\alpha-1\quad\text{and}\] \[\frac{q_{1,n}(x)q_{0,n+1}(x)}{q_{4,n+1}(x)q_{0,n}(x)} =-n(n+\alpha).\]
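As with the differential equation, this reduction admits a quick symbolic confirmation (our own Python/sympy check, complementary to the computations reported in the next section):

```python
# Our own sympy check of Remark 4.2: with (32), the recurrence (33) reduces to
# S_{n+1}(x) = (x - 2n - alpha - 1) S_n(x) - n (n + alpha) S_{n-1}(x).
import sympy as sp

x, alpha = sp.symbols('x alpha')

def monic_laguerre(n):
    return sp.expand((-1)**n * sp.factorial(n) * sp.assoc_laguerre(n, alpha, x))

for n in range(1, 6):
    residual = monic_laguerre(n + 1) \
        - (x - 2*n - alpha - 1)*monic_laguerre(n) \
        + n*(n + alpha)*monic_laguerre(n - 1)
    assert sp.expand(residual) == 0
print("classical reduction of (33) verified for n = 1,...,5")
```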
## 5 Electrostatic model
Let \(\rho(x)\) be as in (16) and \(d\mu_{\rho}(x)=\rho(x)d\mu^{\alpha}(x)\). Note that \(\rho\) is a polynomial of degree \(d=\sum_{j=1}^{N}(d_{j}+1)\) which is positive on \((0,+\infty)\). If \(n>d\), from (8), \(\{S_{n}\}_{n\geqslant 0}\) satisfies the following quasi-orthogonality relations with respect to \(\mu_{\rho}\)
\[\langle S_{n},f\rangle_{\mu_{\rho}}=\langle S_{n},\rho f\rangle_{\alpha}= \int_{0}^{+\infty}S_{n}(x)f(x)\rho(x)d\mu^{\alpha}(x)=\langle S_{n},\rho f \rangle_{\sf S}=0,\]
for \(f\in\mathds{P}_{n-d-1}\), where \(\mathds{P}_{n}\) is the linear space of polynomials with real coefficients and degree less than or equal to \(n\in\mathds{Z}_{+}\). Hence, _the polynomial \(S_{n}\) is quasi-orthogonal of order \(d\) with respect to \(\mu_{\rho}\)_ and by this argument, we get that \(S_{n}\) has at least \((n-d)\) changes of sign in \((0,+\infty)\).
From the results obtained in [11, (1.10)], [2], and [3], it seems that the number of zeros located in the interior of the support of the measure is closely related to \(d^{*}=\#(I_{+})\). Recall that \(d^{*}\) is the number of terms in the discrete part of \(\langle\cdot,\cdot\rangle_{\sf S}\) (i.e., \(\lambda_{j,k}>0\)).
Let us continue by giving the definition of sequentially-ordered Sobolev inner product, which was first introduced in [2, Def. 1] and then refined in [3, Def. 1].
**Definition 5.1** (Sequentially-ordered Sobolev inner product).: _Consider a discrete Sobolev inner product in the general form (7) and assume that \(d_{1}\leq d_{2}\leq\cdots\leq d_{N}\) without loss of generality. We say that a discrete Sobolev inner product is sequentially ordered if the conditions_
\[\Delta_{k}\cap\mathbf{Co}\!\left(\cup_{i=0}^{k-1}\Delta_{i}\right)^{\mathrm{o}} =\emptyset,\qquad k=1,2,\ldots,d_{N},\]
_hold, where_
\[\Delta_{k}=\begin{cases}\mathbf{Co}\!\left(\mathrm{supp}\,\mu\cup\{c_{j}: \lambda_{j,0}>0\}\right),&\text{if }\ k=0;\\ \\ \mathbf{Co}\!\left(\{c_{j}:\lambda_{j,k}>0\}\right),&\text{if }\ 1\leq k\leq d_{N}. \end{cases}\]
Note that \(\Delta_{k}\) is the convex hull of the support of the measure associated with the \(k\)-th order derivative in the Sobolev inner product (7).
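For instance (a worked check of the definition, anticipating the data of Example 4 below), take the Laguerre measure on \([0,+\infty)\) with mass points \(c_{1}=-1\), \(d_{1}=1\) and \(c_{2}=-2\), \(d_{2}=2\), and no masses on the function values themselves. Then \(\Delta_{0}=[0,+\infty)\), \(\Delta_{1}=\{-1\}\), \(\Delta_{2}=\{-2\}\), and
\[\Delta_{1}\cap\mathbf{Co}\!\left(\Delta_{0}\right)^{\mathrm{o}}=\{-1\}\cap(0,+\infty)=\emptyset,\qquad\Delta_{2}\cap\mathbf{Co}\!\left(\Delta_{0}\cup\Delta_{1}\right)^{\mathrm{o}}=\{-2\}\cap(-1,+\infty)=\emptyset,\]
so this inner product is sequentially ordered.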
Hereinafter, we will focus on sequentially ordered discrete Sobolev inner products that have only one derivative order at each mass point. This means that (7) takes the form
\[\langle f,g\rangle_{\mathfrak{s}}=\!\!\int_{0}^{+\infty}\!f(x)g(x)x^{\alpha}e ^{-x}dx+\sum_{j=1}^{N}\lambda_{j}f^{(d_{j})}(c_{j})g^{(d_{j})}(c_{j}), \tag{34}\]
where \(\lambda_{j}:=\lambda_{j,d_{j}}>0\), and \(c_{j}<0\), for \(j=1,2,\ldots,N\).
The following lemma justifies this assumption.
**Lemma 5.1** ([3, Cor 2-1]-[4, Prop. 4]).: _If (34) is a sequentially-ordered discrete Sobolev inner product, then the following statements hold:_
1. _Every point_ \(c_{j}\) _attracts exactly one zero of_ \(S_{n}\) _for sufficiently large_ \(n\)_, while the remaining_ \(n-N\) _zeros are contained in_ \((0,+\infty)\)_. This means: for every_ \(r>0\)_, there exists a natural value_ \(\mathcal{N}\) _such that if_ \(n\geq\mathcal{N}\)_, then the_ \(n\) _zeros_ \(\left\{\xi_{i}\right\}_{i=1}^{n}\) _of_ \(S_{n}\) _satisfy_ \(\xi_{j}\in B\left(c_{j},r\right)\) _for_ \(j=1,\ldots,N\) _and_ \(\xi_{i}\in(0,+\infty)\) _for_ \(i=N+1,N+2,\ldots,n\)_._
2. _The zeros of_ \(S_{n}\) _are real and simple for large-enough values of_ \(n\)_._
In the rest of this section we will assume that the zeros of \(S_{n}\) are simple. Observe that sequentially-ordered Sobolev inner products provide us with a wide class of Sobolev inner products such that the zeros of the corresponding orthogonal polynomials are simple. Therefore, for all \(n\) sufficiently large, we have
\[S^{\prime}_{n}(x)=\sum_{i=1}^{n}\prod_{\begin{subarray}{c}j=1,\\ j\neq i\end{subarray}}^{n}(x-x_{n,j}),\qquad S^{\prime\prime}_{n}(x)=\sum_{ \begin{subarray}{c}i,j=1,\\ j\neq i\end{subarray}}^{n}\prod_{\begin{subarray}{c}l=1,\\ l\neq i,j\end{subarray}}^{n}(x-x_{n,l}),\]
\[S^{\prime}_{n}(x_{n,k})=\prod_{\begin{subarray}{c}j=1,\\ j\neq k\end{subarray}}^{n}(x_{n,k}-x_{n,j}),\qquad S^{\prime\prime}_{n}(x_{n,k} )=2\sum_{\begin{subarray}{c}i=1,\\ i\neq k\end{subarray}}^{n}\prod_{\begin{subarray}{c}l=1,\\ l\neq i,k\end{subarray}}^{n}(x_{n,k}-x_{n,l}).\]
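In particular, at a simple zero \(x_{n,k}\) these product formulas give the identity \(S_{n}^{\prime\prime}(x_{n,k})/S_{n}^{\prime}(x_{n,k})=2\sum_{i\neq k}(x_{n,k}-x_{n,i})^{-1}\), which is used in (35) below. A quick numerical sanity check of this identity (our own, with arbitrarily chosen simple zeros):

```python
# Our own numerical check: S''(x_k)/S'(x_k) = 2 * sum_{i != k} 1/(x_k - x_i).
import numpy as np

rng = np.random.default_rng(0)
zeros = np.sort(rng.uniform(0.5, 10.0, size=8))   # arbitrary simple zeros
coeffs = np.poly(zeros)                           # monic polynomial with these zeros
d1, d2 = np.polyder(coeffs, 1), np.polyder(coeffs, 2)

for k, xk in enumerate(zeros):
    lhs = np.polyval(d2, xk) / np.polyval(d1, xk)
    rhs = 2.0 * np.sum(1.0 / (xk - np.delete(zeros, k)))
    assert np.isclose(lhs, rhs)
print("identity verified at every zero")
```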
Now, we evaluate the polynomials \(\mathcal{P}_{2,n}(x),\ \mathcal{P}_{1,n}(x)\) and \(\mathcal{P}_{0,n}(x)\) in (30) at \(x_{n,k}\), where \(\{x_{n,k}\}_{k=1}^{n}\) are the zeros of \(S_{n}(x)\) arranged in an increasing order. Then, for \(k=1,2,\ldots,n\); we get
\[0= \mathcal{P}_{2,n}(x_{n,k})S_{n}^{\,\prime\prime}(x_{n,k})+\mathcal{ P}_{1,n}(x_{n,k})S_{n}^{\,\prime}(x_{n,k})+\mathcal{P}_{0,n}(x_{n,k})S_{n}(x_{n,k})\] \[= \mathcal{P}_{2,n}(x_{n,k})S_{n}^{\,\prime\prime}(x_{n,k})+\mathcal{ P}_{1,n}(x_{n,k})S_{n}^{\,\prime}(x_{n,k}).\] \[0= \frac{S_{n}^{\,\prime\prime}(x_{n,k})}{S_{n}^{\,\prime}(x_{n,k}) }+\frac{\mathcal{P}_{1,n}(x_{n,k})}{\mathcal{P}_{2,n}(x_{n,k})}=2\sum_{i=1 \atop i\neq k}^{n}\frac{1}{x_{n,k}-x_{n,i}}+\frac{\mathcal{P}_{1,n}(x_{n,k})}{ \mathcal{P}_{2,n}(x_{n,k})}. \tag{35}\]
From Theorem 1, Theorem 2, and Lemma 3.4
\[\frac{\mathcal{P}_{1,n}(x)}{\mathcal{P}_{2,n}(x)} =\frac{q_{1,n}(x)q_{2,n}(x)+q_{1,n}(x)q_{3,n}(x)+q_{0,n}^{\, \prime}(x)q_{1,n}(x)-q_{0,n}(x)q_{1,n}^{\prime}(x)}{q_{1,n}(x)q_{0,n}(x)}\] \[=\frac{q_{2,n}(x)+q_{3,n}(x)}{q_{0,n}(x)}+\frac{q_{0,n}^{\, \prime}(x)}{q_{0,n}(x)}-\frac{q_{1,n}^{\,\prime}(x)}{q_{1,n}(x)}\] \[=2\frac{\rho^{\prime}(x)}{\rho(x)}+\frac{\Delta_{2,n}(x)+\Delta_{ 3,n}(x)}{x\rho(x)\delta_{n}(x)}+\frac{\Delta_{n}^{\prime}(x)}{\Delta_{n}(x)}+ \frac{1}{x}-\frac{\Delta_{1,n}^{\prime}(x)}{\Delta_{1,n}(x)}\] \[=3\frac{\rho^{\prime}(x)}{\rho(x)}+\frac{\varphi_{2,n}(x)+\varphi _{3,n}(x)}{x\rho_{N}(x)\delta_{n}(x)}+\frac{\delta_{n}^{\,\prime}(x)}{\delta_{ n}(x)}+\frac{1}{x}-\frac{\varphi_{1,n}^{\,\prime}(x)}{\varphi_{1,n}(x)}-\frac{ \rho_{d-N}^{\prime}(x)}{\rho_{d-N}(x)}. \tag{36}\]
Let us write, \(\frac{\rho^{\prime}(x)}{\rho(x)}=\sum_{j=1}^{N}\frac{d_{j}+1}{x-c_{j}},\qquad \frac{\rho_{d-N}^{\prime}(x)}{\rho_{d-N}(x)}=\sum_{j=1}^{N}\frac{d_{j}}{x-c_{j}}\).
From (27) and Remark 3.1, \(\varphi_{2,n}(x)+\varphi_{3,n}(x)\) and \(x\rho_{N}(x)\delta_{n}(x)\) are polynomials of degree \(d+N+1\) and leading coefficient \(\mp(1+\sigma_{n-1}/\gamma_{n-1})\) respectively. Then, their quotient can be rewritten as
\[\frac{\varphi_{2,n}(x)+\varphi_{3,n}(x)}{x\rho_{N}(x)\delta_{n}(x)}=-1+\frac{ \psi_{1}(x)}{\psi_{2}(x)};\]
where \(\psi_{2}(x)=x\rho_{N}(x)\delta_{n}(x)\) and \(\psi_{1}\) is a polynomial of degree at most \(d+N\).
Based on the results of our numerical experiments, in the remainder of the section, we will assume certain restrictions with respect to some functions and parameters involved in (36). In that sense, we suppose that
1. The zeros of \(\delta_{n}\) are real simple and different from the zeros of \(S_{n}\), the mass points and zero, i.e., \(u_{i}\in\mathbb{R}\setminus\left(\{x_{n,k}\}_{k=1}^{n}\cup\{c_{j}\}_{j=1}^{N} \cup\{0\}\right)\) for \(i=1,2,\ldots,d\) and \(u_{i}\neq u_{j}\) unless \(i=j\). Therefore, \[\delta_{n}(x)=\left(1+\frac{\sigma_{n-1}}{\gamma_{n-1}}\right) \prod_{i=1}^{d}(x-u_{i}),\] \[\frac{\delta_{n}^{\,\prime}(x)}{\delta_{n}(x)}=\sum_{i=1}^{d} \frac{1}{x-u_{i}}.\]
Thus,
\[\frac{\psi_{1}(x)}{\psi_{2}(x)}=\frac{r(0)}{x}+\sum_{j=1}^{N}\frac{r(c_{j})}{x-c_{ j}}+\sum_{i=1}^{d}\frac{r(u_{i})}{x-u_{i}},\quad\text{where }r(x)=\frac{\psi_{1}(x)}{\psi_{2}^{\prime}(x)}.\]
2. Let \(\varphi_{1,n}(x)=(\gamma_{n}+\sigma_{n})\prod_{j=1}^{N_{1}}(x-e_{j})^{\ell_{4,j}}\), where \(e_{j}\in\mathds{C}\setminus\mathbf{Co}([0,+\infty)\cup\{c_{1},\ldots,c_{N}\})\) for all \(j=1,\ldots,N_{1}\), and \(\sum_{j=1}^{N_{1}}\ell_{4,j}=d+N\). Therefore, \(\frac{\varphi_{1,n}^{\prime}(x)}{\varphi_{1,n}(x)}=\sum_{j=1}^{N_{1}}\frac{\ell_{4,j}}{x-e_{j}}\).
3. The function \(r\) satisfies \(r(0),r(u_{i})>-1\) for \(i=1,2,\ldots,d\) and \(r(c_{j})>-2d_{j}-3\) for \(j=1,2,\ldots,N\). Substituting the previous decompositions into (36), we have \[\frac{\mathcal{P}_{1,n}(x)}{\mathcal{P}_{2,n}(x)}=-1+\frac{\ell_{1}}{x}+\sum_{j=1}^{N}\frac{\ell_{2,j}}{x-c_{j}}+\sum_{j=1}^{d}\frac{\ell_{3,j}}{x-u_{j}}-\sum_{j=1}^{N_{1}}\frac{\ell_{4,j}}{x-e_{j}},\] where \(\ell_{1}=1+r(0)\), \(\ell_{2,j}=2d_{j}+r(c_{j})+3\) and \(\ell_{3,j}=r(u_{j})+1\) are positive values. From (35), for \(k=1,\ldots,n\), we get \[0=\sum_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{n}\frac{1}{x_{n,k}-x_{n,i}}-\frac{1}{2}+\frac{\ell_{1}}{2x_{n,k}}+\frac{1}{2}\sum_{j=1}^{N}\frac{\ell_{2,j}}{x_{n,k}-c_{j}}+\frac{1}{2}\sum_{j=1}^{d}\frac{\ell_{3,j}}{x_{n,k}-u_{j}}+\frac{1}{2}\sum_{j=1}^{N_{1}}\frac{\ell_{4,j}}{e_{j}-x_{n,k}}. \tag{37}\] Let \(\overline{\omega}=(\omega_{1},\omega_{2},\cdots,\omega_{n})\), \(\overline{x}_{n}=(x_{n,1},x_{n,2},\cdots,x_{n,n})\) and denote \[\mathbf{E}(\overline{\omega}):=\sum_{1\leq k<j\leq n}\log\frac{1}{\omega_{j}-\omega_{k}}+\mathbf{F}(\overline{\omega})+\mathbf{G}(\overline{\omega}), \tag{38}\] \[\mathbf{F}(\overline{\omega}):=\frac{1}{2}\sum_{k=1}^{n}\left(\omega_{k}+\ell_{1}\log\frac{1}{|\omega_{k}|}+\sum_{j=1}^{N}\ell_{2,j}\log\frac{1}{|\omega_{k}-c_{j}|}\right),\] \[\mathbf{G}(\overline{\omega}):=\frac{1}{2}\sum_{k=1}^{n}\left(\sum_{j=1}^{d}\ell_{3,j}\log\frac{1}{|\omega_{k}-u_{j}|}+\sum_{j=1}^{N_{1}}\ell_{4,j}\log|\omega_{k}-e_{j}|\right),\] where the explicit forms of \(\mathbf{F}\) and \(\mathbf{G}\) are fixed by requiring that \(\nabla\mathbf{E}(\overline{x}_{n})=0\) coincides with the system (37).
Let us introduce the following electrostatic interpretation:
_Consider the system of \(n\) movable positive unit charges at \(n\) distinct points of the real line, \(\{\omega_{1},\omega_{2},\cdots,\omega_{n}\}\), where their interaction follows the logarithmic potential law (that is, the force is inversely proportional to the relative distance) in the presence of the total external potential \(\mathbf{H}_{n}(\overline{\omega})=\mathbf{F}(\overline{\omega})+\mathbf{G}( \overline{\omega})\). Then, \(\mathbf{E}(\overline{\omega})\) is the total energy of this system._
Following the notations introduced in [9, Sec. 2], the Laguerre-Sobolev inner product creates two external fields. One is a long-range field whose potential is \(\mathbf{F}(\overline{\omega})\). The other is a short-range field whose potential is \(\mathbf{G}(\overline{\omega})\). So, the total external potential \(\mathbf{H}_{n}(\overline{\omega})\) is the sum of the short and long-range potentials, which is dependent on \(n\) (varying external potential).
Therefore, for each \(k=1,\ldots,n\); we have \(\dfrac{\partial\mathbf{E}}{\partial\omega_{k}}(\overline{x}_{n})=0\), this is, the zeros of \(S_{n}\) are the zeros of the gradient of the total potential of energy \(\mathbf{E}(\overline{\omega})\) (\(\nabla\mathbf{E}(\overline{x}_{n})=0\)).
**Theorem 5**.: _The zeros of \(S_{n}(x)\) are a local minimum of \(\mathbf{E}(\overline{\omega})\), if for all \(k=1,\ldots,n\);_
1. \(\dfrac{\partial\mathbf{E}}{\partial\omega_{k}}(\overline{x}_{n})=0\)_._
2. \(\dfrac{\partial^{2}\mathbf{H}_{n}}{\partial\omega_{k}^{2}}(\overline{x}_{n})=\dfrac{\partial^{2}\mathbf{F}}{\partial\omega_{k}^{2}}(\overline{x}_{n})+\dfrac{\partial^{2}\mathbf{G}}{\partial\omega_{k}^{2}}(\overline{x}_{n})=-\dfrac{1}{2}\left(\dfrac{\mathcal{P}_{1,n}}{\mathcal{P}_{2,n}}\right)^{\prime}(x_{n,k})>0\)_._
Proof.: The Hessian matrix of \(\mathbf{E}\) at \(\overline{x}_{n}\) is given by
\[\nabla_{\overline{\omega}\,\omega}^{2}\mathbf{E}(\overline{x}_{n})=\begin{cases} \dfrac{\partial^{2}\mathbf{E}}{\partial\omega_{k}\partial\omega_{j}}( \overline{x}_{n})=-\dfrac{1}{(x_{n,k}-x_{n,j})^{2}},&\text{if \ $k\neq j$,}\\ \dfrac{\partial^{2}\mathbf{E}}{\partial\omega_{k}^{2}}(\overline{x}_{n})= \sum_{{i=1}\atop{i\neq k}}^{n}\dfrac{1}{(x_{n,k}-x_{n,i})^{2}}+\dfrac{ \partial^{2}\mathbf{H}_{n}}{\partial\omega_{k}^{2}}(\overline{x}_{n}),&\text{ if \ $k=j$.}\end{cases} \tag{39}\]
Since (39) is a symmetric real matrix, its eigenvalues are real. Therefore, using Gershgorin's Theorem [7, Th. 6.1.1], the eigenvalues \(\lambda\) of the Hessian at \(\overline{x}_{n}\) satisfy
\[\left|\lambda-\sum_{{i=1}\atop{i\neq k}}^{n}\dfrac{1}{(x_{n,k}-x_{n,i})^{2}}-\dfrac{\partial^{2}\mathbf{H}_{n}}{\partial\omega_{k}^{2}}(\overline{x}_{n})\right|\leq\sum_{{j=1}\atop{j\neq k}}^{n}\left|\dfrac{\partial^{2}\mathbf{E}}{\partial\omega_{k}\partial\omega_{j}}(\overline{x}_{n})\right|=\sum_{{i=1}\atop{i\neq k}}^{n}\dfrac{1}{(x_{n,k}-x_{n,i})^{2}}.\]
for some \(k=1,2,\ldots,n\). Then, we have
\[\lambda\geq\dfrac{\partial^{2}\mathbf{H}_{n}}{\partial\omega_{k}^{2}}( \overline{x}_{n})>0.\]
The computations of the following examples have been performed by using the symbolic computer algebra system _Maxima_[15].
In all cases, we fix \(n=12\) and consider sequentially-ordered Sobolev inner products (see Definition 5.1 and Lemma 5.1). From (37), it is obvious that \(\nabla\mathbf{E}(\overline{x}_{12})=0\), where \(\overline{x}_{12}=(x_{12,1},x_{12,2},\cdots,x_{12,12})\) and \(S_{12}(x_{12,k})=0\) for \(k=1,\ 2,\ldots,\,12\). Under the above condition, \(\overline{x}_{12}\) is a local minimum (maximum) of \(\mathbf{E}\) if the corresponding Hessian matrix at \(\overline{x}_{12}\) is positive (negative) definite; in any other case \(\overline{x}_{12}\) is said to be a saddle point. We recall that a square matrix is positive (negative) definite if all its eigenvalues are positive (negative).
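The classification carried out in step 5 of the examples below is easy to reproduce numerically. The following sketch (our Python illustration, not the original _Maxima_ session; the external-field curvature `d2H` is a model-dependent placeholder that must be supplied from \(\mathbf{H}_{n}\)) builds the Hessian (39) at a given configuration and inspects its eigenvalues:

```python
# A sketch (ours) of the classification step: build the Hessian (39) and
# classify the stationary point by the signs of its eigenvalues. The diagonal
# curvature d2H(k) = d^2 H_n / d omega_k^2 at x_n is a placeholder here.
import numpy as np

def hessian_E(x, d2H):
    n = len(x)
    H = np.empty((n, n))
    for k in range(n):
        for j in range(n):
            if j != k:
                H[k, j] = -1.0 / (x[k] - x[j])**2
        H[k, k] = sum(1.0 / (x[k] - x[i])**2 for i in range(n) if i != k) + d2H(k)
    return H

def classify(x, d2H):
    eig = np.linalg.eigvalsh(hessian_E(x, d2H))
    return ("local minimum" if np.all(eig > 0) else "saddle point"), eig

# usage with the zeros of Example 1 and a constant positive placeholder
# curvature, for which Gershgorin's theorem guarantees a local minimum:
x12 = np.array([3.0537, 5.16053, 7.53124, 10.2434, 13.3451, 16.8869,
                20.9337, 25.5751, 30.9455, 37.2657, 44.9569, 55.0972])
print(classify(x12, lambda k: 0.01)[0])
```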
**Example 1** (Case in which the conditions of Theorem 5 are satisfied).
1. _Laguerre-Sobolev inner product_ \(\langle f,g\rangle_{\rm s}=\int_{0}^{+\infty}f(x)g(x)x^{11}e^{-x}dx+f^{\prime}(-2 )g^{\prime}(-2)\)_._
2. _Zeros of_ \(S_{12}(x)\)_._ \[\overline{x}_{12}= \left(3.0537,\ 5.16053,\ 7.53124,\ 10.2434,\ 13.3451,\ 16.8869,\right.\] \[\left.20.9337,\ 25.5751,\ 30.9455,\ 37.2657,\ 44.9569,\ 55.0972 \right).\]
3. _Total potential of energy_ \(\mathbf{E}(\overline{\omega})=\sum_{1\leq k<j\leq 12}\log\frac{1}{| \omega_{j}-\omega_{k}|}+\mathbf{F}(\overline{\omega})+\mathbf{G}(\overline{ \omega})\)_, where_ \[\mathbf{F}(\overline{\omega})= \frac{1}{2}\sum_{k=1}^{12}\left(|\omega_{k}|+12\log\frac{1}{| \omega_{k}|}+\log\frac{1}{|\omega_{k}+2|^{3}}\right),\] \[\mathbf{G}(\overline{\omega})= \frac{1}{2}\sum_{k=1}^{12}\log|(\omega_{k}+0.528573)(\omega_{k}+ 1.7501)(\omega_{k}-1.40334)|\.\]
4. _From_ (37)_,_ \(\frac{\partial\mathbf{E}}{\partial\omega_{j}}(\overline{x}_{12})=0\)_, for_ \(j=1,\ldots,12\)_._
5. _Computing the corresponding Hessian matrix at_ \(\overline{x}_{12}\)_, we have that the approximate eigenvalues are_ \[\left\{0.0127,\ 0.0304,\ 0.0517,\ 0.0778,\ 0.1102,\ 0.1509,\right.\] \[\left.0.2033,\ 0.2722,\ 0.3653,\ 0.495,\ 0.6825,\ 0.9661\right\}.\]
_Thus, Theorem 5 holds for this example, and we have the required local electrostatic equilibrium distribution._
**Example 2** (Case in which the conditions of Theorem 5 are satisfied).:
1. _Laguerre-Sobolev inner product_ \(\langle f,g\rangle_{\rm s}=\int_{0}^{+\infty}f(x)g(x)x^{14}e^{-x}dx+f^{\prime}( -2)g^{\prime}(-2)\)_._
2. _Zeros of_ \(S_{12}(x)\)_._ \[\overline{x}_{12}= \left(4.7832,\ 7.23584,\ 9.92786,\ 12.9448,\ 16.3404,\ 20.1693,\right.\] \[\left.24.4992,\ 29.4232,\ 35.0794,\ 41.6941,\ 49.6983,\ 60.1956 \right).\]
3. _Total potential of energy_ \(\mathbf{E}(\overline{\omega})=\sum_{1\leq k<j\leq 12}\log\frac{1}{| \omega_{j}-\omega_{k}|}+\mathbf{F}(\overline{\omega})+\mathbf{G}(\overline{ \omega})\)_, where_ \[\mathbf{F}(\overline{\omega})= \frac{1}{2}\sum_{k=1}^{12}\left(|\omega_{k}|+15\log\frac{1}{| \omega_{k}|}+\log\frac{1}{|\omega_{k}+2|^{3}}\right),\] \[\mathbf{G}(\overline{\omega})= \frac{1}{2}\sum_{k=1}^{12}\log|(\omega_{k}+1.87468)\tau(\omega_{ k})|\ \ \text{and}\ \ \tau(x)=4.25297+4.10532x+x^{2}>0.\]
4. _From (_37_),_ \(\dfrac{\partial\mathbf{E}}{\partial\omega_{j}}(\overline{x}_{12})=0,\) _for_ \(j=1,\ldots,12.\)__
5. _Computing the corresponding Hessian matrix at_ \(\overline{x}_{12}\)_, we have that the approximate eigenvalues are_ \[\begin{array}{c}\{0.0152,\ 0.0344,\ 0.0576,\ 0.0861,\ 0.1219,\ 0.1678,\\ 0.2279,\ 0.3094,\ 0.4241,\ 0.5942,\ 0.8665,\ 1.3566\}\,.\end{array}\]
_Thus, Theorem 5 holds for this example, and we have the required local electrostatic equilibrium distribution._
**Example 3** (Case in which the conditions of Theorem 5 are satisfied).:
1. _Laguerre-Sobolev inner product_ \(\langle f,g\rangle_{\mathsf{s}}=\int_{0}^{+\infty}f(x)g(x)x^{11}e^{-x}dx+f^{\prime\prime}(-2)g^{\prime\prime}(-2).\)__
2. _Zeros of_ \(S_{12}(x).\)__ \[\begin{array}{c}\overline{x}_{12}=(3.35093,\ 5.41033,\ 7.75809,\ 10.456,\ 13.5478,\ 1 7.0825,\\ 21.1239,\ 25.7612,\ 31.1283,\ 37.4459,\ 45.1347,\ 55.2729)\,.\end{array}\]
3. _Total potential of energy_ \(\mathbf{E}(\overline{\omega})=\sum_{1\leq k<j\leq 12}\log\dfrac{1}{| \omega_{j}-\omega_{k}|}+\mathbf{F}(\overline{\omega})+\mathbf{G}(\overline{ \omega})\)_, where_ \[\begin{array}{c}\mathbf{F}(\overline{\omega})=&\dfrac{1}{2}\sum_{k=1}^{12} \left(|\omega_{k}|+12\log\dfrac{1}{|\omega_{k}|}+\log\dfrac{1}{|\omega_{k}+2| ^{4}}\right),\\ \mathbf{G}(\overline{\omega})=&\dfrac{1}{2}\sum_{k=1}^{12}\log|( \omega_{k}+0.0989292)(\omega_{k}+1.64715)\tau(\omega_{k})|\\ &\text{and}\ \tau(x)=5.72898-1.63056x+x^{2}>0.\end{array}\]
4. _From (_37_),_ \(\dfrac{\partial\mathbf{E}}{\partial\omega_{j}}(\overline{x}_{12})=0,\) _for_ \(j=1,\ldots,12.\)__
5. _Computing the corresponding Hessian matrix at_ \(\overline{x}_{12}\)_, we have that the approximate eigenvalues are_ \[\begin{array}{c}\{0.0126,\ 0.0303,\ 0.0516,\ 0.0777,\ 0.1101,\ 0.151,\\ 0.2038,\ 0.2737,\ 0.3689,\ 0.5042,\ 0.7066,\ 1.0321\}\,.\end{array}\]
_Thus, Theorem 5 holds for this example, and we have the required local electrostatic equilibrium distribution._
**Example 4** (Case in which the conditions of Theorem 5 are satisfied).:
1. _Laguerre-Sobolev inner product_ \[\langle f,g\rangle_{\mathsf{s}}=\int_{0}^{+\infty}f(x)g(x)x^{14}e^{-x}dx+f^{ \prime}(-1)g^{\prime}(-1)+f^{\prime\prime}(-2)g^{\prime\prime}(-2).\]
2. _Zeros of_ \(S_{12}(x)\)_._ \[\overline{x}_{12}= \left(4.78339,\,7.23607,\,9.9281,\,12.9451,\,16.3407,\,20.1695,\right.\] \[\left.24.4995,\,29.4235,\,35.0797,\,41.6944,\,49.6986,\,60.196 \right).\]
3. _Total potential of energy_ \(\mathbf{E}(\overline{\omega})=\sum_{1\leq k<j\leq 12}\log\frac{1}{| \omega_{j}-\omega_{k}|}+\mathbf{F}(\overline{\omega})+\mathbf{G}(\overline{ \omega})\)_, where_ \[\mathbf{F}(\overline{\omega})= \frac{1}{2}\sum_{k=1}^{12}\left(\log\frac{1}{|\omega_{k}|^{15}}+ \log\frac{1}{|\omega_{k}+1|^{3}}+\log\frac{1}{|\omega_{k}+2|^{4}}\right),\] \[\mathbf{G}(\overline{\omega})= \frac{1}{2}\sum_{k=1}^{12}\log|(\omega_{k}+0.933652)\tau_{1}( \omega_{k})\tau_{2}(\omega_{k})\tau_{3}(\omega_{k})|\] \[\tau_{1}(x)=1.07168+2.06058x+x^{2}>0,\] \[\text{and}\hskip 14.226378pt\tau_{2}(x)=3.19751+3.56774x+x^{2}>0,\] \[\tau_{3}(x)=4.99621+4.42482x+x^{2}>0.\]
4. _From (_37_),_ \(\frac{\partial\mathbf{E}}{\partial\omega_{j}}(\overline{x}_{12})=0,\) _for_ \(j=1,\ldots,12.\)__
5. _Computing the corresponding Hessian matrix at_ \(\overline{x}_{12}\)_, we have that the approximate eigenvalues are_ \[\left\{0.0117,\,0.0278,\,0.0469,\,0.0699,\,0.0978,\,0.1322,\right.\] \[\left.0.1752,\,0.2301,\,0.3016,\,0.3973,\,0.5292,\,0.7179\right\}.\]
_Thus, Theorem 5 holds for this example, and we have the required local electrostatic equilibrium distribution._
**Example 5** (Case in which the conditions of Theorem 5 are not satisfied).:
1. _Laguerre-Sobolev inner product_ \[\langle f,g\rangle_{\mathsf{s}}=\int_{0}^{+\infty}f(x)g(x)e^{-x}dx+f^{\prime}( -1)g^{\prime}(-1)+f^{\prime\prime}(-2)g^{\prime\prime}(-2).\]
2. _Zeros of_ \(S_{12}(x)\)_._ \[\overline{x}_{12}= \left(-2.86242,\,-1.69526,\,0.284629,\,1.36447,\,3.03668,\,5.23686,\right.\] \[\left.7.98826,\,11.3572,\,15.4574,\,20.4841,\,26.8154,\,35.422 \right).\]
3. _Total potential of energy_ \(\mathbf{E}(\overline{\omega})=\sum_{1\leq k<j\leq 12}\log\frac{1}{|\omega_{j}- \omega_{k}|}+\mathbf{F}(\overline{\omega})+\mathbf{G}(\overline{\omega})\)_, where_ \[\mathbf{F}(\overline{\omega})= \frac{1}{2}\sum_{k=1}^{12}\left(\log\frac{1}{|\omega_{k}-1|}+ \log\frac{1}{|\omega_{k}+1|}+\log\frac{1}{|\omega_{k}-2|^{3}}\right),\] \[\mathbf{G}(\overline{\omega})= \frac{1}{2}\sum_{k=1}^{12}\log|(\omega_{k}+1.44591)(\omega_{k}+ 1.78194)(\omega_{k}+2.7333)\tau_{1}(\omega_{k})\tau_{2}(\omega_{k})|\] \[\text{and} \tau_{1}(x)=0.440466+1.23097x+x^{2}>0,\] \[\tau_{2}(x)=2.76889+3.19398x+x^{2}>0.\]
4. _From_ (37)_,_ \(\frac{\partial\mathbf{E}}{\partial\omega_{j}}(\overline{x}_{12})=0,\) _for_ \(j=1,\ldots,12.\)__
5. _Computing the corresponding Hessian matrix at_ \(\overline{x}_{12}\)_, we have that the approximate eigenvalues are_ \[\left\{-45.8083,\,-27.1075,\,0.0188,\,0.0473,\,0.0853,\,0.1377,\right.\] \[\left.0.213,\,0.3272,\,0.5154,\,0.8688,\,1.7428,\,7.4559\right\}.\]
_Then, \(\overline{x}_{12}\) is a saddle point of \(\mathbf{E}(\overline{\omega})\)._
**Remark 5.1**.: _As can be noticed, in some cases the configuration given by the external field includes complex points; they correspond to the \(e_{j}\). Specifically, in the examples, these points are given as the zeros of \(\tau(x)\). Since \(\varphi_{1,n}(x)\) is a polynomial with real coefficients, its non-real zeros arise in complex conjugate pairs. Note that_
\[\frac{a}{x-z}+\frac{a}{x-\overline{z}}=a\,\frac{2x-2\Re z}{x^{2}-2x\Re z+|z|^{2}};\]
_where \(\Re z\) denotes the real part of \(z\). The antiderivative of the previous expression is \(a\ln\left(x^{2}-2x\Re z+|z|^{2}\right)\). This means, in our case, that the presence of complex roots does not change the formulation of the energy function._
**Remark 5.2**.: _Theorem 5 gives a general condition to determine whether the electrostatic model is an extension of the classical cases. However, in Example 5, the Hessian has two negative eigenvalues corresponding to the first two variables \(\omega_{1}\) and \(\omega_{2}\)._
_Therefore, we do not have the nice interpretation given in Theorem 5. However, note that the rest of the eigenvalues are positive, which means that the number_
\[\frac{\partial^{2}\mathbf{H}_{n}}{\partial\omega_{k}^{2}}(\overline{x}_{n})\]
_remains positive for \(k=3,\ldots,12\). In this case, the potential function exhibits a saddle point. The presence of the saddle point is somehow justified by the attractor points \(-1.44591\), \(-1.78194\) and \(-2.7333\) having two zeros, \(x_{12,1}\approx-2.86242\) and \(x_{12,2}\approx-1.69526\), in their vicinity. In this case, we are able to give an interpretation of the position of the zeros by considering a problem of conditional extremes._
_Assume that when checking the Hessian we obtained that the eigenvalues \(\lambda_{n,i}\), for the indexes \(i\in\mathcal{E}\subset\{1,2,\ldots,n\}\), are negative or zero. Without loss of generality, assume that this happens for the first \(m_{\mathcal{E}}=|\mathcal{E}|\) variables. This is a saddle point. However, the rest of the eigenvalues are positive, which means that the truncated Hessian \(\nabla^{2}_{\omega_{m_{\mathcal{E}}}\omega_{m_{\mathcal{E}}}}\mathbf{E}\) formed by taking the last \(n-m_{\mathcal{E}}\) rows and columns of \(\nabla^{2}_{\overline{\omega}\,\overline{\omega}}\mathbf{E}\) is a positive definite matrix by the same arguments used in the proof of Theorem 5._
_Thus, let us define the following conditional extremes problem with the following notation \(\overline{\omega}=\overline{\omega}_{n}=(\omega_{1},\omega_{2},\ldots,\omega _{n})\in\mathds{R}^{n}\)_
\[\min_{\overline{\omega}_{n}\in\mathds{R}^{n}}\mathbf{E}(\overline{ \omega}_{n})\] \[\text{subject to }\omega_{k}-x_{k}=0,\text{ for all }\ k=1,\ldots,m_{\mathcal{E}}.\]
_Note that this problem is equivalent to solving_
\[\min_{\overline{\omega}_{n-m_{\mathcal{E}}}\in\mathds{R}^{n-m_{\mathcal{E}}}} \mathbf{E}_{R}(x_{1},\ldots,x_{m_{\mathcal{E}}},\overline{\omega}_{n-m_{ \mathcal{E}}}).\]
_Let us prove that \(\overline{x}_{n-m_{\mathcal{E}}}\) is a minimum of this problem. Note that the gradient of this function corresponds to the last \(n-m_{\mathcal{E}}\) conditions of (37), and the second order condition is given by the truncated Hessian \(\nabla^{2}_{\omega_{m_{\mathcal{E}}}\omega_{m_{\mathcal{E}}}}\mathbf{E}( \overline{x}_{m_{\mathcal{E}}})\), which is, by hypothesis, positive definite._
_Therefore, the configuration \(\overline{x}_{n}\) corresponds to the local equilibrium of the energy function (38) once the first \(m_{\mathcal{E}}\) charges are fixed._
|
2302.05642 | Resonance-facilitated three-channel p-wave scattering | Feshbach resonances of arbitrary width are typically described in terms of
two-channel models. Within these models, one usually considers a single dressed
resonance, with the option to extend the analysis by including resonant
open-channel features that can drastically change the observed threshold
effects. For the strong $^{40}\mathrm{K}$ p-wave resonance studied in Ref.
[1], the interplay between an open-channel shape resonance and the
Feshbach resonance could explain the unexpected nonlinear variation of the
binding energy with magnetic field. However, the presented two-channel
treatment relies on the introduction of two independent fitting parameters,
whereas the typical Breit-Wigner expression would only account for one. This
results in an effective magnetic moment that acquires a nonphysical value,
which is an indication of a major shortcoming of the two-channel model
treatment. In this study, we observe how the presence of a closed-channel shape
resonance explains the physical mechanism behind the observations and
demonstrates the need of a three-channel treatment. We introduce our novel
model as _resonance facilitated_, where all coupling is mediated by the
Feshbach state, while there is no direct coupling between the additional
channel and the open channel. Notably, the resonance-facilitated structure
greatly reduces the complexity of the full three-channel model. The typical
Breit-Wigner form of the two-channel Feshbach formalism is retained and the
full effect of the added channel can be captured by a single resonance dressing
factor, which describes how the free propagation in the Feshbach state is
dressed by the added channel. | Denise Ahmed-Braun, Paul Julienne, Servaas Kokkelmans | 2023-02-11T10:07:53Z | http://arxiv.org/abs/2302.05642v1 | # Resonance-facilitated three-channel p-wave scattering
###### Abstract
Feshbach resonances of arbitrary width are typically described in terms of two-channel models. Within these models, one usually considers a single dressed resonance, with the option to extend the analysis by including resonant open-channel features that can drastically change the observed threshold effects. For the strong \({}^{40}\)K p-wave resonance studied in Ref. [1], the interplay between an open-channel shape resonance and the Feshbach resonance could explain the unexpected nonlinear variation of the binding energy with magnetic field. However, the presented two-channel treatment relies on the introduction of two independent fitting parameters, whereas the typical Breit-Wigner expression would only account for one. This results in an effective magnetic moment that acquires a nonphysical value, which is an indication of a major shortcoming of the two-channel model treatment. In this study, we observe how the presence of a closed-channel shape resonance explains the physical mechanism behind the observations and demonstrates the need of a three-channel treatment. We introduce our novel model as _resonance facilitated_, where all coupling is mediated by the Feshbach state, while there is no direct coupling between the additional channel and the open channel. Notably, the resonance-facilitated structure greatly reduces the complexity of the full three-channel model. The typical Breit-Wigner form of the two-channel Feshbach formalism is retained and the full effect of the added channel can be captured by a single resonance dressing factor, which describes how the free propagation in the Feshbach state is dressed by the added channel.
## I Introduction
Feshbach resonances have given experimentalists unprecedented control over two-body interactions in quantum degenerate fluids, greatly adding to the versatility of quantum gases. The tunability of Feshbach resonances follows from the magnetic moment difference \(\delta\mu\) between hyperfine channels. Whereas the multichannel nature of these resonances is essential, it also makes them complex to describe. The easiest approach to retain the magnetic-field dependence is to use a two-channel model. Especially for pairs formed in s-wave channels, these models have been successful in describing both resonance width-dependent as well as universal behavior [2].
Following the experimental observation of p-wave resonances, [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16], recent studies have revealed the existence of p-wave universal behavior [17; 18; 19; 20; 21]. These systems are considerably different from their s-wave counterparts. Despite the presence of strong three-body losses, they exhibit interesting features. P-wave interactions allow for the existence of multiple superfluid phases related to different projections of the angular momentum of Cooper pairs, some of which are similar to the phases of superfluid He-3 [22], and exhibit a phase transition if tuned from BCS to BEC rather than a smooth crossover [23]. In addition, the prospects of duality between strongly interacting odd waves and weakly interacting even waves in one-dimensional systems with suppressed three-body losses [24; 25; 26; 27] and the topological phase transitions in two-dimensional systems [28; 29] and engineered states [30; 31; 32] explain the interest in understanding the details of p-wave interactions.
The explicit two-channel treatment of p-wave resonances has been the topic of several theoretical studies [33; 11; 34]. Similar to their s-wave counterparts, these two-channel models allow for the inclusion of resonant open-channel features and can correctly capture the interplay between open-channel and Feshbach resonances [35; 1]. However, since the Feshbach channel in these models can comprise multiple hyperfine channels, resonant features in these hyperfine channels cannot be treated explicitly in this formalism. The breakdown of the two-channel model can be observed in the \({}^{40}\)K Feshbach resonance studied in Ref.[1]. Here, the Feshbach part of the two-channel scattering matrix, or S-matrix, element \(S_{\text{FB}}\) had to be redefined in order to match coupled-channels (CC) data. The typical Breit-Wigner form of the S-matrix element [36; 37] was replaced by a function where the resonance width and shift could be fitted to the CC data independently, such that
\[S_{\text{FB}} =1-\frac{i\Gamma(E)}{E-\delta\mu(B-B_{n})-\Delta_{\text{res}}(E)+ \frac{i}{2}\Gamma(E)}\] \[\to 1-\frac{igk^{3}}{E-c+\frac{i}{2}gk^{3}} \tag{1}\]
Whereas the two-channel model can capture the correct low-energy scaling, the differential magnetic moment \(\delta\mu\approx 0.5\) MHz/G that can be extracted from the fit by mapping the artificial model back onto the Breit-Wigner form results in a poor quantitative match of the realistic two-channel model with the CC data, as shown in Fig. 3(c) of Ref. [1] and reproduced here in Fig. 1.
In this paper we relate this discrepancy to the presence of a near-threshold shape resonance in one of the hyperfine channels other than the entrance channel. The need to treat this channel explicitly and hence upgrade to a three-channel model to correctly capture the physics
is not specific to Feshbach resonances. Other fields of physics, including two-color photoassociation experiments, stimulated Raman adiabatic passage (STIRAP) [38; 39], as well as electromagnetically induced transparency (EIT) [40; 41; 42; 43], similarly reveal new physics that cannot be explained by reduced two-channel models.
The inclusion of extra channels to the scattering problem rapidly increases the complexity of the analysis and risks obscuring the intuitive understanding of the physics that two-channel models offer. As motivated in Sec. II.2, the magnetic-field dependence of the inelastic loss in the \(\left|bb\right\rangle\) channel presented in Fig. 1 leads us to expect that the system is well-described in terms of a model where all interactions are facilitated by the Feshbach resonance. This resonance-facilitated structure reduces the complexity significantly. It allows the effect of the third channel to be fully captured by a single dressing factor, later defined as D. Physically, this factor describes how the free propagation in the Feshbach state is dressed by the third channel. As such, we retain the overall Breit-Wigner form of the two-channel S-matrix, allowing for a relatively straightforward physical interpretation of the results as compared to full CC output. We find that the resonance-facilitated three-channel model with shape resonances in the entrance channel and the added third channel contains the correct low-energy physics and, as outlined in Sec. II.1, allows for the direct implementation of values for \(\delta\mu\) consistent with CC data.
This paper is outlined as follows. In Sec. II we study the CC structure of the relevant \({}^{40}\)K system and use full CC data to motivate the reduction to the resonance-facilitated model. Next, in Sec. III we analyse how the presence of open-channel resonances can generally affect the threshold behavior of a ramping p-wave Feshbach state. We then proceed with the derivation of the resonance-facilitated three-channel model in Sec. IV, and discuss how we use a Gamow expansion to account for the shape resonances. In Sec. V, we present the field dependence of the resonance scattering parameters. These parameters are then used to interpret the results and compute the resonance width in Sec. VI, before we draw our conclusions in Sec. VII.
## II Coupled-channels calculations
We study fermionic \({}^{40}\)K, which has a nuclear spin of four and ground state \({}^{2}\)S\({}_{1/2}\). Hence, the single particle hyperfine ground state manifold contains the total spin states \(f=9/2\) and \(f=7/2\), with respectively ten and eight spin components with projections \(m_{f}\). We label these states as \(\left|a\right\rangle,\left|b\right\rangle,\left|c\right\rangle,...\) in order of increasing energy. As \({}^{40}\)K has an inverted hyperfine structure, the entrance channel of interest in this paper with two atoms in the \(\left|f,m_{f}\right\rangle=\left|9/2,-7/2\right\rangle\) state at zero \(B\)-field corresponds to the \(\left|bb\right\rangle\) channel. Apart from the Feshbach state, which is a magnetic-field dependent combination of hyperfine channels, we explicitly include the \(\left|ac\right\rangle\) channel in the three-channel model.
### Coupled-channels structure
To account for the relative angular momentum between two interacting atoms, we extend the two-particle hyperfine basis by including the partial wave quantum numbers \(L\) and \(M_{L}\). Since we are interested in the collision between two atoms which are both in the \(\left|b\right\rangle\) state, the antisymmetry requirement for fermions implies that these atoms can only collide with odd \(L\) values. In the low-energy limit, this results in the dominance of p-wave interactions with \(L=1\) and \(M_{L}=-1,0\) or \(1\). At small inter-particle separations, the \(\left|b\right\rangle\) atoms experience direct spin-exchange interactions that couple hyperfine channels with conserved \(m_{f_{1}}+m_{f_{2}}\). However, the anisotropy of the p-wave interaction additionally results in a non-zero dipole-dipole interaction that couples additional channels with conserved \(M_{\rm tot}=m_{f_{1}}+m_{f_{2}}+M_{L}\). This means that the collision channels with \(M_{\rm tot}=-8,-7\) and \(-6\) couple \(8,13\) and \(20\) channels respectively. Figure 2 shows the channel potentials that are coupled for interactions with \(M_{\rm tot}=-7\). Whereas the inset reveals that the interaction potential \(V_{\rm int}\) is much larger for direct spin-exchange interactions, the dipole-dipole interaction can generally not be neglected and results in the experimentally observable splitting of the Feshbach resonance for different values of \(\left|M_{L}\right|\)[44].
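The channel counting quoted above is straightforward bookkeeping and can be reproduced directly. The sketch below (our own enumeration, not part of the CC code) counts the unordered pairs of \({}^{40}\)K hyperfine states \(|f_{1}m_{f_{1}},f_{2}m_{f_{2}}\rangle\) that can combine with \(L=1\) and \(M_{L}\in\{-1,0,1\}\) to a given \(M_{\rm tot}\); pairs of identical states are allowed since \(L=1\) is odd:

```python
# Our own channel count for 40K (f = 9/2 and f = 7/2 hyperfine manifolds):
# unordered pairs |f1 m1, f2 m2> with m1 + m2 + M_L = M_tot and M_L in {-1,0,1}.
from fractions import Fraction as F
from itertools import combinations_with_replacement

states = [(F(9, 2), F(m, 2)) for m in range(-9, 10, 2)] \
       + [(F(7, 2), F(m, 2)) for m in range(-7, 8, 2)]

def count_channels(M_tot):
    count = 0
    for (f1, m1), (f2, m2) in combinations_with_replacement(states, 2):
        if m1 + m2 - 1 <= M_tot <= m1 + m2 + 1:   # some M_L in {-1, 0, 1} fits
            count += 1
    return count

for M_tot in (-8, -7, -6):
    print(M_tot, count_channels(M_tot))   # prints 8, 13 and 20 channels
```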
Figure 1: **Binding energy and inelastic loss in the \(\left|bb\right\rangle\) channel** The below threshold binding energy \(E_{b}\) (full red line) and above threshold inelastic loss (contour plot) of the \(\left|bb\right\rangle\) state as a function of magnetic field \(B\) are extracted from CC calculations. The green dashed line follows from the artificial two-channel model mapped back onto the original Breit-Wigner form. Whereas it captures the bending of the dimer energy near-threshold consistent with CC data, it fails to match the data quantitatively due to the poorly fitted differential magnetic moment \(\delta\mu\).
For all values of \(M_{\rm tot}\), there are five channels that couple through direct spin-exchange interactions. As can be seen in Fig. 2, three of these channels, \(|bb\rangle\), \(|rr\rangle\) and \(|ac\rangle\), are predominantly triplet in character, whereas the channels \(|aq\rangle\) and \(|br\rangle\) are mostly singlet. The inset reveals that the coupling between the singlet channels and the \(|bb\rangle\) state is about three times larger than the coupling between the triplet channels and the \(|bb\rangle\) state. This is consistent with the spin-exchange interaction following from the energy difference between the singlet and triplet potentials, such that it should be larger for states with predominantly different spin symmetry.
Similarly to the \(|bb\rangle\) state, the \(|ac\rangle\) channel has a near-threshold shape resonance. It is the interplay between this additional shape resonance and the Feshbach state which we aim to capture by going from a two- to a three-channel model.
### The differential magnetic moment and inelastic losses
Around resonance (\(B\approx 198.3/198.8\) G) the channels \(|bb\rangle\) and \(|ac\rangle\) are separated by an asymptotic energy difference of \(E_{\rm th}/h\approx 2.4\) MHz. Defining the energy \(E\) of the incoming state with respect to the \(|bb\rangle\) threshold, this means that \(|ac\rangle\) is energetically closed for \(E<E_{\rm th}\) and open for \(E\geq E_{\rm th}\). Figure 1 shows the rapid increase in the \(|bb\rangle\) channel inelastic loss once the \(|ac\rangle\) channel opens. The shape resonances in the \(|bb\rangle\) and \(|ac\rangle\) channels are magnetic-field independent and the energy difference \(E_{\rm th}\) scales with the magnetic moment difference between \(|bb\rangle\) and \(|ac\rangle\), which is small [45]. This means that loss caused by direct coupling between the two shape resonances should be largely magnetic-field independent. However, Fig. 1 instead shows that the observed loss feature is strongly magnetic-field dependent. This implies the importance of the ramping Feshbach state and motivates the usage of the resonance-facilitated three-channel model where \(|bb\rangle\) and \(|ac\rangle\) are solely coupled to the Feshbach state.
The magnetic-field dependence of the Feshbach channel is quantified by \(\delta\mu\). Away from resonance, where the dressing of the (quasi)-bound state by the \(|bb\rangle\) and \(|ac\rangle\) channels is negligible, \(\delta\mu\) can be calculated from the slope of the (quasi)-bound state energy versus magnetic-field. Connecting the below- and above- threshold regions as visualized in Fig. 3(a) through interpolation, we find the magnetic-field dependent \(\delta\mu\) as presented in Fig. 3(b) [46].
The interpolated values for \(\delta\mu\) will be used in the comparison of the resonance facilitated three-channel model to the CC data. Hence, contrary to the method followed in Ref. [1], \(\delta\mu\) is not a free parameter in our analysis and has a physically realistic value.
Figure 3: **Computing the differential magnetic moment** (a) The below-threshold binding energy (red line) and above threshold value of \(|S_{\rm ab,ab}|^{2}\). The \(|ab\rangle\) state is weakly interacting with \(|bb\rangle\) and is therefore an excellent probe of \(\delta\mu\). The blue circles indicate the localization of the binding energy/loss maxima. The dashed green line indicates how the slope \(dE/dB\) is significantly different below/above threshold. (b) The dashed red line shows the differential magnetic moment computed using the binding energy values and loss maxima as a function of B-field. The blue line represents a quadratic fit of the data points. The values of this fit are used to define \(\delta\mu(B)\).
Figure 2: **Channel and interaction potentials.** Channel potentials \(V_{\rm ch,ch}\) in the \(M_{\rm tot}=-7\) subspace expressed in units of the van der Waals (vdW) energy \(E_{\rm vdW}/h=23.375\) MHz as a function of the interparticle distance in terms of the vdW length \(r_{vdW}=65.0223a_{0}\), with Bohr radius \(a_{0}\). Channels are either coupled to the \(|bb\rangle\) entrance channel (black line) through spin-exchange interactions (red lines) or dipole-dipole interactions (blue lines). All channels are listed on the right side of the figure in order of decreasing asymptotic energy and on the left side of the figure in order of potential energy in the plotted regime. Here, the top group of potentials is predominantly triplet, whereas the bottom group of potentials is predominantly singlet. Using the same color coding, we indicate the interaction potentials to the \(|bb\rangle\) channel \(V_{\rm bb,ch}\) in the inset. Only channels directly coupled to the \(|bb\rangle\) state through spin-exchange interactions have a distinguishable value on this scale.
## III Poles and resonances of the S-matrix
Considering the effect of the \(|bb\rangle\) shape resonance, the Feshbach coupling and the dipole-dipole interaction separately, we express the total S-matrix as
\[S_{\rm tot}=S_{\rm P}S_{\rm FB}S_{\rm dip}, \tag{2}\]
where \(S_{\rm P}\) represents the direct scattering part and where \(S_{\rm dip}\) represents the dipole-dipole contribution. Using the Born approximation presented in Sec. IV C of Ref. [1] to factor out the dipole-dipole contribution, the remaining S-matrix \(S=S_{\rm tot}S_{\rm dip}^{-1}\) should follow the typical effective range approximation (ERA) for p-wave interactions as stated in Eq. (31) and will be used in the remainder of this paper. In the following two subsections, we proceed with the separate analysis of the remaining contributions \(S_{\rm P}\) and \(S_{\rm FB}\).
### Single channel
#### iii.1.1 Factorizing the S-matrix
Solving the radial Schrödinger equation one can generally define \(\psi_{\ell}(k,r)\) as the physical solution which satisfies two boundary conditions and which represents the radial part of the wave function of the normalized state \(|E,\ell,m\rangle\). Whereas physically correct, the two boundary conditions imply that the normalization of the wave function is a function of the potential at all points. For physically realistic potentials this is often complex and one has to rely on numerical methods to compute the radial wave function.
Hence, one can alternatively define a different class of solutions which satisfy single point boundary conditions. Two such solutions are the regularized wave function \(\phi_{\ell}(k,r)\) and the Jost solutions \(f_{\ell}^{\pm}(k,r)\), respectively defined as [47]
\[\lim_{r\to 0}(2\ell+1)!!r^{-\ell-1}\phi_{\ell}(k,r)=1 \tag{3}\] \[\lim_{r\to\infty}e^{\pm ikr}f_{\ell}^{\pm}(k,r)=i^{\pm l} \tag{4}\]
The previous two expressions indicate that \(\phi_{\ell}(k,r)\) is defined to be regular at the origin whereas the Jost solutions \(f_{\ell}(k,r)^{\pm}\) are set to define purely incoming/outgoing waves, but are non-regular at the origin.
As both classes of solutions solve the Schrödinger equation, they can be matched by considering their asymptotic limits, finding that \(\psi_{\ell}(k,r)=\phi_{\ell}(k,r)/\mathcal{F}_{\ell}(k)\) and more notably [37]
\[S_{\ell}(k)=\frac{\mathcal{F}_{\ell}(-k)}{\mathcal{F}_{\ell}(k)}, \tag{5}\]
where we have introduced the Jost function
\[\mathcal{F}_{\ell}(k)=\lim_{r\to 0}\frac{(-kr)^{\ell}}{(2\ell-1)!!}f_{\ell}^{ \pm}(k,r). \tag{6}\]
Equation (5) is particularly useful as the single point boundary condition which is used to define the Jost functions ensures that these functions can be solved iteratively for any potential \(V(r)\) and allows for a Hadamard expansion [48]
\[f_{\ell}(k,r)=f_{\ell}(0,r)e^{ikr_{\rm c}}\prod_{\rm n}\left(1-\frac{k}{k_{\rm n}}\right), \tag{7}\]
with momentum-space poles \(k_{\rm n}\) and undetermined constants \(f_{\ell}(0,r)\) and \(r_{\rm c}\) which are set by non-resonant background scattering processes. Substituting Eq. (7) into Eq. (5), we find the Ning-Hu representation of the S-matrix [49]
\[S_{\ell}(k)=e^{-2ikr_{\rm c}}\prod_{\rm n}\left(\frac{k_{\rm n}+k}{k_{\rm n}-k }\right). \tag{8}\]
Equation (8) enables us to express the full S-matrix in terms of its resonant poles. These resonant poles can be divided into four categories depending on their position in the complex momentum plane [50].
First of all, the poles located on the positive imaginary axis (\(k_{n}=i\beta\), with \(\beta>0\)) correspond to bound states. The wave function of these states decays exponentially in the asymptotic regime, such that \(\psi_{\ell}(k_{\rm n},r)\underset{r\to\infty}{\sim}e^{-\beta r}\). Of all the four classes of poles, only these poles correspond to physical states. The second class of poles are located on the negative imaginary axis (\(k_{n}=-i\beta\)) and are referred to as virtual, or anti-bound, states. Contrary to the bound states, the asymptotic part of the wave function of these virtual states increases exponentially, such that \(\psi_{\ell}(k_{\rm n},r)\underset{r\to\infty}{\sim}e^{\beta r}\). The third and fourth classes of poles are related as, according to the Schwarz reflection principle [47], they always occur as twin poles located at \(k=\pm\alpha-i\beta\) with \(\alpha>0\). The resonance states (\(k=\alpha-i\beta\)) represent outgoing waves whose amplitude increases exponentially, such that \(\psi_{\ell}(k_{n},r)\underset{r\to\infty}{\sim}e^{i\alpha r}e^{\beta r}\), and are hence also referred to as decaying states. The anti-resonance states on the other hand (\(k=-\alpha-i\beta\)) represent incoming waves with an asymptotic wave function of the form \(\psi_{\ell}(k_{n},r)\underset{r\to\infty}{\sim}e^{-i\alpha r}e^{\beta r}\) and are hence also referred to as capturing states.
Whereas all four classes of poles can occur for all partial wave states, the presence of a centrifugal barrier impacts the pole locations. As first studied for square-wells in Ref. [51] and as indicated in Fig. 4, particularly the decaying and capturing poles are affected by the centrifugal barrier. For s-wave collisions, these poles collide in the lower half of the complex momentum plane upon increasing the potential interaction strength and turn into virtual states. For higher partial waves on the other hand, the centrifugal barrier shifts the collision point to the origin. This means that the capturing state turns into a virtual state, whereas the decaying state immediately turns into a bound state. Physically we can regard this process as the decaying state corresponding to a quasi
bound state, or shape resonance, trapped behind the centrifugal barrier and transforming into a true bound state upon an increase in the potential depth.
Since only the poles that are sufficiently close to the real axis of the complex momentum plane produce experimentally observable sudden changes in the phase shift \(\delta_{\ell}(k)\), it is generally possible to truncate the infinite product over the n states in Eq. (8). As, contrary to s-wave scattering, the complex poles for \(\ell\neq 0\) scattering processes propagate close to the real momentum axis before they collide at the origin, they generally provide a non-negligible contribution to the phase shift. This emphasizes the importance of the inclusion of p-wave shape resonances in the analysis of the S-matrix.
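As a toy illustration of Eq. (8) (with assumed pole positions, not values fitted to \({}^{40}\)K), a single decaying/capturing twin-pole pair already yields a unitary S-matrix on the real momentum axis and the characteristic resonant phase jump of approximately \(\pi\):

```python
# A toy Ning-Hu S-matrix (ours): twin poles k = +/- a - i b plus a background
# range parameter r_c; |S| = 1 on the real axis, delta(k) jumps by ~pi near k = a.
import numpy as np

a, b, rc = 1.0, 0.05, 0.2                   # assumed pole position and range
poles = [a - 1j*b, -a - 1j*b]               # decaying and capturing twin poles

def S(k):
    out = np.exp(-2j * k * rc)
    for kn in poles:
        out = out * (kn + k) / (kn - k)
    return out

k = np.linspace(0.5, 1.5, 2001)
s = S(k)
print(np.allclose(np.abs(s), 1.0))          # unitarity on the real momentum axis
delta = 0.5 * np.unwrap(np.angle(s))        # phase shift, S = exp(2 i delta)
print(delta[-1] - delta[0])                 # ~pi, up to the slow background drift
```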
#### iii.1.2 Gamow states
As presented in Eq. (8), the S-matrix can be fully expanded in terms of its poles. As such, it is convenient to introduce the set of wave functions \(\Omega_{\ell,n}(r)\) that are the eigenstates of the Schrödinger equation with eigenvalues \(k_{\mathrm{n}}\), such that
\[\Omega_{\ell,n}(r)=\lim_{k\to k_{\mathrm{n}}}\psi_{\ell}(k,r), \tag{9}\]
which satisfy the following set of boundary conditions
\[\Omega_{\ell,n}(0)=0\qquad\text{and} \tag{10}\] \[\frac{d\Omega_{\ell,n}(R)}{dr}=ik_{\mathrm{n}}\Omega_{\ell,n}(R), \tag{11}\]
where \(R\) is an arbitrary distance chosen in the asymptotic regime. The wave functions \(\Omega_{\ell,n}(r)\) are more commonly referred to as Gamow states, named after G. Gamow who first studied the decaying states as introduced in Sec. III.1.1 in the context of alpha decay. Since the wave number \(k_{\mathrm{n}}\) is complex for decaying states (\(k=\alpha-i\beta\)), the amplitude of the corresponding Gamow state grows exponentially and the solutions are non-Hermitian. This is not problematic since the decaying states (as well as the virtual and capturing states) are located on the second, non-physical, energy sheet. On this sheet the Hamiltonian is not necessarily Hermitian [50]. It is however possible to form a biorthogonal set by including the dual states \(|\Omega_{\ell,n}^{D}\rangle\) that satisfy purely incoming boundary conditions and correspond to the capturing poles (\(k=-\alpha-i\beta\)), such that \(|\Omega_{\ell,n}^{D}\rangle\equiv|\Omega_{\ell,n}\rangle^{*}\) and \(\langle\Omega_{\ell,n}^{D}|\Omega_{\ell,n^{\prime}}\rangle_{R}=\delta_{\mathrm{n},n^{\prime}}\) [52].
In the limit where \(\alpha\to 0\), the (dual) Gamow state reduces to the (virtual) bound state wave function. The Gamow state and its dual thus provide a useful set of eigenfunctions of the Schrödinger equation in the entire complex momentum plane. Their usefulness becomes particularly clear upon using the Mittag-Leffler theorem [53] and realizing that the Green's function shares its poles with the S-matrix, such that we can expand the Green's function in the following convergent series [47; 54]
\[G_{\ell}^{+}(E,r,r^{\prime})=\sum_{n=1}^{N}\frac{\Omega_{\ell,n}(r)\Omega_{\ell,n}^{D,*}(r^{\prime})}{k_{\ell,n}(k-k_{\ell,n})}+\frac{1}{2}\sum_{n=N+1}^{\infty}\left[\frac{\Omega_{\ell,n}(r)\Omega_{\ell,n}^{D,*}(r^{\prime})}{k_{\ell,n}(k-k_{\ell,n})}-\frac{\Omega_{\ell,n}^{D}(r)\Omega_{\ell,n}^{*}(r^{\prime})}{k_{\ell,n}^{*}(k+k_{\ell,n}^{*})}\right], \tag{12}\]
where N is the (finite) number of bound and virtual states. As discussed in Sec. III.1.1 it is generally possible to truncate the summation over the infinite number of complex poles to a finite number of poles with observable contributions to the phase shift.
### Two-channel
Whereas the previous section focused on single-channel resonances, the multichannel nature of scattering can result in the presence of additional (near-)resonant states. In this section, we treat the interplay between these single-channel and multichannel (Feshbach) resonances.
Figure 4: **Poles of the S-matrix** of the p-wave square-well in the complex momentum plane. Increasing color intensity corresponds to increasing depth of the potential well. The arrows indicate the direction in which the poles move. The inset shows the poles of the S-matrix of the s-wave square-well.
#### iii.2.1 Feshbach resonances
For small interparticle spacings, central interactions between two particles can couple different hyperfine channels, which allows for the presence of Feshbach resonances. Contrary to single-channel resonances, these Feshbach resonances are magnetic-field dependent, due to the difference in the magnetic moment between hyperfine states, and can hence be tuned through the variation of an externally applied magnetic field.
Retaining the multichannel nature of these resonances while limiting the complexity of the analysis, these Feshbach resonances are generally treated in a two-channel model. Here, the coupling of a (near-)resonant state in a closed channel subspace \(\mathcal{Q}\) to the open channel subspace \(\mathcal{P}\) results in the desired resonance. The S-matrix in the open channel subspace \(\mathcal{P}\) for these two-channel models has been the topic of many studies and can generally be expressed as
\[S=S_{\mathrm{P}}\left(1-\frac{i\Gamma(E)}{E-\epsilon_{c}-\Delta_{\mathrm{res}}(E)+\frac{i}{2}\Gamma(E)}\right), \tag{13}\]
where \(\Gamma(E)\) is the resonance width, \(\epsilon_{c}\) the bare resonance energy, \(\Delta_{\mathrm{res}}(E)\) the shift of the resonance energy due to the dressing of the Feshbach state, and \(S_{\mathrm{P}}\) the direct open-channel scattering matrix, which can contain background as well as resonant effects as treated in Sec. III.1.1.
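For illustration, Eq. (13) can be evaluated directly once \(\Gamma(E)\) and \(\Delta_{\rm res}(E)\) are parametrized. The minimal sketch below assumes a toy p-wave Wigner-scaling width \(\Gamma(E)\propto E^{3/2}\), a constant shift, and a trivial direct part \(S_{\rm P}=1\); all numbers are placeholders rather than fitted parameters. For real \(\Gamma\) and unitary \(S_{\rm P}\), the resulting S-matrix is unitary and the phase shift rises by roughly \(\pi\) across the resonance.

```python
import numpy as np

def S_two_channel(E, eps_c, g=0.05, delta_res=0.02, S_P=1.0):
    """Eq. (13) with a toy p-wave width Gamma(E) = g E^{3/2} and a
    constant resonance shift; all parameter values are placeholders."""
    Gamma = g * E**1.5
    return S_P * (1.0 - 1j * Gamma / (E - eps_c - delta_res + 0.5j * Gamma))

E = np.linspace(1e-4, 2.0, 400)
S = S_two_channel(E, eps_c=1.0)
delta = 0.5 * np.unwrap(np.angle(S))     # phase shift, S = exp(2 i delta)
print(np.allclose(np.abs(S), 1.0))       # True: unitary for real Gamma
print(delta[-1] - delta[0])              # ~pi variation across the resonance
```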
Short-range p-wave interactions allow for an effective range expansion in the low-energy regime. Using this expansion (stated explicitly for the phase shift in Eq. (31)), one finds that the bound-state poles of the two-channel part of the S-matrix in Eq. (13) vary linearly with magnetic field. However, in the presence of (near-)resonant open channel interactions, the poles in the direct part of the S-matrix \(S_{\mathrm{P}}\), contained in the Ning-Hu representation of Eq. (8), interact with the Feshbach state and alter its magnetic-field variation. As indicated in Figs. 5(a)-(b), the presence of a near-threshold open-channel bound or virtual state is only observable in a relatively small magnetic-field regime, and its effect on experimental observables is hence limited. This is a consequence of the naturally narrow character of the p-wave interactions, which originates from the presence of the centrifugal barrier. On the other hand, the decaying and capturing states in Fig. 5(c) have a relatively large effect compared to the bound and virtual states. These poles contain a real as well as an imaginary energy part and consequently add a width to the resonance. The apparent wide character of the resulting Feshbach state is clearly visible in Fig. 5(c) and is qualitatively consistent with the observed coupled-channels structure as presented in Fig. 1.
## IV Resonance Facilitated Scattering
Whereas the two-channel model presented in Sec. III.2.1 qualitatively captures the physics observed in Fig. 1, its features do not match the coupled-channels data quantitatively, and one obtains non-physical values of the differential magnetic moment. In this section, we investigate a three-channel model and study how its reduction to a resonance-facilitated form improves on the quantitative correspondence with the CC data.
### Full three-channel model
We consider a three-channel system with two-atom states \(\ket{bb}\) and \(\ket{ac}\) coupled to a ramping Feshbach state \(\ket{c}\) that consists of a (magnetic-field dependent) combination of hyperfine states. Whereas the entrance channel \(\ket{bb}\) is always energetically open, the channel \(\ket{ac}\), which has a threshold energy of \(E_{\mathrm{th}}\approx 2.4-0.014(B-198)\) MHz in the relevant magnetic-field regime, can be either open (\(E>E_{\mathrm{th}}\)) or closed (\(E<E_{\mathrm{th}}\)).
The three-channel model satisfies the following Schrödinger equation
\[E\begin{pmatrix}\psi_{bb}\\ \psi_{c}\\ \psi_{ac}\end{pmatrix}=\begin{pmatrix}\hat{H}_{bb}&\hat{V}_{bb,c}&\hat{V}_{bb,ac}\\ \hat{V}_{c,bb}&\hat{H}_{c}&\hat{V}_{c,ac}\\ \hat{V}_{ac,bb}&\hat{V}_{ac,c}&\hat{H}_{ac}\end{pmatrix}\begin{pmatrix}\psi_{bb}\\ \psi_{c}\\ \psi_{ac}\end{pmatrix}, \tag{14}\]
Figure 5: **Energy of a Feshbach resonance versus magnetic-field** (in arbitrary units) in the presence of a near-threshold open-channel bound state (a), virtual state (b) or decaying and capturing states (c). The energy is extracted from the pole of Eq. (33) of Ref. [1], where we use the Mittag-Leffler series presented in Eq. (36), for different values of the resonance momentum \(k_{n}\). The insets in all figures show the pole locations in the complex momentum plane.
where \(V_{a,b}\) represent potential operators that couple the hyperfine states and where \(H_{a}\) is defined as \(H_{a}=\hat{H}_{a}^{0}+\hat{V}_{a}\), with kinetic energy operator \(\hat{H}_{a}^{0}\) and two-body interaction potential \(\hat{V}_{a}\). We can now proceed in two ways. First, we can use the operator formalism presented in App. A to derive an effective potential interaction \(V_{\text{eff}}\) that solves the Schrödinger equation \(\left(E-H_{\text{eff}}\right)|\psi_{bb}^{+}\rangle=0\). Alternatively, we can follow the steps presented in App. B and analyze the Lippmann-Schwinger equation for the entrance channel wave function \(|\psi_{bb}^{+}\rangle\). Both methods result in the following definition of the effective potential
\[V_{\text{eff}}= V_{bb,bb}+V_{bb,c}AV_{c,bb}+V_{bb,c}AV_{c,ac}G_{ac}^{0}V_{ac,bb}\] \[+V_{bb,ac}G_{ac}^{0}V_{ac,c}AV_{c,bb}+V_{bb,ac}G_{ac}^{0}V_{ac,bb}\] \[+V_{bb,ac}G_{ac}^{0}V_{ac,c}AV_{c,ac}G_{ac}^{0}V_{ac,bb}, \tag{15}\]
with propagators \(G_{a}^{0}=(E-H_{a})^{-1}\) and with the parameter A defined as
\[\text{A}=\frac{\left|\phi_{c}\right\rangle\left\langle\phi_{c} \right|}{E-\epsilon_{c}}\left[1-\frac{\left\langle\phi_{c}|\hat{V}_{c,ac} \left(E-\hat{H}_{ac}\right)^{-1}\hat{V}_{ac,c}|\phi_{c}\right\rangle}{E- \epsilon_{c}}\right]^{-1}, \tag{16}\]
where we have introduced the complex energy shift
\[A_{\text{ac}}(E)=\langle\phi_{c}|\hat{V}_{c,ac}\left(E-\hat{H}_{ac}\right)^{- 1}\hat{V}_{ac,c}|\phi_{c}\rangle \tag{17}\]
and where we have used the single resonance approximation to express the propagator \(G_{c}^{0}\) in the Feshbach channel \(|c\rangle\) as
\[G_{c}^{0}=\frac{\left|\phi_{c}\right\rangle\left\langle\phi_{c}\right|}{E-\epsilon_{c}}. \tag{18}\]
Physically, the parameter A indicates how the Feshbach state can either propagate freely in the \(|c\rangle\) channel or couple to the \(|ac\rangle\) channel where it propagates before coupling back to the \(|c\rangle\) channel. Since \(V_{\text{eff}}\left|\psi_{bb}^{+}\right\rangle=T_{bb,bb}\left|k\right\rangle\)[50], the knowledge of the effective potential allows for the computation of the entrance channel transition matrix element \(T_{bb,bb}\) and hence the scattering matrix element \(S_{bb,bb}\)[55].
### Resonance-facilitated three-channel model
As previously stated in the Introduction, the near-threshold behavior of the \(B=198.8\) G (\(M_{L}=0\)) and \(B=198.3\) G (\(M_{L}=\pm 1\)) resonances is expected to be well-described in terms of a resonance-facilitated model where we neglect the direct coupling between the \(|bb\rangle\) and the \(|ac\rangle\) channels (\(V_{bb,ac}=V_{ac,bb}=0\)).
This approximation simplifies the three-channel model significantly as only the first two terms in the effective potential defined in Eq. (15) remain. We recognize that all the information regarding the coupling between the Feshbach state \(|c\rangle\) and the \(|ac\rangle\) channel is now contained in the single parameter A. In order to completely isolate the effect of the \(|ac\rangle\) channel on the model, we introduce the dressing factor D as
\[\text{D}=\text{A}\left(\frac{\left|\phi_{c}\right\rangle\left\langle\phi_{c} \right|}{E-\epsilon_{c}}\right)^{-1}. \tag{19}\]
In the two-channel limit (\(V_{ac,c}=V_{c,ac}=0\)), we find that \(\text{D}=1\) and Eq. (15) reduces to the well-known two-channel effective potential. Conveniently, the resonance dressing factor allows us to recast the scattering matrix element \(S_{bb,bb}\) into the form
\[S_{bb,bb}=S_{\text{P}}\left(1-\frac{2\pi i\Big{|}\langle\phi_{c} |\hat{V}_{c,bb}|\phi_{bb}^{+}\rangle\Big{|}^{2}}{\frac{E-\delta\mu(B-B_{n})}{ \text{D}}-\langle\phi_{c}|\hat{V}_{c,bb}\left(E-\hat{H}_{bb}\right)^{-1}\hat{ V}_{bb,c}|\phi_{c}\rangle}\right), \tag{20}\]
where \(\delta\mu(B-B_{n})=\epsilon_{c}\), with bare magnetic resonance position \(B_{n}\). The combination of the complex energy shift \(A_{\text{bb}}(E)=\langle\phi_{c}|\hat{V}_{c,bb}\left(E-\hat{H}_{bb}\right)^{-1} \hat{V}_{bb,c}|\phi_{c}\rangle\) and the resonance dressing factor in the denominator of Eq. (20) shift the resonance location to its dressed magnetic-field value \(B_{0}\) and add a width to the resonance. The analysis of this shift and width is presented in Sec. IV.3. Equation (20) implies that the two-channel S-matrix can be used and updated to the three-channel resonance-facilitated model by a simple implementation of the resonance dressing factor. Physically, this factor describes how the free propagation in the Feshbach state is dressed by the \(|ac\rangle\) channel. The details of this dressing and the physics of the dressing factor will be discussed in more detail in Sec. IV.3.
### Gamow expansion for \(|bb\rangle\) and \(|ac\rangle\)
Equation (20) depends on the propagators \(G_{bb}^{0}=(E-\hat{H}_{bb})^{-1}\) and \(G_{ac}^{0}=(E-\hat{H}_{ac})^{-1}\) through the complex energy shifts \(A_{\rm ac}(E)\) and \(A_{\rm bb}(E)\), respectively. As discussed in Sec. II.1, both the \(|bb\rangle\) and the \(|ac\rangle\) channel have a near-threshold shape resonance. Considering that there are no other near-threshold poles in these channels, the shape resonances are the dominant contributions to the Mittag-Leffler expansion, or the Gamow expansion as presented in Eq. (12), such that we can approximate the propagators \(G_{bb}^{0}\) and \(G_{ac}^{0}\) as
\[G_{bb}^{0} = \left(\frac{|\Omega_{bb}\rangle\left\langle\Omega_{bb}^{D}\right|}{2k_{bb}(k-k_{bb})}-\frac{|\Omega_{bb}^{D}\rangle\left\langle\Omega_{bb}\right|}{2k_{bb}^{*}(k+k_{bb}^{*})}\right), \tag{21}\] \[G_{ac}^{0} = \left(\frac{|\Omega_{ac}\rangle\left\langle\Omega_{ac}^{D}\right|}{2k_{ac}(k_{\rm sh}-k_{ac})}-\frac{|\Omega_{ac}^{D}\rangle\left\langle\Omega_{ac}\right|}{2k_{ac}^{*}(k_{\rm sh}+k_{ac}^{*})}\right), \tag{22}\]
where we have introduced the Gamow states \(|\Omega_{bb}\rangle\) and \(|\Omega_{ac}\rangle\) as well as their dual states \(|\Omega_{bb}^{D}\rangle\equiv|\Omega_{bb}\rangle^{*}\) and \(|\Omega_{ac}^{D}\rangle\equiv|\Omega_{ac}\rangle^{*}\). In addition, the shifted momentum \(k_{\rm sh}\) in Eq. (22) is defined as \(k_{\rm sh}=\sqrt{E-E_{\rm th}}\) and accounts for the energy threshold difference between the \(|bb\rangle\) and \(|ac\rangle\) channels as discussed in Sec. II.1.
Substituting Eqs. (21) and (22) into Eqs. (20) and (16), we find that [1]
\[A_{\rm bb}(E) \approx \frac{\langle\phi_{c}|\hat{V}_{c,bb}|\Omega_{bb}\rangle\left\langle\Omega_{bb}^{D}|\hat{V}_{bb,c}|\phi_{c}\right\rangle}{2k_{bb}(k-k_{bb})}-\frac{\langle\phi_{c}|\hat{V}_{c,bb}|\Omega_{bb}^{D}\rangle\left\langle\Omega_{bb}|\hat{V}_{bb,c}|\phi_{c}\right\rangle}{2k_{bb}^{*}(k+k_{bb}^{*})} \tag{23}\]
and
\[A_{\rm ac}(E) \approx \frac{\langle\phi_{c}|\hat{V}_{c,ac}|\Omega_{ac}\rangle\left\langle\Omega_{ac}^{D}|\hat{V}_{ac,c}|\phi_{c}\right\rangle}{2k_{ac}(k_{\rm sh}-k_{ac})}-\frac{\langle\phi_{c}|\hat{V}_{c,ac}|\Omega_{ac}^{D}\rangle\left\langle\Omega_{ac}|\hat{V}_{ac,c}|\phi_{c}\right\rangle}{2k_{ac}^{*}(k_{\rm sh}+k_{ac}^{*})}. \tag{24}\]
Using the Wigner threshold scaling of the Gamow states as outlined in Ref. [1] and using the definition \(A(E)=\Delta_{\rm res}(E)-\frac{i}{2}\Gamma(E)\) with energy shift \(\Delta_{\rm res}(E)\) and energy width \(\Gamma(E)\), we can obtain
\[\Delta_{\rm res,bb}(E) \approx g_{bb,c}\,{\rm Re}\left\{\frac{E_{bb}^{3/2}}{E-E_{bb}}\right\} \tag{25}\] \[\Gamma_{bb}(E) \approx -2g_{bb,c}\frac{E^{3/2}}{\left|E-E_{bb}\right|^{2}}\,{\rm Im}\{E_{bb}\}, \tag{26}\]
with momentum independent coupling strength \(g_{bb,c}\). Whereas the \(|bb\rangle\) channel is always energetically open, the \(|ac\rangle\) channel can be either open (\(E\geq E_{\rm th}\)) or closed (\(E<E_{\rm th}\)). Carefully distinguishing between these two regimes we find that
\[\Delta_{\rm res,ac}(E)\approx\left\{\begin{array}{ll}g_{ac,c}\,{\rm Re}\left\{\frac{E_{\rm th}^{3/2}}{(E-E_{\rm th})-E_{ac}}\right\}&\,{\rm for}\ E\geq E_{\rm th}\\ g_{ac,c}\left({\rm Re}\left\{\frac{E_{\rm th}^{3/2}}{(E-E_{\rm th})-E_{ac}}\right\}+i\frac{(E-E_{\rm th})^{3/2}}{|(E-E_{\rm th})-E_{ac}|^{2}}\,{\rm Im}\{E_{ac}\}\right)&\,{\rm for}\ E<E_{\rm th}\end{array}\right. \tag{27}\]
and
\[\Gamma_{ac}(E)\approx\left\{\begin{array}{ll}-2g_{ac,c}\frac{(E-E_{\rm th}) ^{3/2}}{|(E-E_{\rm th})-E_{ac}|^{2}}\,{\rm Im}\{E_{ac}\}&\,{\rm for}\,E\geq E_ {\rm th}\\ 0&\,{\rm for}\,E<E_{\rm th}\end{array}\right. \tag{28}\]
with momentum independent coupling strength \(g_{ac,c}\). Using the definitions of \(A_{\rm bb}(E)\) and \(A_{\rm ac}(E)\), we can rewrite the dressing factor \({\rm D}\) as
\[{\rm D}=\frac{E-\delta\mu(B-B_{\rm n})}{E-\delta\mu(B-B_{\rm n})-\Delta_{\rm res,ac}(E)+\frac{i}{2}\Gamma_{ac}(E)}, \tag{29}\]
and we can express the S-matrix component \(S_{bb,bb}\) presented in Eq. (20) as
\[S_{bb,bb}=S_{\rm P}\left(1-\frac{i\Gamma_{bb}(E)}{E-\delta\mu(B-B_{\rm n})-(\Delta_{\rm res,bb}(E)+\Delta_{\rm res,ac}(E))+\frac{i}{2}(\Gamma_{bb}(E)+\Gamma_{ac}(E))}\right) \tag{30}\]
The insightful form of Eq. (30) reveals how the complex energy shifts set by the coupling of the Feshbach state to the \(|bb\rangle\) and \(|ac\rangle\) channels emerge on equal footing in its denominator. As indicated by Eq. (28), the \(|ac\rangle\) channel only contributes a finite width factor \(\frac{i}{2}\Gamma_{ac}(E)\) once the channel becomes energetically open. Physically, this term in the denominator captures the resonance-facilitated loss of the \(|bb\rangle\) state to the \(|ac\rangle\) state. Therefore, contrary to the well-known two-channel Feshbach formalism, the presented three-channel model is capable of including inelastic loss processes, such that \(S_{bb,bb}\) becomes non-unitary (\(|S_{bb,bb}|<1\)) for \(E\geq E_{\rm th}\).
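The chain from Eqs. (25)-(28) through Eqs. (29)-(30) is straightforward to evaluate numerically. The Python sketch below implements it using the shape-resonance energies of Table 1 (in van der Waals units); the couplings \(g_{bb,c}\) and \(g_{ac,c}\), the slope \(\delta\mu\), the threshold \(E_{\rm th}\) and \(S_{\rm P}\) are illustrative placeholders rather than fitted values, so the output should be read qualitatively. It reproduces the key feature of Eq. (30): the loss \(1-|S_{bb,bb}|^{2}\) vanishes below threshold and turns on for \(E\geq E_{\rm th}\).

```python
import numpy as np

# Shape-resonance energies from Table 1 (van der Waals units).
E_bb = 0.222 - 2 * 0.114j        # Im{E_bb}/2 = -0.114
E_ac = 0.179 - 2 * 0.061j        # Im{E_ac}/2 = -0.061
# Placeholder couplings, magnetic-moment slope, threshold and direct part:
g_bb, g_ac = 0.05, 0.05
dmu, B_n, S_P = 1.0, 0.0, 1.0
E_th = 0.5

def shift_width_bb(E):
    d = g_bb * np.real(E_bb**1.5 / (E - E_bb))                        # Eq. (25)
    G = -2 * g_bb * E**1.5 * np.imag(E_bb) / np.abs(E - E_bb)**2      # Eq. (26)
    return d, G

def shift_width_ac(E):
    x = E - E_th
    if x >= 0:  # |ac> channel open
        d = g_ac * np.real(E_th**1.5 / (x - E_ac))                    # Eq. (27)
        G = -2 * g_ac * x**1.5 * np.imag(E_ac) / np.abs(x - E_ac)**2  # Eq. (28)
    else:       # channel closed: zero width, extra (real) shift term
        d = g_ac * np.real(E_th**1.5 / (x - E_ac)
                           + 1j * (x + 0j)**1.5 * np.imag(E_ac) / np.abs(x - E_ac)**2)
        G = 0.0
    return d, G

def S_bbbb(E, B):
    """Resonance-facilitated S-matrix element, Eq. (30)."""
    d_bb, G_bb = shift_width_bb(E)
    d_ac, G_ac = shift_width_ac(E)
    det = E - dmu * (B - B_n)
    return S_P * (1 - 1j * G_bb / (det - (d_bb + d_ac) + 0.5j * (G_bb + G_ac)))

for E in (0.1, 0.6, 1.0):   # loss switches on once E crosses E_th
    print(f"E = {E:3.1f}   1 - |S|^2 = {1 - abs(S_bbbb(E, B=0.2))**2:.3e}")
```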
## V Field dependence of resonance scattering parameters
Having factored out the dipole-dipole contribution, the low-energy scaling of the scattering phase shift \(\delta(k)\) follows the typical effective-range approximation (ERA) for p-wave interactions, where
\[\cot\delta(k)=-(Vk^{3})^{-1}-(Rk)^{-1}+{\cal O}\{k\}, \tag{31}\]
with scattering volume \(V\) and effective range \(R\). Using the multiplicative nature of the total S-matrix as outlined in Sec. III, the ERA can be applied to the direct part of the scattering matrix \(S_{\rm P}\) and the Feshbach part \(S_{\rm FB}\) separately. Analysing each of these contributions in the following two subsections, the combined scattering volume and effective range can be computed from the two partial contributions as
\[V=V_{1}+V_{2} \tag{32}\]
and
\[R^{-1}=\frac{V_{1}^{2}}{V^{2}R_{1}}+\frac{V_{2}^{2}}{V^{2}R_{2}}. \tag{33}\]
The separate analysis of the contributions to the ERA will be particularly useful in the resonance width classification as presented in Sec. VI.2.
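For reference, the combination rules of Eqs. (32)-(33) amount to only a few lines of code; the example inputs below are placeholders in van der Waals units.

```python
def combine_era(V1, R1, V2, R2):
    """Combine two partial ERA contributions via Eqs. (32)-(33)."""
    V = V1 + V2                                            # Eq. (32)
    R = 1.0 / (V1**2 / (V**2 * R1) + V2**2 / (V**2 * R2))  # Eq. (33)
    return V, R

# Placeholder direct and Feshbach contributions (van der Waals units):
print(combine_era(V1=-3.02, R1=1.81, V2=10.0, R2=0.5))
```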
### Direct scattering parameters
Applying the parametrization of the \(|bb\rangle\) channel shape resonance as presented in Sec. IV.3, the Ning-Hu representation of the direct part of the scattering matrix \(S_{\rm P}\) is equivalent to the form presented in Ref. [1; 49]
\[S_{\rm P}=e^{-2ikr_{c}}\frac{(k-k_{bb}^{*})(k+k_{bb})}{(k-k_{bb})(k+k_{bb}^{*} )}. \tag{34}\]
Setting \(r_{c}=2\,\mathrm{Im}\{k_{bb}^{-1}\}\), we ensure that \(S_{\rm P}\) follows the correct low-energy p-wave Wigner scaling. The \(k\to 0\) limit of \(S_{\rm P}\) then yields the following expression for the scattering volume [1]
\[V_{\rm P}=-r_{c}|k_{bb}|^{-2}\left(1-|r_{c}k_{bb}|^{2}/3\right) \tag{35}\]
and the effective range
\[R_{\rm P}=\frac{r_{c}(1-|r_{c}k_{bb}|^{2}/3)^{2}}{1-|r_{c}k_{bb}|^{2}+|r_{c}k_ {bb}|^{4}/5}. \tag{36}\]
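As a numerical cross-check, the sketch below evaluates Eqs. (35)-(36) directly from the complex \(|bb\rangle\) shape-resonance energy of Table 1, assuming \(E=k^{2}\) in van der Waals units. Since the precise momentum convention is not restated here, the printed values should only be compared to the \(V_{\rm P}\) and \(R_{\rm P}\) entries of Table 1 at the level of sign and magnitude.

```python
import numpy as np

def direct_parameters(E_bb):
    """V_P and R_P of Eqs. (35)-(36) from a complex shape-resonance energy.
    Assumes E = k^2 (van der Waals units); this convention is ours."""
    k_bb = np.sqrt(E_bb + 0j)        # principal branch: fourth-quadrant decaying pole
    r_c = 2 * np.imag(1 / k_bb)      # enforces the p-wave Wigner scaling of S_P
    x2 = np.abs(r_c * k_bb)**2
    V_P = -r_c / np.abs(k_bb)**2 * (1 - x2 / 3)               # Eq. (35)
    R_P = r_c * (1 - x2 / 3)**2 / (1 - x2 + x2**2 / 5)        # Eq. (36)
    return V_P, R_P

print(direct_parameters(0.222 - 2 * 0.114j))   # Table 1: E_bb in vdW units
```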
### Feshbach scattering parameters
The Feshbach contribution to the scattering volume and effective range can be directly obtained from the low-energy expansion of the Feshbach part of Eq. (30). An important subtlety in this analysis is the presence of the threshold energy \(E_{\rm th}\) in the energy width \(\Gamma_{ac}(E)\) and energy shift \(\Delta_{\rm res,ac}(E)\). In the absence of \(E_{\rm th}\), we find that
\[\Delta_{\rm res}(E)\underset{k\to 0}{\approx}-gk_{R}-g\frac{k_{R}}{k_{R}^{2}+k_{I}^{2}}k^{2}+{\cal O}(k^{4}) \tag{37}\] \[\phantom{\Delta_{\rm res}(E)}=\Delta_{\rm res}^{0}+\Delta_{\rm res}^{1}k^{2}+{\cal O}(k^{4}) \tag{38}\]
and
\[\Gamma(E)\underset{k\to 0}{\approx}-\frac{4gk_{I}k_{R}}{(k_{I}^{2}+k_{R}^{2})^{2}}k^{3}+{\cal O}(k^{5}) \tag{39}\] \[\phantom{\Gamma(E)}=\Gamma^{0}k^{3}+\Gamma^{1}k^{5}+{\cal O}(k^{7}),\]
with \(k_{\rm n}=k_{R}+ik_{I}\). Using the previous expressions and isolating the effect of \(E_{\rm th}\) on the \(|ac\rangle\) channel parameters, we can use Eq. (30) in order to find the following form of the Feshbach part of the scattering volume
\[V_{\rm FB}(B)=-\frac{\Gamma_{bb}^{0}/2}{\delta\mu(B-B_{n})+\Delta_{\rm res,bb} ^{0}+\Delta_{\rm res,ac}^{0}\chi}, \tag{40}\]
where all dependence on \(E_{\rm th}\) is contained in the parameter \(\chi(E_{\rm th})\), defined as
\[\chi=\frac{k_{I,ac}^{2}+k_{R,ac}^{2}-2k_{I,ac}k_{\rm th}}{k_{I,ac}^{2}+k_{R,ac}^{2}+k_{\rm th}^{2}-2k_{I,ac}k_{\rm th}}. \tag{41}\]
In the limit of a vanishing asymptotic energy difference between the channels \(|ac\rangle\) and \(|bb\rangle\), the shift parameter reduces to \(\chi(E_{\rm th})\to 1\) and the energy shift \(\Delta_{\rm res,ac}^{0}\) has to be treated on equal footing with the direct entrance channel shift \(\Delta_{\rm res,bb}^{0}\). In the opposite limit, where the energy shift becomes large, we find \(\chi(E_{\rm th})\to 0\), such that the scattering volume is insensitive to the energetically far-removed channel \(|ac\rangle\).
Proceeding with the analysis of the Feshbach part of the effective range \(R_{\rm FB}\), we find
\[R_{FB} =\frac{(\Gamma_{bb}^{0})^{2}}{2}\left[\Gamma_{bb}^{0}(1-\Delta_{ \rm res,bb}^{1}-\xi\Delta_{\rm res,ac}^{1})+\right.\] \[\left.\Gamma_{bb}^{1}(\delta\mu(B-B_{n})+\Delta_{\rm res,bb}^{0}+ \chi\Delta_{\rm res,ac}^{0})\right]^{-1}, \tag{42}\]
with
\[\xi=\frac{(k_{I,ac}^{2}+k_{R,ac}^{2})(k_{I,ac}^{2}+k_{R,ac}^{2}-k_{I,ac}k_{\rm th})}{(k_{I,ac}^{2}+k_{R,ac}^{2}+k_{\rm th}^{2}-2k_{I,ac}k_{\rm th})^{2}}. \tag{43}\]
Similar to the shift parameter \(\chi(E_{\rm th})\), the shift parameter \(\xi(E_{\rm th})\) reduces to a value of \(\xi(E_{\rm th})\to 1\) in the limit of a vanishing asymptotic energy shift and a value of \(\xi(E_{\rm th})\to 0\) in the opposite limit of large energy shifts. The values of the shift parameters in the \({}^{40}\)K analysis will be presented in Sec. VI.1.
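The two limiting behaviors quoted above are easily verified numerically; the sketch below evaluates Eqs. (41) and (43) for a placeholder \(|ac\rangle\)-channel resonance momentum and confirms \(\chi,\xi\to 1\) for \(k_{\rm th}\to 0\) and \(\chi,\xi\to 0\) for large \(k_{\rm th}\).

```python
import numpy as np

def chi(k_th, k_ac):
    """Threshold shift parameter of Eq. (41)."""
    kI, kR = np.imag(k_ac), np.real(k_ac)
    s = kI**2 + kR**2
    return (s - 2 * kI * k_th) / (s + k_th**2 - 2 * kI * k_th)

def xi(k_th, k_ac):
    """Threshold shift parameter of Eq. (43)."""
    kI, kR = np.imag(k_ac), np.real(k_ac)
    s = kI**2 + kR**2
    return s * (s - kI * k_th) / (s + k_th**2 - 2 * kI * k_th)**2

k_ac = 0.4 - 0.25j   # placeholder |ac>-channel shape-resonance momentum
for k_th in (0.0, 0.2, 1.0, 5.0, 50.0):
    print(f"k_th = {k_th:5.1f}   chi = {chi(k_th, k_ac):+.4f}   xi = {xi(k_th, k_ac):+.4f}")
```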
## VI Results
We continue our analysis by fitting the resonance-facilitated form of the open channel S-matrix component \(S_{bb,bb}\) as presented in Eq. (30) to CC data in the low-energy limit. In our model, only the coupling parameters \(g_{bb,c}\) and \(g_{ac,c}\) are magnetic-field dependent [1]. Hence, we fit the shape resonance momenta \(k_{bb}\) and \(k_{ac}\) at a single magnetic-field value \(B=200\) G and keep the best-fit values as presented in Tab. 1 fixed for all subsequent fitting routines.
By choosing a B-field close to the resonance value, we ensure that the S-matrix exhibits rapid variations at low energies, allowing for a well-determined fitting routine. To ensure physically realistic values for the fitting parameters below resonance, where the S-matrix varies minimally, we force the pole of \(S_{bb,bb}\) to be located at the B-field dependent value of the binding energy \(E_{b}\) as extracted from the CC code. This procedure directly fixes one of the free parameters \(g_{bb,c}\) or \(g_{ac,c}\). As presented in Fig. 6, the outlined fitting routine is able to correctly reproduce the CC phase shift in the low-energy regime.
Whereas the two-channel fit as introduced in Ref. [1] and presented in Fig. 6 similarly captures the CC data, this model relies on the use of an artificial fitting parameter and is hence less physical. Both models start to deviate from the CC data at higher scattering energies. This is a consequence of the low-energy approximations used in the Gamow expansion and the Ning-Hu expansion of the S-matrix.
The high-energy deviations are also visible in the atom loss \(1-\left|S_{\text{bb,bb}}\right|^{2}\) presented in Fig. 7. However, notably, the resonance-facilitated model does correctly capture the avoided crossing structure of the bound-state (and quasi-bound state) around resonance. Consistent with the CC data, the loss magnitude approaches unity once the \(\left|ac\right>\) channel opens and reflects the strong B-field dependent nature of the loss rate resulting from the large magnetic moment difference between the Feshbach channel on the one hand and the \(\left|bb\right>\) and \(\left|ac\right>\) channels on the other hand.
### Resonance facilitated contributions and scattering parameters
As indicated by the denominator of Eq. (30), the dressing effects of both the \(\left|bb\right>\) and the \(\left|ac\right>\) channel on the Feshbach state arise equivalently. Both channels contribute an energy shift and, once the channels are energetically open, add a width \(\Gamma(E)\). As can be seen in Fig. 8, the value of these contributions depends on the scattering energy [56].
\begin{table}
\begin{tabular}{|c|c|}
\hline
\multicolumn{2}{|c|}{\(\left|bb\right\rangle\)**-channel shape resonance**} \\
\hline
\(\operatorname{Re}E_{bb}\) & \(0.222\,\bar{E}\) \\
\(\operatorname{Im}E_{bb}/2\) & \(-0.114\,\bar{E}\) \\
\(V_{\rm P}\) & \(-3.02\,r_{\text{vdW}}^{3}\) \\
\(R_{\rm P}\) & \(1.81\,r_{\text{vdW}}\) \\
\hline
\multicolumn{2}{|c|}{\(\left|ac\right\rangle\)**-channel shape resonance**} \\
\hline
\(\operatorname{Re}E_{ac}\) & \(0.179\,\bar{E}\) \\
\(\operatorname{Im}E_{ac}/2\) & \(-0.061\,\bar{E}\) \\
\hline
\end{tabular}
\end{table}
Table 1: **Shape resonance parameterization.** The best-fit complex energies for the shape resonances in the \(\left|bb\right\rangle\) and \(\left|ac\right\rangle\) channels, given in terms of the van der Waals energy \(\bar{E}\). The scattering volume \(V_{\text{P}}\) and effective range \(R_{\text{P}}\) introduced in Sec. V.1 are also given in van der Waals units. The bare magnetic resonance position \(B_{\text{n}}=168.0\,\text{G}\) is set by comparison to the CC data.
Figure 6: **Scattering phase** for \(M_{L}=0\) at \(200\,\text{G}\). The phase shift \(\delta\) from CC calculations is plotted as the real part of \(k^{3}\cot\delta\) (blue line) versus collisional energy \(E/h\). Both the artificial two-channel (blue squares) and the realistic resonance-facilitated three-channel model (red crosses) match CC data at low energies. The black dashed curve represents the effective range expansion up to \(\mathcal{O}(k^{4})\).
Figure 7: Above threshold loss \(1-\left|S_{\text{bb,bb}}\right|^{2}\) and binding energy of the \(\left|bb\right>\) channel computed from the CC code (left) and from the 3-channel model (right). The green dashed line corresponds to the energy extracted from the pole of the 3-channel S-matrix. In the low-energy limit, this energy approaches the quasi-bound state energy [1]. The curve has been added to both figures to ease the comparison.
Predictably, the magnitude of the energy shift is largest when the scattering energy approaches the (real part of the) energy of the shape resonance [57]. In addition, the asymmetric and broad nature of the resonance widths can be attributed to the shape resonances being located above the centrifugal barrier, as indicated by the large values of the imaginary parts of the shape resonance energies. The comparable magnitude of the complex energy shift in the \(|bb\rangle\) and \(|ac\rangle\) channels underlines the importance of the \(|ac\rangle\) channel added in the resonance-facilitated model. Considering the effective range expansion of the phase shift \(\delta(k)\), it is furthermore possible to quantify the effect of the \(|ac\rangle\) channel on the low-energy scattering parameters. Following the approach presented in Sec. V.2, we begin this analysis by computing the shift parameters \(\chi(E_{\rm th})\) and \(\xi(E_{\rm th})\) and obtain the results as presented in Fig. 9.
Consistent with the analysis of the full complex energy shift, the non-negligible values of the shift parameters \(\xi(E_{\rm th})\) and particularly \(\chi(E_{\rm th})\) indicate the importance of the \(|ac\rangle\) channel for the low-energy scattering. Whereas the values remain non-negligible over the entire probed B-field regime, the magnitudes decrease for larger B-fields. This trend is consistent with the growing threshold energy difference between the \(|bb\rangle\) and \(|ac\rangle\) channels for increasing B-fields, where, for larger energy differences, the effects of the \(|ac\rangle\) channel on the scattering states in the \(|bb\rangle\) channel gradually become less important.
The contribution of the \(|ac\rangle\) channel shift parameters \(\chi(E_{\rm th})\) and \(\xi(E_{\rm th})\) to the \(|bb\rangle\) channel scattering states is readily observed in Figs. 10 and 11. Here, the computed scattering volume and effective range are compared to the artificial two-channel model results of Ref. [1], both in the presence and in the absence of the shift parameters. As can be seen in Fig. 10, the factor \(\chi\Delta_{\rm res,ac}^{0}\) has a significant impact on the computation of the scattering volume. Without this shift factor induced by the \(|ac\rangle\) channel, the model is completely incapable of reproducing the CC resonance. The large impact of this factor indicates precisely why the true two-channel Feshbach model is unable to match the CC calculations, and explains why in Ref. [1] the Breit-Wigner structure of this model was replaced by the artificial form presented in Eq. (1). This artificial model effectively allows for the decoupling of the fitting of the resonance widths and shifts, thereby correctly describing the large resonance shift, but consequently failing to partially attribute this shift to the \(|ac\rangle\) channel, and instead resulting in physically unrealistic values of \(\delta\mu\). The resonance-facilitated model rectifies this physical inconsistency.
Figure 8: **The energy shift and width** of the \(|bb\rangle\)-channel (blue lines) and the \(|ac\rangle\) channel (red lines) as a function of the energy E/h at a magnetic-field value \(\rm B=200\) G.
Figure 10: **Scattering volume**. Artificial two-channel (blue) and three-channel (red diamond) scattering volume for \(M_{\rm L}=0\) as a function of the magnetic field. The scattering volume diverges at \(B_{0}=198.803\,\)G. The square data points are obtained by setting \(\chi=0\) in Eq. (40) and reveal the importance of the \(|ac\rangle\) contribution to the scattering volume.
Figure 9: **Resonance shift parameters.** The values of the shift parameters \(\chi(E_{\rm th})\) (blue) and \(\xi(E_{\rm th})\) as a function of magnetic-field.
### Resonance width classification
Whereas recent studies have revealed the universal behavior of low-energy p-wave scattering parameters [17; 18; 19; 20; 21], the multichannel nature of Feshbach resonances affects these scaling laws. The degree to which the universal behavior is distorted by multichannel effects is quantified by the resonance width. In this classification scheme, the magnitude of the energy-dependent contributions of the Feshbach part to the S-matrix is compared to the universal single-channel contributions.
Since the resonance-facilitated S-matrix retains the typical two-channel p-wave Breit-Wigner form, much of the resonance width analysis presented in Ref. [1] can be directly applied to our current model. As such, we define the dimensionless resonance width parameter \(\zeta\) as
\[R^{-1}=\frac{-R_{\rm max}(B)^{-1}}{\zeta}\left(1-\frac{V_{\rm bg}}{V(B)} \right)^{2}+R_{\rm vdW}^{-1}(B). \tag{44}\]
Here, \(V(B)\) represents the universal Feshbach form of the scattering volume set by
\[V(B)=V_{\rm bg}\left(1-\frac{\Delta}{B-B_{0}}\right), \tag{45}\]
where the background scattering volume \(V_{\rm bg}\) and resonance width \(\Delta\) are set by the values presented in Tab. 2 of Ref. [1]. The direct single-channel contribution to the effective range in Eq. (44) is set by the van der Waals effective range \(R_{\rm vdW}\) given by
\[R_{\rm vdW}=R_{\rm max}\left(1+2\frac{\bar{V}}{V(B)}+2\frac{\bar{V}^{2}}{V(B) ^{2}}\right), \tag{46}\]
with \(R_{\rm max}\approx 76a_{0}\) and \(\bar{V}\approx(63.464a_{0})^{3}\) for two \({}^{40}\)K atoms. For broad resonances, where \(|\zeta|\gg 1\), the effective range is fully determined by the van der Waals contribution \(R_{\rm vdW}\). On the other hand, for narrow resonances where \(|\zeta|\ll 1\), the effective range is determined by the Feshbach term, such that
\[R\approx R_{\rm FB,0}\left(1-\frac{V_{\rm bg}}{V(B)}\right)^{2}, \tag{47}\]
with \(R_{\rm FB,0}\) the Feshbach part of the effective range on resonance.
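The classification is straightforward to evaluate. In the sketch below, \(R_{\rm max}\) and \(\bar{V}\) take the \({}^{40}\)K values quoted above, whereas \(V_{\rm bg}\), \(\Delta\) and \(B_{0}\) (set in Tab. 2 of Ref. [1], not reproduced here) are placeholders; \(\zeta=-1.9\) anticipates the value obtained below.

```python
import numpy as np

R_max = 76.0           # in units of the Bohr radius a0
V_bar = 63.464**3      # (63.464 a0)^3

def V_universal(B, V_bg, Delta, B0):
    return V_bg * (1 - Delta / (B - B0))                       # Eq. (45)

def R_vdW(B, V_bg, Delta, B0):
    V = V_universal(B, V_bg, Delta, B0)
    return R_max * (1 + 2 * V_bar / V + 2 * (V_bar / V)**2)    # Eq. (46)

def R_classified(B, V_bg, Delta, B0, zeta):
    """Effective range of Eq. (44) for a given width parameter zeta."""
    V = V_universal(B, V_bg, Delta, B0)
    inv = -(1 - V_bg / V)**2 / (zeta * R_max) + 1.0 / R_vdW(B, V_bg, Delta, B0)
    return 1.0 / inv

# Placeholder resonance parameters V_bg, Delta, B0; zeta = -1.9:
B = np.array([198.5, 198.7, 198.9, 199.1])
print(R_classified(B, V_bg=-100.0**3, Delta=0.5, B0=198.803, zeta=-1.9))
```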
It is important to note that \(R_{\rm vdW}\) is not identical to \(R_{\rm P}\). Whereas \(R_{\rm vdW}\) is a real single-channel effective range, the direct part of the effective range set by \(R_{\rm P}\) follows from CC calculations, where the boundary conditions on the \(|bb\rangle\) channel are affected by the presence of coupled channels [58]. Requiring Eq. (44) to accurately represent the effective range of the resonance-facilitated model and comparing Eqs. (33) and (47) implies that \(\zeta=-R_{\rm max}/R_{\rm FB,0}\). Here, the resonant value of Eq. (33) is fully determined by the resonant value of Eq. (42), which we name \(R_{0}\). This differs from \(R_{\rm FB,0}\) by the resonant contribution of \(R_{\rm vdW}\), set by \(R_{\rm max}\). As such, we find that \(R_{\rm FB,0}^{-1}=(R_{0}-R_{\rm max})^{-1}\), and we can directly compute the resonance parameter \(\zeta\) as
\[\zeta=\left.\frac{\Gamma_{bb}^{0}}{\Gamma_{bb}^{0}-2R_{\rm max}(1-\Delta_{\rm res,bb}^{1}-\xi\Delta_{\rm res,ac}^{1})}\right|_{B=B_{0}}. \tag{48}\]
In agreement with Ref. [1], we find \(\zeta=\{-1.90,-1.85\}\) for \(M_{L}=0\) and \(|M_{L}|=1\), respectively.
Whereas the resonance width is unaltered with respect to the two-channel model, it is important to point out that, contrary to the analysis of Ref. [1], the classification presented here does not rely on the fitting of \(\zeta\) and instead follows directly from the three-channel model. As presented in Fig. 11, Eq. (44) correctly represents the CC effective range around resonance and requires the inclusion of both \(\Delta_{\rm res,bb}^{1}\) and \(\xi\Delta_{\rm res,ac}^{1}\) in the computation of the effective range. This is in contrast with typical systems with non-resonant background interactions, where the resonance shift is well-represented by its lowest energy contribution \(\Delta_{\rm res}^{0}\)[59].
## VII Conclusion
In this work, we upgrade the standard two-channel Feshbach formalism to a resonance-facilitated three-channel version. Here, the direct interaction between the open channel and the added (third) channel is neglected and the model retains the typical Breit-Wigner form of the S-matrix. This allows for an intuitive interpretation of the results and enables the definition of the Feshbach resonance width analogous to the two-channel classification [60].
Figure 11: **Effective range**. Artificial two-channel (blue) and three-channel (red diamonds) effective range for \(M_{\rm L}=0\) as a function of the magnetic field. The black dashed line represents the effective range computed using Eq. (44) for a resonance width parameter value \(\zeta=-1.9\). The square, triangular and circular data points are obtained by setting the various shift parameters to a value of zero.
For the analyzed p-wave resonances in \({}^{40}\)K, the resonance-facilitated structure is motivated by the CC data, where a large magnetic-field dependence of the inelastic above-threshold (\(E\geq 0\)) loss can be observed. The small magnetic moment difference between the open channel and the third channel cannot account for this large dependence. As such, the CC data implies that the dominant scattering processes arise through coupling to the Feshbach resonance. The formulated resonance-facilitated model successfully captures the "bending" of the dimer binding energy as a function of magnetic field for the considered \({}^{40}\)K resonances. In contrast to the two-channel treatment, where we identified two nonphysical parameters as a major shortcoming in Ref. [1], the correct fitting of the resonance-facilitated model to the CC data does not require the use of nonphysical input parameters. Instead, the three-channel model presented here allows for the use of physically realistic input values. We attribute the success of the resonance-facilitated model over a two-channel version to the explicit inclusion of a shape resonance in the added (third) channel. Similarly to the open-channel shape resonance, the interplay of this feature with the Feshbach resonance alters the low-energy scattering physics and needs to be incorporated into the model explicitly.
The general framework of the resonance-facilitated three-channel model can be readily adapted to the study of other resonances [61] and systems with arbitrary partial wave interactions by the reconsideration of the low-energy scaling of the Gamow functions. Particularly systems where resonant features exist in coupled channels with similar spin-structure (singlet vs. triplet) are expected to be suitably treated by the presented model, since the coupling strength of the direct spin-exchange interaction between these channels is expected to be small compared to channels with different spin-structure.
Apart from considering interactions with different partial waves, interesting routes for further analyses include the consideration of more terms in the Ning-Hu expansion or the inclusion of higher-order \(k_{s}\) terms in the Gamow series in order to improve the correctness and validity range of the model.
Notably, the need for analytical three-channel models also extends to the three-body sector. For example, \({}^{7}\)Li bosons have been observed to have a \(|bb\rangle\) + Feshbach + \(|ac\rangle\) s-wave structure analogous to that of the \({}^{40}\)K p-waves developed here, and Ref. [61] shows how three-channel two-body interactions modify the three-body recombination of \({}^{7}\)Li atoms. Overlapping resonances offer another example of an intrinsic three-channel system, where two different ramping closed channels produce two resonances in a single open channel. Ref. [62] successfully treated such cases with an analytic multichannel quantum defect model. It is known that a three-channel model (with one open and two closed channels) of overlapping resonances in the two-body sector is needed in order to correctly represent the experimental three-body Efimov features for three Cs atoms near a magnetic field of \(B=550\) G, where a numerically implemented three-channel model predicted features that agree with experiment while a conventional two-channel resonance model failed to agree [63]. Since overlapping resonances are common in many alkali and mixed-alkali systems, multichannel resonance models that go beyond the usual two-channel isolated Feshbach picture represent a promising area for future research.
## VIII Acknowledgments
The authors thank K.G. Jackson, J.H. Thywissen, J. van de Kraats and J.-L. Li for useful discussions. This work is supported by the Netherlands Organisation for Scientific Research (NWO) under grant 680-47-623.
|
2305.01483 | Frequency-modulated combs via on-chip field enhancement | Frequency-modulated (FM) combs feature flat intensity spectra with a linear
frequency chirp, useful for metrology and sensing applications. Generating FM
combs in semiconductor lasers generally requires a fast saturable gain, usually
limited by the intrinsic gain medium properties. Here, we show how a spatial
modulation of the laser gain medium can enhance the gain saturation dynamics
and nonlinearities to generate self-starting FM combs. We demonstrate this with
tapered planarized THz quantum cascade lasers (QCLs). While simple ridge THz
QCLs typically generate combs which are a mixture of amplitude and frequency
modulation, the on-chip field enhancement resulting from extreme spatial
confinement leads to an ultrafast saturable gain regime, generating a pure FM
comb with a flatter intensity spectrum, a clear linear frequency chirp and very
intense beatnotes up to -30 dBm. The observed linear frequency chirp is
reproduced using a spatially inhomogeneous mean-field theory model which
confirms the crucial role of field enhancement. In addition, the modified
spatial temperature distribution within the waveguide results in an improved
high-temperature comb operation, up to a heat sink temperature of 115 K, with
comb bandwidths of 600 GHz at 90 K. The spatial inhomogeneity also leads to
dynamic switching between various harmonic states in the same device. | Urban Senica, Alexander Dikopoltsev, Andres Forrer, Sara Cibella, Guido Torrioli, Mattias Beck, Jérôme Faist, Giacomo Scalari | 2023-05-02T15:06:04Z | http://arxiv.org/abs/2305.01483v2 | # Frequency-modulated combs via on-chip field enhancement
###### Abstract
Frequency-modulated (FM) combs feature flat intensity spectra with a linear frequency chirp, useful for metrology and sensing applications. Generating FM combs in semiconductor lasers generally requires a fast saturable gain, usually limited by the intrinsic gain medium properties. Here, we show how a spatial modulation of the laser gain medium can enhance the gain saturation dynamics and nonlinearities to generate self-starting FM combs. We demonstrate this with tapered planarized THz quantum cascade lasers (QCLs). While simple ridge THz QCLs typically generate combs which are a mixture of amplitude and frequency modulation, the on-chip field enhancement resulting from extreme spatial confinement leads to an ultrafast saturable gain regime, generating a pure FM comb with a flatter intensity spectrum, a clear linear frequency chirp and very intense beatnotes up to -30 dBm. The observed linear frequency chirp is reproduced using a spatially inhomogeneous mean-field theory model which confirms the crucial role of field enhancement. In addition, the modified spatial temperature distribution within the waveguide results in an improved high-temperature comb operation, up to a heat sink temperature of 115 K, with comb bandwidths of 600 GHz at 90 K. The spatial inhomogeneity also leads to dynamic switching between various harmonic states in the same device.
## I Introduction
Terahertz (THz) quantum cascade lasers (QCLs) are compact sources of coherent THz radiation based on intersubband transitions in an engineered semiconductor superlattice heterostructure [1]. Owing to their relatively fast gain saturation nonlinear properties, they can operate as frequency combs [2; 3] and dual combs [4]. Along with their significantly higher output powers than THz time-domain spectroscopy (TDS) systems [5] for frequencies above \(\approx\)1.5 THz, these devices are appealing for use in broadband coherent spectroscopy and sensing.
Recent important milestones in THz QCL development include advances in high-temperature narrowband operation [6; 7; 8], comb formation in ring cavities [9; 10], heterogeneous integration on silicon substrates [11], operation as fast detectors [12; 13], and the development of a planarized waveguide platform with improved dispersion, RF and thermal properties [14].
For use in spectroscopy, combs with flat intensity spectra are often desired, as this relaxes the conditions on the required signal-to-noise ratio (or integration time) for measuring all the spectral components. From this perspective, mid-IR QCLs are considered more suitable than THz QCLs due to their fast saturable gain, which is the main cause of self-starting frequency-modulated (FM) combs [15]. Besides producing a flat intensity spectrum, their linear frequency chirp and parabolic phase profile also make external pulse compression schemes relatively straightforward [16].
In THz QCLs, the longer upper state lifetimes produce free-running comb states which are a mixture of amplitude and frequency modulation [2]. The amplitudes of the individual lines in the comb spectrum can often vary significantly, and comb operation is also limited to relatively low operating temperatures.
In this work, we show that spatial variation along the cavity, specifically tapering the laser, can lead to an effective increase in the speed of gain saturation dynamics. We find that the spatial field enhancement leads to an ultrafast saturable gain regime which produces pure FM combs with a flatter intensity spectrum, a linear frequency chirp and strong measured RF beatnotes. Moreover, the spatial modulation of the cavity width improves the resilience to high temperatures and also enables switching between various harmonic comb states on a single device.
## II Tapered planarized waveguide
### Waveguide geometry
We designed and fabricated tapered waveguides using a homogeneous broadband THz QCL active region [17] and our planarized waveguide platform [14]. As shown in the optical microscope image in Fig. 1(a), the tapered waveguide consists of a sequence of wide (80 µm) and narrow (20 µm) sections connected with adiabatic linear tapers that minimize scattering losses. While in the mid-IR, tapered active waveguide geometries have recently been shown to improve the frequency comb performance due to a lower overall chromatic dispersion [18], in our implementation there are several other crucial effects due to the tapered geometry.
The narrow sections act as a filter for selecting the fundamental transversal waveguide mode (required for comb operation with regular teeth [19]), without using any side absorbers. The wider sections provide more gain for higher output power and a broader emission spectrum due to lower waveguide losses. While fabricating a homogeneous waveguide with the narrow width of 20 µm would be beneficial for transversal mode selection and heat dissipation, the increased dispersion and waveguide losses would severely limit the total comb bandwidth and output power. The propagating mode simulations in Fig. 1(b) show the efficient, scattering-free transitions between the wide and narrow sections, which do not affect the formation of longitudinal modes and standing waves along the cavity length.
### Field enhancement and nonlinearity
Due to the non-homogeneous waveguide width, there is a field enhancement effect in the narrow sections, as shown in the simulation results in Fig. 1(b). Considering an idealized case (neglecting any reflection or scattering loss and reduction of the overlap factor \(\Gamma\)), the field intensity enhancement (\(FIE\)) is proportional to the width ratio between the wide and narrow sections (in this case 4:1), while the field amplitude enhancement (\(FAE\)) scales with the square root of the ratio (\(\sqrt{4}\):1=2:1). This is an important aspect, as the spontaneous frequency comb formation in QCLs is based on the nonlinear four-wave mixing process [20]. Since the latter is a third-order process, its efficiency is proportional to the cube of the electric field intensity [21]. For example, if we consider the case of non-degenerate four-wave mixing with two initial frequencies of \(\omega_{1}\) and \(\omega_{2}\), due to the \(\chi^{(3)}\) (Kerr) nonlinearity within the active region waveguide [22], two new frequencies \(\omega_{3}=2\omega_{1}-\omega_{2}\) and \(\omega_{4}=2\omega_{2}-\omega_{1}\) will be generated with an output field intensity proportional to \(I_{3}\propto|\chi^{(3)}|^{2}I_{1}^{2}I_{2}\) and \(I_{4}\propto|\chi^{(3)}|^{2}I_{1}I_{2}^{2}\), where \(I_{i}\) is the field intensity of the lasing mode at frequency \(\omega_{i}\)[23].
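As a back-of-the-envelope consequence of this cubic scaling, consider the idealized 4:1 taper (lossless, with unchanged overlap factor \(\Gamma\)): both pump intensities entering \(I_{3}\propto|\chi^{(3)}|^{2}I_{1}^{2}I_{2}\) are enhanced by the \(FIE\), so the generated sideband intensity grows by \(FIE^{3}=64\) in the narrow sections. A minimal sketch:

```python
# Idealized four-wave-mixing budget for a 4:1 width ratio (lossless taper,
# unchanged overlap factor), following I3 ~ |chi3|^2 I1^2 I2.
FIE = 4                     # field intensity enhancement ~ width ratio
I1 = I2 = 1.0               # pump-mode intensities in the wide section (a.u.)
gain_fwm = (FIE * I1)**2 * (FIE * I2) / (I1**2 * I2)
print(gain_fwm)             # 64.0: sideband generation scales as FIE**3
```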
The nonlinearity within the active region arises mainly from gain saturation, i.e., the fact that the gain changes as a function of the intracavity optical field intensity [24]:
\[g=\frac{g_{0}}{1+I/I_{\mathrm{sat}}} \tag{1}\]
where \(g_{0}\) is the unsaturated gain, \(I\) the intracavity optical field intensity, and \(I_{\mathrm{sat}}\) the saturation intensity.
With an increasing intracavity optical intensity, the gain will be reduced, while the nonlinearity will be enhanced (and vice versa). Since the active region of a QCL provides both gain and nonlinearity, a trade-off between these two quantities arises naturally in ridge devices with a constant width. Achieving a good figure of merit in both simultaneously is challenging, as the nonlinearity is maximal in the highly saturated gain response. With the tapered waveguide, we can, however, use the best of both: the wide sections provide a larger gain, while the narrow sections provide an increased nonlinearity, which is a regime generally not accessible without field enhancement.
Figure 1: **(a)** Optical microscope image of a tapered device, where wide (80 µm) and narrow (20 µm) sections are connected via adiabatic linear tapers. **(b)** The E-field distribution obtained from full-wave 3D numerical simulations reveals a strong field enhancement effect in the narrow taper sections. Assuming no scattering loss or overlap factor reduction, the field intensity enhancement is proportional to the width ratios, in this case 4:1. This results in an enhanced four-wave mixing nonlinear process, crucial for frequency comb generation. **(c)** SEM image of the tapered active waveguide after the dry etching process step with visible vertical, smooth sidewalls. Subsequently, the active waveguides are planarized with a low-loss polymer (BCB) and covered with an extended top metallization. **(d)** Illustration of the gain in the wide and narrow sections, including intensity-dependent gain saturation. In the narrow sections, due to stronger gain saturation and photon-driven transport, the gain is lower but more nonlinear. **(e)** Field intensity in the narrow and wide sections as a function of the width ratio \(\alpha\) and the fraction of the device length with narrow sections \(\beta\). The values are normalized to the field intensity of a homogeneous waveguide with the same properties and operation point.
This is illustrated in Fig. 1(d), where we plot the modal gain at an increasing current density through the active region. The dependences for the wide (\(\mathrm{G_{W}}\)) and narrow (\(\mathrm{G_{N}}\)) sections were computed using the relation from Eq. 1 with \(I_{\mathrm{N}}=4\times I_{\mathrm{W}}\), and with the assumption that the unsaturated gain \(g_{0}\) increases linearly with the applied laser bias [25]. Due to the field enhancement effect, the gain in the narrow sections is decreased while the nonlinearity increases (larger curvature). To sustain lasing, the total gain of the cavity must overcome the total cavity losses (waveguide + mirror losses, as indicated by the gray horizontal line). The operating point is marked with the colored circles: due to the reduced gain in the narrow sections, the wider sections operate at a point with a higher gain.
While the relative field intensities in the narrow sections are enhanced by the width ratio compared to the wider sections, it is the absolute field intensities that are crucial in nonlinear processes. For a homogeneous ridge active waveguide, the steady state (average) intracavity field intensity will increase with the length of the cavity and with the mirror reflectivities.
We now compare the intracavity field intensities within a tapered and a ridge waveguide with several simplifying assumptions: both waveguides are made of the same active material, and have the same waveguide losses, mirror reflectivities and cavity length. The tapered waveguide is approximated as consisting of only the wide and narrow sections, neglecting the tapered transitions between them. Under these assumptions, the threshold gain \(g_{\mathrm{thr}}\) of both devices would be the same, and we can write:
\[g_{\mathrm{thr}}=\underbrace{\frac{(1-\beta)g_{0}}{1+I_{\mathrm{W}}/I_{\mathrm{ sat}}}}_{\mathrm{wide\ sections}}+\underbrace{\frac{\beta g_{0}}{1+\alpha I_{\mathrm{W}}/I_{ \mathrm{sat}}}}_{\mathrm{narrow\ sections}}=\underbrace{\frac{g_{0}}{1+I_{H}/I_ {\mathrm{sat}}}}_{\mathrm{homogeneous\ waveguide}} \tag{2}\]
Here, \(\alpha\) is the width ratio between the two sections, and \(\beta\) is the fraction of the narrow sections within the whole device length.
Following from Eq. 2, in Fig. 1(e) we show a general plot of the intracavity field intensities within an active multi-section waveguide with saturable gain, consisting of two different widths. The intensities are normalized to a homogeneous ridge waveguide with the same properties and at the same operating point. Depending on the filling factor \(\beta\), the normalized field enhancement in the narrow sections (red) varies between 1 and \(\alpha\). In the wide sections (green), the field is reduced to normalized values between 1 down to \(1/\alpha\). We should note that the curvature of the dependence on \(\beta\) (and consequently the normalized field enhancement at a specific point) changes also with the operation point (laser bias).
From this, we can draw some more general conclusions for designing tapered waveguides. For maximal field enhancement in the narrow parts, \(\alpha\) should be large and \(\beta\) small. For minimizing the field in the wider sections, both \(\alpha\) and \(\beta\) should be large (a similar approach is used in tapered amplifiers, where the field intensity spreads out in the tapered sections to reduce gain saturation [26]). However, for nonlinear processes, the interaction length within the waveguide matters as well, so for a given application, the optimal \(\beta\) will lie somewhere between 0 and 1. Moreover, in practical devices, \(\alpha\) cannot be arbitrarily large either. For example, in our THz QCL geometry, the narrow sections are limited by the increasing waveguide losses and the reduction of the mode overlap factor, while the wider sections are limited by a worse thermal figure of merit.
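Equation (2) is a scalar gain balance that can be solved numerically for the wide-section intensity \(I_{\rm W}\). The sketch below does so for illustrative numbers (\(g_{0}=3\,g_{\rm thr}\), \(\alpha=4\), \(\beta=0.36\); these are not fitted device parameters) and normalizes the result to the homogeneous reference \(I_{H}=(g_{0}/g_{\rm thr}-1)I_{\rm sat}\), reproducing the qualitative behavior shown in Fig. 1(e).

```python
from scipy.optimize import brentq

def wide_section_intensity(alpha, beta, g0, g_thr, I_sat=1.0):
    """Solve the gain balance of Eq. (2) for the wide-section intensity I_W."""
    def gain_balance(I_W):
        wide = (1 - beta) * g0 / (1 + I_W / I_sat)
        narrow = beta * g0 / (1 + alpha * I_W / I_sat)
        return wide + narrow - g_thr
    return brentq(gain_balance, 1e-9, 1e6)   # total gain is monotonic in I_W

# Illustrative operating point: unsaturated gain three times threshold.
g0, g_thr = 3.0, 1.0
I_H = g0 / g_thr - 1.0           # homogeneous reference: g0 / (1 + I_H) = g_thr
I_W = wide_section_intensity(alpha=4.0, beta=0.36, g0=g0, g_thr=g_thr)
print(I_W / I_H, 4.0 * I_W / I_H)  # normalized wide- and narrow-section intensities
```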
## III Results
In the following, we present measurement results of a 4.2 mm long tapered device with a total of three wide (80 µm) and two narrow (20 µm) sections, with cleaved end facets. For this specific device, \(\beta\)=0.36, while \(\alpha\)=3.16 (extracted from numerical wave propagation simulations). The device was soldered on a copper submount with a custom RF-optimized PCB [14] and mounted on a flow cryostat.
### Measured THz spectrum and RF beatnote
In Fig. 2(a-c), we show a typical THz spectrum and RF beatnote measurement which highlights several performance improvements of the tapered geometry. The measurement was done at a relatively high heat sink temperature of 90 K, and the comb spectrum spans around 600 GHz. In contrast to ridge devices, where the individual mode amplitudes typically vary over several orders of magnitude, we observe a flatter comb spectrum, where the modes in the central \(\sim\)300 GHz of the comb are within a \(\sim\)10 dB intensity variation. The measured free-running RF beatnote power is nearly -30 dBm, which is almost three orders of magnitude higher than for ridge devices processed on the same chip in Ref. [14]. This is due to contributions of the field enhancement effect and a larger total intracavity optical power (larger device area with wider sections).
The dependence of the measured RF beatnote intensity on the field enhancement can easily be explained with the following expression: as the free-running RF beatnote is a direct measurement of the current modulation \(\Delta I(t)\) at the mode spacing frequency \(f_{\mathrm{rep}}\), its intensity is proportional to the sum of the products of the neighbouring modes' complex electric field amplitudes [27]:
\[\Delta I(t)\propto\sum_{i}E_{i}\ E_{i+1}^{*} \tag{3}\]
Here, \(E_{i}\) and \(E_{i+1}^{*}\) are the electric field amplitudes of two neighbouring modes in the THz emission spectrum, which contribute to a measurable signal at \(f_{\rm rep}\). Typical measured beatnotes also display narrow linewidths, on the order of \(\sim\)1 kHz.
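A useful way to read Eq. (3) is that \(\sum_{i}E_{i}E_{i+1}^{*}\) is exactly the Fourier component of the intracavity intensity \(|E(t)|^{2}\) at \(f_{\rm rep}\); the short sketch below verifies this identity numerically for an arbitrary set of comb lines.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
E = rng.normal(size=N) + 1j * rng.normal(size=N)   # arbitrary comb-line amplitudes

# Direct evaluation of the sum in Eq. (3)
beat = np.sum(E[:-1] * np.conj(E[1:]))

# Same quantity as the f_rep Fourier component of the intensity |E(t)|^2
t = np.linspace(0.0, 1.0, 4096, endpoint=False)    # one round trip, f_rep = 1
field = (E[:, None] * np.exp(2j * np.pi * np.arange(N)[:, None] * t)).sum(axis=0)
comp = np.mean(np.abs(field)**2 * np.exp(2j * np.pi * t))
print(np.allclose(beat, comp))                     # True
```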
For tapered devices at such elevated heat sink temperatures (90 K, as opposed to the more standard 15-40 K [28; 29]), the comb bandwidth is reduced due to two main contributions: increased waveguide losses in the narrow sections (20 µm) and the decreased gain at an increased heat sink temperature. This results in the absence of the low-frequency part of the emission spectrum for this epilayer (down to \(\sim\)2.4 THz).
However, at lower heat sink temperatures (40 K), a comb bandwidth comparable to that obtained with simple planarized ridges with a width of 40 µm is observed (typically spanning 700-800 GHz at 40 K). THz emission and RF beatnote spectra for a varying laser bias are shown in Figs. S1, S2 of the Supplemental Document, along with a movie of a laser bias sweep which displays a rich landscape of fundamental and harmonic comb states. Additionally, a comparison of measured comb spectra of ridge and tapered devices is shown in Fig. S3, where it is evident that the tapered devices feature a significantly flatter envelope of the spectrum intensity.
### Linear chirp and flatter spectrum
We then performed Shifted Wave Interference Fourier Transform (SWIFT) spectroscopy measurements [2; 30] to assess the comb coherence and reconstruct the time domain profile using a fast hot electron bolometer (HEB) detector [31; 32] (a more detailed explanation of the working principle and setup used is in Ref. [14]). A relatively weak RF signal (-10 dBm at the RF source) was injected at the roundtrip frequency \(f_{\rm rep}\) to stabilize the comb repetition rate and to give the QCL and the spectrum analyzer a common time-base allowing the IQ demodulation. For such a weak RF injection power, it is assumed that the comb is stabilized without perturbing its free-running state (namely, the specific intensity spectrum and intermodal phases). In Fig. 3(a), we plot both the spectrum product and the SWIFT spectrum. The former was obtained by measuring the DC interferogram with a slow detector (DTGS), while the latter was reconstructed with IQ demodulation from optical beatnote measurements with a fast detector (HEB). The excellent agreement and comparable signal-to-noise ratio are an indicator of good comb coherence, while the detected optical beatnote measured on the HEB is also on the order of 20-30 dB stronger than for planarized ridge samples. The reconstructed intermodal phase profile in this state follows a linear chirp, which is typically observed in mid-IR QCLs [33; 34], but has not yet been reported so clearly in THz QCLs. In THz QCL designs optimized for comb operation, the longer upper state lifetime (\(\tau_{\rm up}>10\) ps) and consequently the larger \(\omega_{\rm rep}\times\tau_{\rm up}\) product make the comb state less explicitly FM (see time reconstructions in [34; 35]) compared to mid-IR QCL combs, where ultrafast gain saturation and gain asymmetry (arising from non-parabolicity and Bloch gain [22]) play a major role in driving the laser dynamics.
The reconstructed time profile has a quasi-continuous periodic output intensity with some oscillations, and the instantaneous frequency follows a linear chirp; both are shown in Fig. 3(b, c). This could indicate that the effects related to the field enhancement push the tapered THz QCL comb towards a regime similar to mid-IR QCL combs, with a flatter intensity spectrum and a linear chirp in frequency.
Indeed, another aspect of the field enhancement is the increased photon-driven current [36] in the narrow sections, which shortens the stimulated carrier lifetime \(\tau_{\rm st}\) through the dependence:
\[\tau_{\rm st}=\frac{1}{g_{\rm c}\;S} \tag{4}\]
where \(g_{\rm c}\) is the gain cross-section and \(S\) the photon density. This results in a reduction of the upper state lifetime \(\tau_{\rm up}^{-1}=\tau_{\rm nr}^{-1}+\tau_{\rm sp}^{-1}+\tau_{\rm st}^{-1}\), where \(\tau_{\rm nr},\tau_{\rm sp}\) are the non-radiative and spontaneous emission lifetimes, respectively. With this effective lifetime shortening, the tapered waveguide system goes into a fast saturable gain regime, where the laser tends to produce a continuous waveform with a quasi-constant optical intensity, manifested as frequency-modulated combs usually observed in mid-IR QCLs [15].
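As a back-of-the-envelope illustration of this lifetime shortening, consider the short Python sketch below; all numerical values (gain cross-section, photon densities, lifetimes, and their units) are assumptions chosen for illustration, not extracted device parameters:

```python
def tau_up(tau_nr, tau_sp, tau_st):
    # Parallel decay channels: 1/tau_up = 1/tau_nr + 1/tau_sp + 1/tau_st
    return 1.0 / (1.0 / tau_nr + 1.0 / tau_sp + 1.0 / tau_st)

g_c = 0.01                  # gain cross-section (assumed, arbitrary units)
tau_nr, tau_sp = 15.0, 1e6  # lifetimes in ps (assumed; spontaneous emission negligible)

for S in (5.0, 500.0):      # low vs. field-enhanced photon density (assumed units)
    tau_st = 1.0 / (g_c * S)  # Eq. (4): stimulated lifetime shortens with photon density
    print(f"S={S:g}: tau_st={tau_st:.2f} ps, tau_up={tau_up(tau_nr, tau_sp, tau_st):.2f} ps")
```

With these assumed numbers, a hundred-fold increase in photon density pulls \(\tau_{\rm up}\) from \(\sim\)8.6 ps down to \(\sim\)0.2 ps, i.e., deep into the fast gain saturation regime discussed above.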
To study the mechanism driving the comb dynamics, we developed a model which includes a spatial dependence of the optical nonlinearities, gain saturation, and temperature distribution within the cavity (by modifying the gain profile), and performed numerical simulations following a mean-field theory approach based on Ref. [37]. The details of the model, as well as all the simulation parameters used (Table S1) along with the frequency-dependent waveguide and material dispersion (Fig. S5), can be found in the Supplemental Document. Results obtained with our model are in good agreement with the experimental results. As shown in Fig. 3(d-f), the simulation produces a relatively flat spectrum separated into two main lobes with a phase distribution similar to the measured result. The linear frequency chirp is also reproduced in simulation, with a discontinuity at the phase jump point. The time domain profile reconstructs the quasi-continuous intensity with oscillations and individual points in time where the intensity almost vanishes, again consistent with the measurements. From the simulation model we found that the spatial dependence of the nonlinearities and gain is crucial for obtaining these specific kinds of comb states.

Figure 2: **(a)** Measured THz emission spectrum in logarithmic scale with a comb bandwidth of around 600 GHz at a heat sink temperature of 90 K. **(b)** The same measured spectrum in linear scale displays a relatively flat comb spectrum between 2.9-3.2 THz. **(c)** Measured RF beatnote at the roundtrip frequency, up to nearly -30 dBm.
Subsequently, when increasing the RF injection power up to +32 dBm, the comb spectrum is further broadened and flattened, and the observed linear chirp has a cleaner shape, as visible in Fig. 3(g-i). The THz intensity spectrum features a flatter profile as well, while its time domain intensity profile is modulated in amplitude, following the strong RF-injected cosine profile (\(\propto\cos(2\pi f_{\mathrm{rep}}\,t)\)). These observations are similar to the findings of extensive mid-IR QCL simulations reported in Ref. [38]. There, due to a non-zero third-order dispersion in the mid-IR QCL cavity, the intensity spectrum features oscillations in amplitude and the intermodal phases form groups, diverging from an ideal linear chirp (which is similar to some states observed in free-running THz QCLs). By means of RF injection, a clean linear chirp and flatter spectral amplitudes can be recovered, which is also consistent with our experimental results with the tapered planarized waveguide geometry.
### High-temperature performance
Improved frequency comb properties are maintained for even higher operating temperatures. Results of a 3D COMSOL thermal simulation in Fig. 4(a) show that the wider sections, which are heating up more, are separated into smaller islands and the connecting narrower regions act as heat dissipation channels. In the thermal simulations, typical maximum bias conditions (11 V, 400 A/cm\({}^{2}\)), a heat sink temperature of 100 K, and the following material heat conductivities were used: Cu = 320 W/mK, GaAs/AlGaAs active region = 5 W/mK [39], GaAs substrate = 100 W/mK, BCB = 0.15 W/mK (half of the reported room temperature conductivity) [40; 41].
In Fig. 4(b), we plot LIV curves measured in continuous wave (CW) operation. The power measurements were done with a large area Thomas Keating absolute THz power meter and a chopper wheel, with limited sensitivity and without any correction for collection losses, so they could not be performed up to the maximum lasing temperature. When measured with a room-temperature DTGS detector and an FTIR, the maximum lasing temperature in CW was as high as 118 K.
In Fig. 5(a, b) we compare the high-temperature comb operation of a tapered waveguide and a reference ridge waveguide, both fabricated during the same process run on the same chip. The ridge sample has a length of 2.7 mm and a constant waveguide width of 40 um, with a maximum lasing temperature in CW up to 116.5 K. Comparing the two samples at the same heat sink temperatures, it can be seen that the tapered waveguide device features a broader comb bandwidth, a flatter THz emission spectrum, and a stronger RF beatnote. Comb operation is maintained up to 115 K, still with a bandwidth of \(\sim\)200 GHz and a strong single RF beatnote above -60 dBm. This improved high-temperature comb performance is attributed to two main contributions. The first is the field enhancement effect giving rise to stronger nonlinearities and gain saturation, which affect the comb formation process. Secondly, the narrow sections are at a significantly lower temperature than the wider regions (simulation results indicate a difference of \(\sim\)25 degrees).

Figure 3: SWIFT spectroscopy measurements and mean-field theory simulations. **(a-c)** Measurements on a weakly RF-injected (-10 dBm) tapered device produce a relatively flat comb emission spectrum, a linear frequency chirp and a quasi-continuous output intensity with some oscillations. **(d-f)** Results of mean-field theory simulations with a spatial dependence of the crucial parameters are able to reproduce the main features of the measured device. **(g-i)** Measurement results of the same tapered device under strong RF-injection (+32 dBm) with a broadened comb emission spectrum and a cleaner linear frequency chirp with a more constant output intensity on top of a sine wave which follows the RF modulation.
A more detailed analysis of the thermal simulation results can be found in Fig. S4 of the Supplemental Document, where line cuts across different profiles of the device show a complex temperature distribution due to the non-homogeneous geometry consisting of materials with different thermal properties.
We should also note that the threshold current density of around 180 A/cm\({}^{2}\) observed at 40 K in CW for this device is higher than the one we reported for simple planarized ridges with a width of 40 um (140 A/cm\({}^{2}\) at 40 K), which is due to increased waveguide losses and gain saturation in the narrow sections with a width of only 20 um.
### Harmonic comb state switching
Another unique feature of the tapered geometry is the possibility to switch to various harmonic comb states on demand. Recently, such harmonic comb states, where the lasing mode spacing is an integer multiple of the fundamental f\({}_{\text{rep}}\), have gained interest in the QCL community [42]. In mid-IR QCLs, engineered defects have been fabricated for selecting a specific harmonic order [43], while in the THz these have so far been limited to spontaneously forming harmonic states without external control [44; 45]. As our tapered devices have a non-uniform shape along the length of the device, they are prone to switching to harmonic comb states. We demonstrate harmonic comb state switching, where a plethora of pure harmonic comb states can be excited simply by varying the bias and temperature on a single tapered waveguide device. For a fundamental comb state in Fig. 6(a, b), we measure a strong RF beatnote at the fundamental f\({}_{\text{rep}}\), but also at higher harmonics up to the 7\({}^{\text{th}}\) harmonic (limited by the spectrum analyzer bandwidth of 67 GHz). This is an indication of strong comb coherence (maintained also between more distant comb lines). By varying the laser bias and temperature, we can switch between the fundamental and the 2\({}^{\text{nd}}\), 3\({}^{\text{rd}}\), 4\({}^{\text{th}}\) and 6\({}^{\text{th}}\) harmonic states, as shown in Fig. 6(b-f), respectively. These are pure harmonic comb states, as we detect a single RF beatnote only at the frequency of the harmonic mode spacing, without any other signals present in the RF spectrum.
An interesting aspect is that the THz emission bandwidth of harmonic comb states is typically larger than for fundamental comb states, covering \(\sim\)700 GHz for the 6\({}^{\text{th}}\) harmonic in Fig. 6(f) even at elevated heat sink temperatures above 80 K. In contrast to fundamental states, where multi-beatnote or incoherent states are also observed, harmonic states appear almost exclusively as pure comb states. The harmonic combs in tapered devices produce strong RF beatnotes even at very high RF frequencies, with a measured intensity of nearly -60 dBm at 56 GHz (we note that this is without any correction for increased high-frequency cable and PCB losses). This feature also makes them appealing as reliable coherent sources of high-frequency RF signals or even millimetre waves [42; 46].
In the Supplemental Material, we provide additional simulation details, including an analysis of 3D simulation results of the RF-field distribution at the corresponding harmonic microwave resonances (Figs. S6, S7), along with an explanation of why the 5\({}^{\text{th}}\) and 7\({}^{\text{th}}\) harmonic comb states were not observed experimentally on this specific tapered device.
## Conclusion
In conclusion, we have presented a method to engineer the comb states in THz QCLs by spatially modulating the transverse dimension of a Fabry-Perot resonator. Such geometries are enabled by the planarized waveguide platform. A strong field enhancement effect results in an enhanced four-wave mixing process and gain saturation, both crucial for comb formation in THz QCLs. Measured devices produce flatter comb spectra spanning 600 GHz at a heat sink temperature of 90 K, with strong RF beatnotes up to nearly -30 dBm. Improved comb properties are maintained for high operating temperatures, with a comb bandwidth of 200 GHz at 115 K. We also report on the first experimental observation of a clear linear frequency chirp in a THz QCL, due to the device operating in a fast saturable gain regime. We are able to reproduce the results with a mean-field theory simulation model with a spatial dependence of the optical nonlinearities, gain saturation, and active region temperature.

Figure 4: **(a)** 3D thermal COMSOL simulations of the tapered device at the maximum laser bias and a heat sink temperature of 100 K. While the wider sections are heating up more, they are separated into smaller islands where the narrower sections act as heat dissipation channels with a lower operating temperature. **(b)** High-temperature LIV characteristics of a tapered device in CW.
In a broader context, electromagnetic environment engineering strongly affects light-matter interaction. One may consider our experiments as an analog to the Purcell effect, where spontaneous emission can be enhanced by light confinement in a cavity [47]. In our work, the presence of strong subwavelength confinement with field enhancement leads to an ultrafast stimulated emission lifetime affecting the laser dynamics.
It is important to emphasize that the presented field enhancement approach is not limited to THz QCLs, but can be applied to a variety of other laser systems [48], where a modified photon flux can be used to change the upper state lifetime and affect the laser dynamics by changing the value of the \(\omega_{\mathrm{rep}}\times\tau_{\mathrm{up}}\) product. For example, in spectroscopy, a flatter comb spectrum is desired, and this can be produced with FM combs via strong field enhancement, as demonstrated in this work (\(\omega_{\mathrm{rep}}\times\tau_{\mathrm{up}}\) is small). On the other hand, an increased waveguide cross-section would reduce gain saturation leading to slower dynamics, possibly facilitating short pulse formation in the presence of an increased \(\omega_{\mathrm{rep}}\times\tau_{\mathrm{up}}\) product.
The tapered geometry also enables the switching between various pure harmonic comb states with increased comb bandwidths up to 750 GHz above 80 K. With the planarized waveguide platform, it should also be possible to engineer the switching to harmonic comb states by, for example, designing only the extended top metallization to match the shape of the harmonic RF field (by following the position of nodes and antinodes), without increasing waveguide losses as is the case in the tapered active waveguide geometry. This should in turn lead to increased absolute harmonic comb bandwidths.
Beyond improved frequency comb performance, such integrated field enhancement structures can be used to boost the nonlinear optical properties originating from the large \(\chi^{(2)}\) and \(\chi^{(3)}\) nonlinearities within the active region, which could lead to novel effects and functionalities, such as the generation of new frequencies (e.g., via difference frequency [49] and/or harmonic generation [50]).
## Acknowledgements
The authors gratefully acknowledge funding from the ERC Grant CHIC (No. 724344), in part from Innosuisse (grant 53098.1 IP-ENG), and from Actphast 4 Researchers (P2020-41).
## Competing Interests
The authors declare that they have no competing financial interests.
## Authors contributions
U.S. and G.S. conceived the idea. U.S. designed and fabricated the devices, carried out all the measurements, analysed experimental data and performed electromagnetic and thermal numerical simulations under the supervision of G.S. and J.F. A.D. developed and implemented the spatially inhomogeneous mean-field theory model. A.F. built the SWIFTS setup. S.C. and G.T. provided the HEB detectors, A.F. and G.S. optimized the HEB RF coupling. M.B. performed the epitaxial growth. U.S. and G.S. wrote the manuscript. All authors discussed the results and commented on the manuscript.

Figure 5: Measured spectra and free-running RF beatnotes at high operating temperatures, comparing **(a)** a ridge waveguide (W = 40 μm, L = 2.7 mm) and **(b)** a tapered waveguide device (W = 80/20 μm, L = 4.2 mm). The tapered waveguide device displays broader THz bandwidths, flatter emission spectra, stronger RF beatnotes, and a higher maximum comb temperature of 115 K. The spectra were obtained with a room-temperature DTGS detector.
## Correspondence
*Correspondence should be addressed to U. Senica (email: [email protected]) and G. Scalari (email: [email protected]).
## Data availability
All the simulation and experimental data supporting this study are available from the corresponding author upon reasonable request.
## Keywords
frequency combs, frequency modulation, field enhancement, terahertz, quantum cascade lasers
|
2310.05453 | Memory-Assisted Sub-Prototype Mining for Universal Domain Adaptation | Universal domain adaptation aims to align the classes and reduce the feature
gap between the same category of the source and target domains. The target
private category is set as the unknown class during the adaptation process, as
it is not included in the source domain. However, most existing methods
overlook the intra-class structure within a category, especially in cases where
there exists significant concept shift between the samples belonging to the
same category. When samples with large concept shift are forced to be pushed
together, it may negatively affect the adaptation performance. Moreover, from
the interpretability aspect, it is unreasonable to align visual features with
significant differences, such as fighter jets and civil aircraft, into the same
category. Unfortunately, due to such semantic ambiguity and annotation cost,
categories are not always classified in detail, making it difficult for the
model to perform precise adaptation. To address these issues, we propose a
novel Memory-Assisted Sub-Prototype Mining (MemSPM) method that can learn the
differences between samples belonging to the same category and mine sub-classes
when there exists significant concept shift between them. By doing so, our
model learns a more reasonable feature space that enhances the transferability
and reflects the inherent differences among samples annotated as the same
category. We evaluate the effectiveness of our MemSPM method over multiple
scenarios, including UniDA, OSDA, and PDA. Our method achieves state-of-the-art
performance on four benchmarks in most cases. | Yuxiang Lai, Yi Zhou, Xinghong Liu, Tao Zhou | 2023-10-09T06:57:55Z | http://arxiv.org/abs/2310.05453v3 | # Memory-Assisted Sub-Prototype Mining for Universal Domain Adaptation
###### Abstract
Universal domain adaptation aims to align the classes and reduce the feature gap between the same category of the source and target domains. The target private category is set as the unknown class during the adaptation process, as it is not included in the source domain. However, most existing methods overlook the intra-class structure within a category, especially in cases where there exists significant concept shift between the samples belonging to the same category. When samples with large concept shift are forced to be pushed together, it may negatively affect the adaptation performance. Moreover, from the interpretability aspect, it is unreasonable to align visual features with significant differences, such as fighter jets and civil aircraft, into the same category. Unfortunately, due to such semantic ambiguity and annotation cost, categories are not always classified in detail, making it difficult for the model to perform precise adaptation. To address these issues, we propose a novel Memory-Assisted Sub-Prototype Mining (MemSPM) method that can learn the differences between samples belonging to the same category and mine sub-classes when there exists significant concept shift between them. By doing so, our model learns a more reasonable feature space that enhances the transferability and reflects the inherent differences among samples annotated as the same category. We evaluate the effectiveness of our MemSPM method over multiple scenarios, including UniDA, OSDA, and PDA. Our method achieves state-of-the-art performance on four benchmarks in most cases.
## 1 Introduction
Unsupervised Domain Adaptation (UDA) [15; 22; 41; 44; 9; 19; 21] has become a crucial research area of transfer learning, as it allows models trained on a specific dataset to be applied to related but distinct domains. However, traditional UDA methods are limited by the assumption that the source and target domains have to share the same label space. This assumption is problematic in real-world scenarios where the target distribution is complex, open, and diverse. Universal Domain Adaptation (UniDA) represents a strategy to address the limitations of traditional unsupervised domain adaptation methods. In UniDA, the target domain has a label set different from that of the source domain. The goal is to correctly classify target domain samples belonging to the shared classes in the source label set, while any samples not conforming to the source label set are treated as "unknown". The term "universal" characterizes UniDA as not relying on prior knowledge about the label sets of the target domain. UniDA relaxes the assumption of a shared class space while aiming to learn domain-invariant features across a broader range of domains.
Despite being widely explored, most existing universal domain adaptation methods [24; 47; 40; 39; 6; 34; 8; 26] overlook the internal structure intrinsically presented within each image category. These methods aim to align the common classes between the source and target domains for adaptation but usually train a model to learn the class "prototype" representing each annotated category. This is
particularly controversial when significant concept shift exists between samples belonging to the same category. These differences can lead to sub-optimal feature learning and adaptation if the intra-class structure is neglected during training. Since this type of semantic ambiguity without fine-grained category labels occurs in almost all of the DA benchmarks, all the methods will encounter this issue.
In this paper, we aim to propose a method to learn the detailed intra-class distinction and mine "sub-prototypes" for better alignment and adaptation. This kind of sub-prototype is the further subdivision of each category-level prototype, which represents the "sub-class" of the annotated categories. The main idea of our proposed approach lies in its utilization of a learnable memory structure to learn sub-prototypes for their corresponding sub-classes. This can optimize the construction and refinement of the feature space, bolstering the classifier's ability to distinguish class-wise relationships and improve the model's transferability across domains. A comparison between our proposed sub-prototypes mining approach and previous methods is illustrated in Figure 1. In previous methods, samples within a category were forced to be aligned together in the feature space regardless of whether there exist significant differences among them because the labels were one-hot encoded. Contrastively, our sub-prototypes' feature space distinguishes sub-classes with apparent differences within the category, thus improving the model's accuracy of domain adaptation and interpretability.
Our proposed approach, named memory-assisted sub-prototype mining (MemSPM), is inspired by works on the memory mechanism [17; 10; 45; 36]. In our approach, the memory generates sub-prototypes that embody sub-classes learned from the source domain. During testing of the target samples, the encoder produces an embedding that is compared to the source domain sub-prototypes learned in the memory. Subsequently, an embedding for the query sample is generated through weighted sub-prototype sampling in the memory. This results in reduced domain shift before the embedding is passed to the classifier. Our proposal of mining sub-prototypes, which are learned from the source domain memory, improves the universal domain adaptation performance by promoting more refined visual concept alignment.

Figure 1: Illustration of our motivation. (a) Examples of concept shift and intra-class diversity in DA benchmarks. For the class of alarm clocks, we find that digital clocks, pointer clocks, and alarm bells should be set in different sub-classes. For the class of airplane, we find that images containing more than one plane, single jetliner, and turboprop aircraft should be differently treated for adaptation. (b) Previous methods utilize one-hot labels to guide classifying without considering the intra-class distinction. Consequently, the model forces all samples from the same class to converge towards a single center, disregarding the diversity in the class. Our method clusters samples with large intra-class differences into separate sub-classes, providing a more accurate representation. (c) During domain adaptation by our design, the samples in the target domain can also be aligned near the sub-class centers with similar features rather than just the class centers determined by labels.
The MemSPM approach has been evaluated on four benchmark datasets (Office-31 [37], Office-Home [46], VisDA [33], and Domain-Net [32]), under various category shift scenarios, including PDA, OSDA, and UniDA. Our MemSPM method achieves state-of-the-art performance in most cases. Moreover, we design a visualization module for the sub-prototypes learned by our memory to demonstrate the interpretability of MemSPM. Our contributions can be highlighted as follows:
* We study the UniDA problem from a new aspect, which focuses on the negative impacts caused by overlooking the intra-class structure within a category when simply adopting one-hot labels.
* We propose Memory-Assisted Sub-Prototype Mining (MemSPM), which explores the memory mechanism to learn sub-prototypes, improving the model's adaptation performance and interpretability. Meanwhile, visualizations reveal the sub-prototypes stored in memory, which demonstrates the interpretability of the MemSPM approach.
* Extensive experiments on four benchmarks verify the superior performance of our proposed MemSPM compared with previous works.
## 2 Related Work
**Closed-Set Domain Adaptation (CSDA).** To mitigate the performance degradation caused by the closed-set domain shift, [16; 29; 48] introduce adversarial learning methods with the domain discriminator, aiming to minimize the domain gap between source and target domains. Beyond the use of the additional domain discriminator, some studies [41; 23; 50; 30; 13] have explored the use of two task-specific classifiers, otherwise referred to as bi-classifier, to implicitly achieve the adversarial learning. However, the previously mentioned methods for CSDA cannot be directly applied in scenarios involving the category shift.
**Partial Domain Adaptation (PDA).** PDA posits that private classes are exclusive to the source domain. Representative PDA methods, such as those discussed in [3; 49], employ domain discriminators with weight adjustments or utilize source samples based on their resemblance to the target domain [5]. Methods incorporating residual correction blocks in PDA have been introduced by Li et al. and Liang et al. [25; 27]. Other research [7; 11; 38] explores the use of Reinforcement Learning for source data selection within the context of PDA.
**Open-Set Domain Adaptation (OSDA).** Saito et al. [42] developed a classifier inclusive of an additional "unknown" class intended to differentiate categories unique to the target domain. Liu et al. [28] and Shermin et al. [43] propose assigning individual weights to each sample depending on their importance during domain adaptation. Jang et al. [20] strive to align the source and target-known distributions, while concurrently distinguishing the target-unknown distribution within the feature alignment process. The above PDA and OSDA methods are limited to specific category-shift settings.
**Universal Domain Adaptation (UniDA)** You et al. [47] proposed Universal Adaptation Network (UAN) to deal with the UniDA setting that the label set of the target domain is unknown. Li et al. [24] proposed Domain Consensus Clustering to differentiate the private classes rather than treat the unknown classes as one class. Saito et al. [40] suggested that using the minimum inter-class distance in the source domain as a threshold can be an effective approach for distinguishing between "known" and "unknown" samples in the target domain. However, most existing methods [24; 47; 40; 39; 6; 34; 8; 26] overlook the intra-class distinction within one category, especially in cases where there exists significant concept shift between the samples belonging to the same category.
## 3 Proposed Methods
### Preliminaries
In unsupervised domain adaptation, we are provided with labeled source samples \(\mathcal{D}^{s}=\{(x_{i}^{s},y_{i}^{s})\}_{i=1}^{n^{s}}\) and unlabeled target samples \(\mathcal{D}^{t}=\{(x_{i}^{t})\}_{i=1}^{n^{t}}\). As the label set for each domain in the UniDA setting may not be identical, we use \(C_{s}\) and \(C_{t}\) to represent the label sets of the two domains, respectively.
Then, we denote \(C=C_{s}\cap C_{t}\) as the common label set. \(\hat{C}_{s}\), \(\hat{C}_{t}\) are denoted as the private label sets of the source domain and target domain, respectively. We aim to train a model on \(\mathcal{D}^{s}\) and \(\mathcal{D}^{t}\) to classify target samples into \(|C|+1\) classes, where private samples are treated as unknown classes.
Our method aims to address the issue of intra-class concept shift that often exists within the labeled categories of most datasets, which is overlooked by previous methods. Our method enables the model to learn an adaptive feature space that better aligns fine-grained sub-class concepts, taking into account the diversity present within each category. Let \(X\) denote the input query, \(Z\) the embedding extracted by the encoder, \(L\) the data labels, \(\hat{Z}\) the embedding obtained from the memory, \(\hat{X}\) the visualization of the memory, \(\hat{L}\) the prediction of the input query, and \(K\) the number of top-\(K\) relevant sub-prototypes. The overall pipeline is presented in Figure 2. More details will be described in the following sub-sections.
### Input-Oriented Embedding vs. Task-Oriented Embedding
Usually, the image feature extracted by a visual encoder is directly used for learning downstream tasks. We call this kind of feature the input-oriented embedding. However, it heavily relies on the original image content. Since different samples of the same category often vary significantly in their visual features, categorization based on the input-oriented embedding is sometimes unattainable. In our pipeline, we simply adopt a CLIP-based [35] pre-trained visual encoder to extract the input-oriented embeddings, which are not directly used for learning our downstream task.

In our MemSPM, we propose to generate a task-oriented embedding, which is obtained by using the input-oriented embedding as a query to retrieve the sub-prototypes from our memory unit. We define \(f^{fixed}_{encode}(\cdot):X\to Z\) to represent the fixed pre-trained encoder and \(f^{UniDA}_{class}(\cdot):\hat{Z}\rightarrow\hat{L}\) to represent the UniDA classifier. The input-oriented embedding \(Z\) is used to retrieve the relevant sub-prototypes from the memory. The task-oriented embedding \(\hat{Z}\) is obtained using the retrieved sub-prototypes for classification tasks. In conventional approaches, \(\hat{Z}=Z\); that is, \(\hat{Z}\) is obtained directly from \(Z\). Our method obtains \(\hat{Z}\) by retrieving the sub-prototypes from the memory, which differentiates \(\hat{Z}\) from \(Z\) and eliminates the domain-specific information of the target domain during the testing phase. As a result, it improves the performance of \(f^{UniDA}_{class}(\cdot)\) when performing UniDA.
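This two-stage flow can be summarized in a short conceptual sketch (the function names are ours for illustration and do not correspond to a released implementation):

```python
# Conceptual sketch of Section 3.2: input-oriented vs. task-oriented embeddings.
def predict(x, f_encode_fixed, memory, f_class_unida):
    z = f_encode_fixed(x)        # input-oriented embedding Z from the frozen encoder
    z_hat = memory(z)            # task-oriented embedding Z-hat retrieved from sub-prototypes
    return f_class_unida(z_hat)  # UniDA prediction over |C|+1 classes
```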
### Memory-Assisted Sub-Prototype Mining
The memory module proposed in MemSPM consists of two key components: a memory unit responsible for learning sub-prototypes, and an attention-based addressing [18] operator to obtain a task-oriented representation \(\hat{Z}\) for the query that is more domain-invariant.
Figure 2: Our model first utilizes a fixed pre-trained model as the encoder to extract input-oriented embedding given an input sample. The extracted input-oriented embedding is then compared with sub-prototypes learned in memory to find the closest \(K\). These \(K\) are then weighted-averaged into a task-oriented embedding to represent the input, and used for learning downstream tasks. During the UniDA process, we adopt the cycle-consistent matching method on the task-oriented embedding \(\hat{Z}\) generated from the memory. Moreover, a decoder is designed to reconstruct the image, allowing for visualizing of the sub-prototypes in memory and verifying the effectiveness of sub-class learning.
#### 3.3.1 Memory Structure with Partitioned Sub-Prototype
The memory in MemSPM is represented as a matrix, denoted by \(M\in\mathbb{R}^{N\times S\times D}\), where \(N\) indicates the number of memory items stored, \(S\) refers to the number of sub-prototypes partitioned in each memory item, and \(D\) represents the dimension of each sub-prototype. For convenience, we assume \(D\) equals the dimension of \(Z\in\mathbb{R}^{C}\) (i.e., \(\mathbb{R}^{D}=\mathbb{R}^{C}\)). Let the vector \(m_{i,j}\), \(\forall i\in[N]\), denote the \(i\)-th row of \(M\), where \([N]\) denotes the set of integers from 1 to \(N\), and \(\forall j\in[S]\) denote the \(j\)-th sub-prototype of the \(i\)-th item, where \([S]\) denotes the set of integers from 1 to \(S\). Each \(m_{i}\) denotes a memory item. Given an embedding \(Z\in\mathbb{R}^{D}\), the memory module obtains \(\hat{Z}\) through a soft addressing vector \(W\in\mathbb{R}^{1\times 1\times N}\) as follows:
\[\hat{Z}=W\cdot M=\Sigma_{i=1}^{N}w_{i,j=s_{i}}\cdot m_{i,j=s_{i}}, \tag{1}\] \[w_{i,j=s_{i}}=\text{argmax}(w_{i,j},dim=1), \tag{2}\]
where \(W\) is a vector with non-negative entries that indicate the maximum attention weight of each item's sub-prototype, \(s_{i}\) denotes the index of the sub-prototype in the \(i\)-th item, and \(w_{i,j=s_{i}}\) denotes the \(i\), \(j=s_{i}\)-th entry of \(W\). The hyperparameter \(N\) determines the maximum capacity for memory items and the hyperparameter \(S\) defines the number of sub-prototypes in each memory item. The effect of different settings of hyper-parameters is evaluated in Section 4.
#### 3.3.2 Sub-Prototype Addressing and Retrieving
In MemSPM, the memory \(M\) is designed to learn the sub-prototypes to represent the input-oriented embedding \(Z\). We define the memory as a content addressable memory [17; 10; 45; 36] that allows for direct referencing of the content of the memory being matched. The sub-prototype is retrieved by attention weights \(W\) which are computed based on the similarity between the sub-prototypes in the memory items and the input-oriented embedding \(Z\). To calculate the weight \(w_{i,j}\), we use a softmax operation:
\[w_{i,j}=\frac{\exp(d(z,m_{i,j}))}{\Sigma_{n=1}^{N}\Sigma_{s=1}^{S}\exp(d(z,m_{ n,s}))}, \tag{3}\]
where \(d(\cdot,\cdot)\) denotes the cosine similarity measurement. As indicated by Eqs. 1 and 3, the memory module retrieves the sub-prototype that is most similar to \(Z\) from each memory item in order to obtain the new representation embedding \(\hat{Z}\). As a consequence of utilizing the adaptive threshold addressing technique (Section 3.3.3), only the top-\(K\) relevant sub-prototypes are used to obtain the task-oriented embedding \(\hat{Z}\) that serves to represent the encoded embedding \(Z\).
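The addressing-and-retrieval step can be sketched in a few lines of PyTorch; the tensor shapes follow the paper's notation, but the implementation details (batched queries, normalization-based cosine similarity) are our own assumptions rather than the authors' released code:

```python
import torch
import torch.nn.functional as F

def address_memory(z, M):
    """Sketch of Eqs. (1)-(3). z: (B, D) query embeddings; M: (N, S, D) memory."""
    B, D = z.shape
    N, S, _ = M.shape
    z_n = F.normalize(z, dim=-1)                    # unit-norm queries
    m_n = F.normalize(M.reshape(N * S, D), dim=-1)  # unit-norm sub-prototypes
    sim = z_n @ m_n.t()                             # cosine similarities d(z, m_ij), (B, N*S)
    w = F.softmax(sim, dim=-1).reshape(B, N, S)     # Eq. (3): softmax over all N*S entries
    w_max, idx = w.max(dim=2)                       # Eq. (2): strongest sub-prototype per item
    m_sel = M[torch.arange(N).unsqueeze(0), idx]    # gather selected sub-prototypes, (B, N, D)
    z_hat = (w_max.unsqueeze(-1) * m_sel).sum(1)    # Eq. (1): weighted sum over items, (B, D)
    return z_hat, w_max
```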
#### 3.3.3 Adaptive Threshold Technique for More Efficient Memory
Limiting the number of sub-prototypes retrieved can enhance memory utilization and avoid negative impacts on unrelated sub-prototypes during model parameter updates. Despite the natural reduction in the number of selected memory items, the attention-based addressing mechanism may still combine small attention-weight items into the output embedding \(\hat{Z}\), which has a negative impact on the classifier and the sub-prototypes in the memory. Therefore, it is necessary to impose a hard limit on the number of relevant sub-prototypes retrieved. To address this issue, we apply an adaptive threshold operation to restrict the number of sub-prototypes retrieved in a forward pass.
\[\hat{w}_{i,j=s_{i}}=\begin{cases}w_{i,j=s_{i}},&w_{i,j=s_{i}}>\lambda\\ 0,&\text{other}\end{cases} \tag{4}\]
where \(\hat{w}_{i,j=s_{i}}\) denotes the \(i,j=s_{i}\)-th entry of \(\hat{w}\), and \(\lambda\) denotes the adaptive threshold:
\[\lambda=\text{argmin}(topk(w)). \tag{5}\]
Directly implementing the backward pass for the discontinuous function in Eq. 4 is not an easy task. For simplicity, we use the method of [17] that rewrites the operation using the continuous ReLU activation function as:
\[\hat{w}_{i,j=s_{i}}=\frac{\max(w_{i,j=s_{i}}-\lambda,0)\cdot w_{i,j=s_{i}}}{|w_{i,j=s_{i}}-\lambda|+\epsilon}, \tag{6}\]
where \(\text{max}(\cdot,0)\) is commonly referred to as the ReLU activation function, and \(\epsilon\) is a small positive scalar. The prototype \(\hat{Z}\) will be obtained by \(\hat{Z}=\hat{W}\cdot M\). The adaptive threshold addressing encourages the model to represent embedding \(Z\) using fewer but more relevant sub-prototypes, leading to learning more effective features in memory and reducing the impact on irrelevant sub-prototypes.
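A minimal sketch of this top-\(K\) thresholding (Eqs. 4-6), continuing the `address_memory` sketch above; the final re-normalization of the retained weights is our own assumption, as the paper does not spell it out:

```python
import torch

def adaptive_topk(w_max, k, eps=1e-12):
    """Sketch of Eqs. (4)-(6). w_max: (B, N) per-item attention weights."""
    lam = w_max.topk(k, dim=1).values[:, -1:]  # Eq. (5): k-th largest weight as threshold
    # Eq. (6): continuous ReLU form of the hard threshold in Eq. (4);
    # entries at or below lambda are suppressed, larger entries are (approximately) kept.
    w_hat = torch.relu(w_max - lam) * w_max / (torch.abs(w_max - lam) + eps)
    # Re-normalize the retained weights (assumption; keeps z_hat on a comparable scale).
    return w_hat / w_hat.sum(dim=1, keepdim=True).clamp_min(eps)
```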
### Visualization and Interpretability
We denote \(f^{unfixed}_{decode}(\cdot):\hat{Z}\rightarrow\hat{X}\) as the decoder. The decoder is trained to visualize what has been learned in the memory by taking the retrieved sub-prototypes as input. From an interpretability perspective, each encoded embedding \(Z\) is compared via cosine similarity to find the top-\(K\) best-fitting sub-prototype representations for the given input-oriented embedding. Then, these sub-prototypes are combined to represent \(Z\) in \(\hat{Z}\). The sub-prototypes in this process can be regarded as visual descriptions of the input embedding \(Z\); in other words, the input image resembles the sub-classes represented by these sub-prototypes. In this way, samples with significant intra-class differences are matched to different sub-prototypes, thereby distinguishing different sub-classes. An auxiliary reconstruction task can visualize the sub-prototypes in memory to confirm whether our approach has learned the intra-class differences within the annotated categories. The results of this visualization are demonstrated in Figure 3.
### Cycle-Consistent Alignment and Adaptation
Once the sub-prototypes are mined through memory learning, the method of cycle-consistent matching, inspired by DCC [24], is employed to align the embedding \(\hat{Z}\). Cycle-consistent matching is preferred because it fits the memory structure better than other UniDA methods. An alternative, the One-vs-All Network (OVANet) proposed by Saito et al. [40], would require training the memory multiple times, which can lead to significant computational overhead. In brief, the cycle-consistent alignment iteratively learns a consensus set of clusters between the two domains. The consensus clusters are identified based on the similarity of the prototypes, measured using a similarity metric computed on the feature representations of the prototypes. For unknown classes, we set the size \(N\) of our memory during the initial phase to be larger than the number of possible sub-classes that may be learned in the source domain. This size is a hyperparameter that is adjusted based on the dataset size. Redundant sub-prototypes are invoked to represent \(\hat{Z}\) when encountering unknown classes, allowing for an improved distance separation between unknown and known classes in the feature space.
**Training Objective**. The adaptation loss in our training is similar to that of DCC, denoted \(\mathcal{L}_{DA}\):
\[\mathcal{L}_{DA}=\mathcal{L}_{ce}+\lambda_{1}\mathcal{L}_{cdd}+\lambda_{2} \mathcal{L}_{reg}, \tag{7}\]
where the \(\mathcal{L}_{ce}\) denotes the cross-entropy loss on source samples, \(\mathcal{L}_{cdd}\) denotes the domain alignment loss and \(\mathcal{L}_{reg}\) denotes the regularizer. For the auxiliary reconstruction task, we add a mean-squared-error (MSE) loss function, denoted as \(\mathcal{L}_{rec}\). Thus, the model is optimized with:
\[\mathcal{L}=\mathcal{L}_{DA}+\lambda_{3}\mathcal{L}_{rec}=\mathcal{L}_{ce}+ \lambda_{1}\mathcal{L}_{cdd}+\lambda_{2}\mathcal{L}_{reg}+\lambda_{3}\mathcal{ L}_{rec}. \tag{8}\]
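For concreteness, the full objective of Eq. 8 can be written as a one-line combination; the helper below is a sketch that assumes the four per-term losses are already computed as scalar tensors elsewhere in the training loop, with default coefficients taken from Section 4:

```python
def total_loss(loss_ce, loss_cdd, loss_reg, loss_rec,
               lambda1=0.1, lambda2=3.0, lambda3=0.5):
    # Eq. (8); default coefficients follow Section 4 (set empirically, following DCC).
    return loss_ce + lambda1 * loss_cdd + lambda2 * loss_reg + lambda3 * loss_rec
```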
## 4 Experiments
### Datasets and Evaluation Metrics
We first conduct the experiments in the UniDA setting [47] where private classes exist in both domains. Moreover, we also evaluate our approach on two other sub-cases, namely Open-Set Domain Adaptation (OSDA) and Partial Domain Adaptation (PDA).
**Datasets**. Our experiments are conducted on four datasets:
Office-31 [37], which contains 4652 images from three domains (DSLR, Amazon, and Webcam); OfficeHome [46], a more difficult dataset consisting of 15500 images across 65 categories and 4 domains (Artistic images, Clip-Art images, Product images, and Real-World images); VisDA [33], a large-scale dataset with a synthetic source domain of 15K images and a real-world target domain of 5K images; and DomainNet [32], the largest domain adaptation dataset with approximately 600,000 images. Similar to previous studies [14], we evaluate our model on three subsets of DomainNet (Painting, Real, and Sketch).
As in previous work [24, 41, 2, 4, 47], we divide the label set into three groups: common classes \(C\), source-private classes \(\hat{C}_{s}\), and target-private classes \(\hat{C}_{t}\). The separation of classes for each of the four datasets is shown in Table 3 and is determined according to alphabetical order.
**Evaluation Metrics**. We report the average results of three runs. For the PDA scenario, we calculate the classification accuracy over all target samples. The usual metrics adopted to evaluate OSDA are the average class accuracy over the known classes \(OS^{*}\), and the accuracy of the unknown class \(UNK\). In the OSDA and UniDA scenarios, we consider the balance between "known" and "unknown" categories and report the H-score [1]:
\[\text{H-score}=2\times\frac{OS^{*}\times UNK}{OS^{*}+UNK}, \tag{9}\]
which is the harmonic mean of the accuracy of "known" and "unknown" samples.
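As a quick sanity check of Eq. 9 (the accuracies below are illustrative numbers, not results from the paper):

```python
def h_score(os_star, unk):
    # Eq. (9): harmonic mean of known-class accuracy OS* and unknown-class accuracy UNK.
    return 2 * os_star * unk / (os_star + unk)

print(h_score(0.85, 0.70))  # ~0.768; low accuracy on either side drags the score down
```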
**Implementation Details**. Our implementation is based on PyTorch [31]. We use a CLIP [35] pre-trained backbone [12], since MemSPM is hard to train with a randomly initialized encoder. The classifier consists of two fully-connected layers, following previous designs [4, 47, 41, 14, 24]. The weights in \(\mathcal{L}\) are empirically set as \(\lambda_{1}=0.1\), \(\lambda_{2}=3\), and \(\lambda_{3}=0.5\), following DCC [24]. For a fair comparison, we also adopt CLIP as the backbone for DCC [24] and the state-of-the-art method GLC [34]. We use the official code of DCC [24] and GLC [34] (links in Appendix D).

Table 1: H-score (\(\%\)) comparison in the UniDA scenario on DomainNet, VisDA, and Office-31; some results are cited from [24, 34].

Table 2: H-score (\(\%\)) comparison in the UniDA scenario on Office-Home.
### Comparison with State-of-the-Art Methods
We compare our method with previous state-of-the-art algorithms in three sub-cases of unsupervised domain adaptation, namely, open-set domain adaptation (OSDA), partial domain adaptation (PDA), and universal domain adaptation (UniDA).
**Results on UniDA.** In the most challenging setting, i.e., UniDA, our MemSPM approach achieves state-of-the-art performance. Table 1 shows the results on DomainNet, VisDA, and Office-31, and the results on Office-Home are summarized in Table 2. We mainly compare with GLC and DCC using ViT-B/16 as the backbone. On Office-31, MemSPM+DCC outperforms the previous state-of-the-art method GLC by \(3.7\%\) and surpasses DCC by \(6.4\%\). On VisDA, our method surpasses DCC by a large margin of \(16.1\%\). Our method also surpasses GLC by \(9.9\%\) and DCC by \(4.5\%\) on DomainNet. On Office-Home, we surpass DCC by \(9.8\%\) and GLC by \(3.7\%\).
**Results on OSDA and PDA.** In Tables 4 and 5, we present the results on Office-Home, Office-31, and VisDA under the OSDA and PDA scenarios. In the OSDA scenario, MemSPM+DCC still achieves state-of-the-art performance. Specifically, MemSPM+DCC obtains a \(95.6\%\) H-score on Office-31, with an improvement of \(5.5\%\) compared to GLC and \(13.7\%\) compared to DCC. In the PDA scenario, MemSPM still achieves performance comparable to methods tailored for PDA. MemSPM+DCC surpasses DCC by \(8.1\%\) on VisDA.
**Effect of MemSPM**. MemSPM significantly enhances DCC performance with CLIP as the backbone. CLIP is chosen because MemSPM's memory module, with a large latent space initialized from a random normal distribution, faces challenges in retrieving diverse sub-prototypes early in training. CLIP's learned global feature space addresses this issue.
**Sensitivity to Hyper-parameters**. We conducted experiments on the VisDA dataset under the UniDA setting to demonstrate the impact of the hyperparameters \(S\) and \(N\) on the performance of our method. The impact of \(S\) is shown in Figure 3. When \(S\geq 20\), the performance reaches a comparable level. At the same time, the performance of the model is not sensitive to the value of \(N\) when \(S=30\).
**Effect of CLIP-based Feature**. As shown in Table 6, we conducted experiments to compare ViT-B/16 (pre-trained by CLIP), ViT-B/16 (pre-trained on ImageNet), and ViT-B/16 (without pre-training). The performance of MemSPM on Office-Home using ViT-B/16 (ImageNet) is 76.7% (H-score), which is 7.5% lower than MemSPM using ViT-B/16 (pre-trained by CLIP). Additionally, ViT-B/16 (without pre-training) only achieves 64.3%, which is 19.9% lower than ViT-B/16 (pre-trained by CLIP).
**Effect of Adaptive Threshold**. As shown in Table 6, to demonstrate the effectiveness of the adaptive threshold, we searched for the best-performing fixed threshold (0.005) through experiments. A fixed threshold limits the memory's ability to learn sub-prototypes, achieving only 73.9% (H-score) on Office-Home.
**Effect of Loss**. As shown in Table 6, we experimented with the loss contributions. \(\mathcal{L}_{ce}\) for classification is essential; removing \(\mathcal{L}_{cdd}\) led to a 4.4% drop (79.8%). The optimal coefficients for \(\mathcal{L}_{cdd}\) (\(\lambda_{1}=0.1\)) and \(\mathcal{L}_{reg}\) (\(\lambda_{2}=3\)) achieve the best performance. The reconstruction loss (\(\mathcal{L}_{rec}\)) slightly improves performance and is mainly used for visualizing the sub-prototypes.
Table 6: Ablation Studies
Figure 3: (a) The tSNE visualization shows the feature space of the sub-classes belonging to each category, demonstrating that MemSPM mines the sub-prototypes successfully. (b) The results for different values of \(S\) and \(N\). (c) The reconstruction visualization shows what has been learned in the memory, demonstrating that the intra-class diversity has been captured by MemSPM.
## 5 Conclusion
In this paper, we propose the Memory-Assisted Sub-Prototype Mining (MemSPM) method, which learns intra-class diversity by mining sub-prototypes that represent sub-classes. Compared with previous methods, which overlook the intra-class structure by using one-hot labels, MemSPM learns class features from a more fine-grained, sub-class perspective to improve adaptation performance. Moreover, the tSNE and reconstruction visualizations demonstrate that the sub-prototypes are learned as expected. Our MemSPM method exhibits superior performance in most cases compared with previous state-of-the-art methods on four benchmarks.
|
2303.02204 | KGLiDS: A Platform for Semantic Abstraction, Linking, and Automation of
Data Science | In recent years, we have witnessed the growing interest from academia and
industry in applying data science technologies to analyze large amounts of
data. In this process, a myriad of artifacts (datasets, pipeline scripts, etc.)
are created. However, there has been no systematic attempt to holistically
collect and exploit all the knowledge and experiences that are implicitly
contained in those artifacts. Instead, data scientists recover information and
expertise from colleagues or learn via trial and error. Hence, this paper
presents a scalable platform, KGLiDS, that employs machine learning and
knowledge graph technologies to abstract and capture the semantics of data
science artifacts and their connections. Based on this information, KGLiDS
enables various downstream applications, such as data discovery and pipeline
automation. Our comprehensive evaluation covers use cases in data discovery,
data cleaning, transformation, and AutoML. It shows that KGLiDS is
significantly faster with a lower memory footprint than the state-of-the-art
systems while achieving comparable or better accuracy. | Mossad Helali, Niki Monjazeb, Shubham Vashisth, Philippe Carrier, Ahmed Helal, Antonio Cavalcante, Khaled Ammar, Katja Hose, Essam Mansour | 2023-03-03T20:31:04Z | http://arxiv.org/abs/2303.02204v4 | # Linked Data Science Powered by Knowledge Graphs
###### Abstract.
In recent years, we have witnessed a growing interest in data science not only from academia but particularly from companies investing in data science platforms to analyze large amounts of data. In this process, a myriad of data science artifacts, such as datasets and pipeline scripts, are created. Yet, there has so far been no systematic attempt to holistically exploit the collected knowledge and experiences that are implicitly contained in the specification of these pipelines, e.g., compatible datasets, cleansing steps, ML algorithms, parameters, etc. Instead, data scientists still spend a considerable amount of their time trying to recover relevant information and experiences from colleagues, trial and error, lengthy exploration, etc. In this paper, we, therefore, propose a scalable system (KGLiDS) that employs machine learning to extract the semantics of data science pipelines and captures them in a knowledge graph, which can then be exploited to assist data scientists in various ways. This abstraction is the key to enabling Linked Data Science since it allows us to share the essence of pipelines between platforms, companies, and institutions without revealing critical internal information and instead focusing on the semantics of what is being processed and how. Our comprehensive evaluation uses thousands of datasets and more than thirteen thousand pipeline scripts extracted from data discovery benchmarks and the Kaggle portal and shows that KGLiDS significantly outperforms state-of-the-art systems on related tasks, such as dataset recommendation and pipeline classification.
|
2305.00945 | How effective is multifactor authentication at deterring cyberattacks? | This study investigates the effectiveness of multifactor authentication (MFA)
in protecting commercial accounts from unauthorized access, with an additional
focus on accounts with known credential leaks. We employ the
benchmark-multiplier method, coupled with manual account review, to evaluate
the security performance of various MFA methods in a large dataset of Microsoft
Azure Active Directory users exhibiting suspicious activity. Our findings
reveal that MFA implementation offers outstanding protection, with over 99.99%
of MFA-enabled accounts remaining secure during the investigation period.
Moreover, MFA reduces the risk of compromise by 99.22% across the entire
population and by 98.56% in cases of leaked credentials. We further demonstrate
that dedicated MFA applications, such as Microsoft Authenticator, outperform
SMS-based authentication, though both methods provide significantly enhanced
security compared to not using MFA. Based on these results, we strongly
advocate for the default implementation of MFA in commercial accounts to
increase security and mitigate unauthorized access risks. | Lucas Augusto Meyer, Sergio Romero, Gabriele Bertoli, Tom Burt, Alex Weinert, Juan Lavista Ferres | 2023-05-01T16:59:28Z | http://arxiv.org/abs/2305.00945v1 | # How effective is multifactor authentication at deterring cyberattacks?
###### Abstract
This study investigates the effectiveness of multifactor authentication (MFA) in protecting commercial accounts from unauthorized access, with an additional focus on accounts with known credential leaks. We employ the benchmark-multiplier method, coupled with manual account review, to evaluate the security performance of various MFA methods in a large dataset of Microsoft Azure Active Directory users exhibiting suspicious activity. Our findings reveal that MFA implementation offers outstanding protection, with over 99.99% of MFA-enabled accounts remaining secure during the investigation period. Moreover, MFA reduces the risk of compromise by 99.22% across the entire population and by 98.56% in cases of leaked credentials. We further demonstrate that dedicated MFA applications, such as Microsoft Authenticator, outperform SMS-based authentication, though both methods provide significantly enhanced security compared to not using MFA. Based on these results, we strongly advocate for the default implementation of MFA in commercial accounts to increase security and mitigate unauthorized access risks.
## 1 Introduction
In the past decade, prominent identity providers such as Microsoft, Google and Okta have increasingly adopted risk-based authentication, also known as _challenges_, to enhance security against unauthorized access. These challenges utilize various passive signals to identify anomalous login attempts, including IP geolocation, device and IP address reputation, and the interval between login attempts. Upon detecting irregular login patterns or receiving a user's request to change their password, identity providers issue a challenge requesting supplementary forms of authentication to grant access to the protected resource[1].
Supplementary verification methods can be classified into three categories, also called _factors_: knowledge (something the user **knows**), possession (something the user **has**), or inference (something the user **is**). When an authentication scheme requires a secondary factor of authentication, it is referred to as two-factor authentication (2FA). More broadly, multifactor authentication (MFA) encompasses authentication methods that require users to present two or more factors to the authentication mechanism [2].
Although there is a lot of variability on factors required to authenticate consumer accounts, companies such as Microsoft and Okta that provide authentication services to enterprises primarily require a possession
verification method, sending a code to a device that the user possesses[3]. Various methods exist for code generation and delivery, including SMS, dedicated mobile applications like Microsoft Authenticator, or authentication-specific devices such as Yubikey [4]. To use these supplementary authentication measures, users must pre-register them with their accounts. However, the increased friction of pre-registering and frequently verifying a code on a secondary device can potentially reduce adoption and increase account lockouts [5, 6].
## 2 Previous Research and Our Contribution
Prior research has investigated the efficacy of multi-factor authentication (MFA) challenges for consumer accounts, such as the Microsoft Account (MSA) and the Google Account, and found that 1) MFA challenges are highly effective in preventing account compromise, 2) some types of additional authentication forms are more effective than others at preventing account compromise, and 3) there are trade-offs between prevention effectiveness, ease of adoption, and ease of use [5, 7, 8, 9]. Recently, the continued effectiveness of MFA has been called into question [10, 11].
Consumer accounts are pervasive and primarily grant access to free services, including personal email, media personalization, and instant messaging. In contrast, accounts provided by enterprises and governmental institutions to their workforce and customers often grant access to different types of data and resources, such as payment information, servers containing aggregated financial data, and computational resources. These commercial accounts often rely on protection from commercial identity products like Microsoft's Azure Active Directory (AAD) and Okta's Workforce Identity Cloud, although some large providers, such as Amazon, use their own in-house solutions [3].
During our measurement period, commercial accounts constituted approximately one-third of the total accounts in use within a given month. Unlike consumer account users, who directly register with authentication providers, commercial account users must register with an intermediate layer known as the tenant administrator, typically their own institution. For instance, a university professor's account is provided by the university itself, even when the authentication services are ultimately performed by an identity provider like Microsoft. The tenant administrator, the university in our example, is responsible for registering and maintaining accounts, defining security policies, including which resources will require MFA and the type of MFA to be used, and delivering first-level support[12].
Although consumer account data may occasionally hold value, gaining access to commercial accounts is generally more valuable[13]. Consequently, bad actors may dedicate more time and resources to targeting commercial accounts, which may result in MFA having different effectiveness for commercial accounts. This paper focuses on evaluating the effectiveness of security solutions applied to commercial accounts and comparing these findings to previous research conducted on consumer accounts.
## 3 Methodology and Data
Our goal is to determine the effectiveness of MFA in preventing account compromise in the population of commercial accounts. It is generally not possible for an authentication provider to obtain the exact number of account compromises in a population without resorting to sampling and manual reviews. When users detect an account compromise, they may simply change their passwords and not notify their administrators. Even when the administrators are notified, they may choose not to notify the authentication provider. Therefore, methods that rely on the authentication provider using reported account compromises will result in an undercount of the actual rate. On the other hand, it is cost-prohibitive for an authentication provider that has billions of accounts to manually review all suspected compromises. Therefore, we have to rely on sampling methods.
To achieve our goal, we obtained a list of active Microsoft Azure Active Directory users that had their account reviewed due to suspicious activity between April 22, 2022, and September 22, 2022. Some accounts had MFA configured, and some did not. If the account had suspicious activity and had MFA configured, a challenge was automatically issued. A sample of the sessions was retroactively reviewed by a specialized
team that examines account logs and determines whether a compromise occurred or not. If a compromise was detected, the account was sanitized, and the user notified.
To estimate the proportion of compromised accounts in the whole population, we use the benchmark multiplier method[14], commonly used in epidemiological research in situations where individuals tend to underreport the actual frequency of an event. The benchmark multiplier method requires two datasets: one, the benchmark, has a complete and accurate count of the event being studied for a subgroup of the population. The other dataset is a representative sample from the population, used to estimate the proportion of the population represented by the benchmark. The reciprocal of that proportion is called the multiplier.
In our case, the benchmark is the set of accounts that were manually reviewed by the account specialists. For this dataset, we have the exact numbers of accounts compromised. Our benchmark is divided into two MFA categories (MFA enabled and MFA not enabled). To connect the benchmark with the total population, we obtain a random sample of accounts of the whole population for each category and calculate the proportion \(\pi\) of accounts that are in our benchmark and have been compromised.
Using the methodology laid out in [15], given a benchmark of size \(\hat{N}_{x}\) and the probability \(\hat{\pi}\) for members of the representative sample to be in the benchmark, we can estimate \(\hat{N}_{y}\), the number of accounts compromised in the population as
\[\hat{N}_{y}=\frac{\hat{N}_{x}}{\hat{\pi}}\]
For each category, following [16], the proportion \(\pi\) is distributed \(\hat{\pi}\sim\beta(x+1,n-x+1)\), where \(n\) is the size of the representative sample and \(x\) is the number of members of that sample that share the benchmark's characteristics. In addition, even if \(\hat{N}_{x}\) and \(\hat{\pi}\) are unbiased, \(\hat{N}_{y}\) is a biased estimator of \(N_{y}\) because of its non-linearity with respect to \(\hat{\pi}\). Therefore, following [16], we use a bias-corrected estimator:
\[\hat{N}_{y}=\frac{\hat{N}_{x}}{\hat{\pi}}-\frac{1}{n}\hat{N}_{x}\frac{(1-\hat {\pi})}{\hat{\pi}}\]
We use a Monte Carlo simulation to estimate \(\hat{N}_{y}\) for each category. We run each simulation 1,000 times. Our 95% confidence intervals are based on the 2.5% and 97.5% percentiles of the 1,000 simulated estimates. The estimates for the proportion \(\hat{\pi}\) are in Table 1.
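A minimal sketch of this simulation is given below. The benchmark counts `N_x` are those of Table 1, while the representative-sample size `n` and the benchmark-overlap count `x` are hypothetical placeholders chosen only to land inside the published confidence intervals:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_population_compromises(N_x, n, x, n_sims=1000):
    """Bias-corrected benchmark-multiplier estimate of N_y.

    N_x : compromises counted exactly in the benchmark
    n   : size of the representative sample
    x   : members of the sample that fall in the benchmark
    """
    pi = rng.beta(x + 1, n - x + 1, size=n_sims)      # posterior of the proportion
    N_y = N_x / pi - (N_x / n) * (1 - pi) / pi        # bias-corrected estimator
    return np.percentile(N_y, [2.5, 50.0, 97.5])      # 95% CI bounds and median

# Benchmark counts from Table 1; n and x are illustrative only.
print(estimate_population_compromises(N_x=1_525, n=100_000, x=2_600))   # with MFA
print(estimate_population_compromises(N_x=15_195, n=100_000, x=215))    # without MFA
```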
## 4 Results
Our results are shown in Table 1, where (a) is the number of compromises measured in the benchmark, (b) is the median number of compromises estimated in the population, and (c) is the median compromise rate estimated in the population.
According to these estimates, the median estimated compromise rate of MFA accounts is 0.0079%, which means that MFA accounts have a protection factor better than 99.99% for commercial accounts, in line with estimates previously found for consumer accounts.
We also calculate effectiveness as the proportion of risk reduction, using the same formula used to calculate vaccine effectiveness. A member of the population treated with MFA has an estimated median risk of
| Category | \(\hat{\pi}\) (95% CI) | (a) | (b) | (c) |
| --- | --- | --- | --- | --- |
| With MFA | 2.20%–3.01% | 1,525 | 59,414 | 0.0079% |
| Without MFA | 0.18%–0.26% | 15,195 | 7,085,925 | 1.0071% |

Table 1: Results with and without MFA
0.0079%, while a member of the population not treated with MFA has an estimated median risk of 1.0071%. Therefore, the risk reduction of using MFA is
\[\text{RR}=1-\frac{\text{treatment}}{\text{no treatment}}=1-\frac{0.0079\%}{1.0071 \%}=99.22\%\]
Another way of measuring the effectiveness of MFA is calculating the ratio of account compromises coming from accounts with and without MFA enabled, as shared by Microsoft in the 2020 RSA conference [17]. Using the median estimate of compromises in our data, we find that \(1-\frac{59,414}{(7,085,925+59,414)}=99.17\%\) of the compromised accounts did not have MFA enabled. This is slightly lower than the number found in 2019 by [17]. However, between 2019 and 2022, we have observed the adoption of MFA to have increased by over 400%.
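For concreteness, both effectiveness figures follow directly from the median estimates above; the numbers below are those of Table 1:

```python
risk_mfa, risk_no_mfa = 0.000079, 0.010071            # 0.0079% and 1.0071%
rr = 1 - risk_mfa / risk_no_mfa                       # risk reduction from MFA
share_no_mfa = 1 - 59_414 / (7_085_925 + 59_414)      # compromises lacking MFA
print(f"{rr:.2%}, {share_no_mfa:.2%}")                # 99.22%, 99.17%
```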
## 5 Accounts with Known Leaked Credentials
In 2019, Google published a study about consumer accounts that found that challenges and MFA prevented 100% of automated attacks, 96% of bulk phishing attacks, and 76% of targeted attacks [5]. These percentages were calculated for a subset of accounts for which an attack was known to have happened, and therefore not directly comparable to our figures above.
We obtained a sample of 128,000 accounts that had their passwords leaked between April and September of 2022. Users were immediately notified. We retroactively studied those accounts for 30 days prior to the discovery of the credential leak. Reviewing the accounts manually, we found 7,861 accounts that had MFA and for which we could confirm that attackers used the passwords to try to obtain access to protected resources. For those accounts, we found that MFA prevented 98.6% of the attacks.
For this sample, we were able to analyze the specific type of MFA used and its performance. The detailed results are in Table 2. Similar to [5], we find that SMS was 40.8% less effective than Microsoft Authenticator, a mobile application specifically designed for multi-factor authentication.
## 6 Conclusion
In this research, we have conducted the first analysis of the effectiveness of multifactor authentication (MFA) in securing commercial accounts. By leveraging the benchmark-multiplier method and manually reviewing a sample of potentially compromised accounts, we have found that 99.99% of accounts with MFA enabled remained protected throughout the investigation period. Our findings further demonstrate that implementing MFA leads to a 99.22% reduction in the risk of compromise across the entire population, and a 98.56% reduction even in cases where credentials have been leaked. These results for commercial accounts are similar to the results reported in previous studies for consumer accounts.
In addition, our study finds that dedicated MFA applications outperform SMS-based authentication, although both methods are significantly more effective than not employing MFA at all. In light of these findings, we strongly advocate for the default activation of MFA in commercial accounts to bolster cybersecurity measures, as already required by many institutions[18].
| MFA Type | Failure Rate |
| --- | --- |
| Authenticator OTP | 0.99% |
| Authenticator Notifications | 0.97% |
| SMS | 1.66% |
| **Total** | **1.44%** |

Table 2: Failure rates by MFA type for accounts with leaked credentials
|
2310.15640 | Contribution to Galactic cosmic rays from young stellar clusters | The origin of Galactic cosmic rays (CR) is still a matter of debate.
Diffusive shock acceleration (DSA) applied to supernova remnant (SNR) shocks
provides the most reliable explanation. However, within the current
understanding of DSA several issues remain unsolved, like the CR maximum
energy, the chemical composition and the transition region between Galactic and
extra-Galactic CRs. These issues motivate the search for other possible
Galactic sources. Recently, several young stellar clusters (YSC) have been
detected in gamma rays, suggesting that such objects could be powerful sources
of Galactic CRs. The energy input could come from winds of massive stars hosted
in the clusters which is a function of the cluster total mass and initial mass
function of stars. In this work we evaluate the total CR flux produced by a
synthetic population of YSCs assuming that the CR acceleration occurs at the
termination shock of the collective wind resulting from the sum of cluster's
stellar winds. We show that the spectrum produced by YSC can significantly
contribute to energies $\gtrsim 100$ TeV if the diffusion inside the wind-blown
bubble is Bohm-like and the spectral slope is harder than the one produced by
SNRs. | G. Morlino, S. Menchiari, E. Amato, N. Bucciantini | 2023-10-24T08:59:35Z | http://arxiv.org/abs/2310.15640v1 | # Contribution to Galactic cosmic rays from young stellar clusters
###### Abstract:
The origin of Galactic cosmic rays (CR) is still a matter of debate. Diffusive shock acceleration (DSA) applied to supernova remnant (SNR) shocks provides the most reliable explanation. However, within the current understanding of DSA several issues remain unsolved, like the CR maximum energy, the chemical composition and the transition region between Galactic and extra-Galactic CRs. These issues motivate the search for other possible Galactic sources. Recently, several young stellar clusters (YSC) have been detected in gamma rays, suggesting that such objects could be powerful sources of Galactic CRs. The energy input could come from winds of massive stars hosted in the clusters which is a function of the cluster total mass and initial mass function (IMF) of stars. In this work we evaluate the total CR flux produced by a synthetic population of YSCs assuming that the CR acceleration occurs at the termination shock of the collective wind resulting from the sum of cluster's stellar winds. We show that the spectrum produced by YSC can significantly contribute to energies \(\gtrsim 100\,\mathrm{TeV}\) if the diffusion inside the wind-blown bubble is Bohm-like and the spectral slope is harder than the one produced by SNRs.
## 1 Introduction
The origin of Galactic Cosmic Rays (GCRs) in the knee region is still debated in the community. Recently, young and massive stellar clusters (YMSCs) have been suggested as alternative candidate sources to supernova remnants (SNRs). The energy input could come from the cluster stellar winds which provide, during a lifetime of several million years, an energy comparable to SNR outputs [1]. In addition, YMSCs could offer favorable conditions for particle confinement. The stellar winds, in fact, generate a wind-blown bubble around the cluster with a typical size of tens of pc, where the magnetic turbulence may be enhanced with respect to the Galactic interstellar medium (ISM), increasing the particle diffusion time and, consequently, the possibility to achieve very high energies. Besides energetic considerations, acceleration of particles from the wind of massive stars appears to be a necessary component to explain the \({}^{22}\)Ne/\({}^{20}\)Ne anomaly in CR composition [2]: the enhancement observed with respect to Solar abundances requires acceleration from a carbon-enriched medium rather than from the standard ISM at a few percent level. The relative contribution of these different source populations (SNRs and YMSCs) to the observed CRs is yet to be clarified, in particular in the region across the _knee_ around a few PeV.
In recent years, diffuse gamma-ray emission has been detected in coincidence with many young massive stellar clusters (YMSC) by several gamma-ray facilities, like Fermi-LAT, H.E.S.S. [3] and LHAASO. These findings strongly support the idea that some acceleration mechanism is taking place there. Detailed morphological and spectral analyses suggest a continuous injection of possible hadronic origin [4, 5]. The Cygnus cocoon has even been observed at the highest energies ever probed by gamma-ray astronomy: a 1.4 PeV photon was detected by LHAASO [7], strengthening the idea that stellar clusters may act as PeVatrons.
The exact location where particle acceleration takes place in YMSCs is still unclear. For compact and young systems, the so-called wind termination shock (WTS) [8] is expected to be strong enough to enable particle acceleration at such high energies [9]. Alternatively, stochastic acceleration might be driven by the highly turbulent environment of the cluster, particularly in its core [10], further amplified once SN explosions start to occur [11]. In this work we want to estimate the contribution of YMSCs to the Galactic CR flux before supernovae start to explode. Hence, we will consider only acceleration at the cluster WTS. For each SC, we build a stellar population, whose properties in terms of wind speed and mass loss rate are used to determine the size of the wind-blown bubbles, as described in §2. Then, we will integrate the WTS contribution over the entire population of Galactic SCs. However, the SC population is reasonably well known only within \(\sim 2\) kpc from the Sun and, to overcome this lack of information, we will build a synthetic population of SCs based on the properties of local clusters, as explained in §3. Finally, in §4 we summarize the acceleration model of [8] and we discuss the results in §5. The reader is also referred to a companion work presented at the same conference [12] where the same approach is used to estimate the SC contribution to the Galactic diffuse gamma-ray emission.
## 2 Properties of stellar winds and wind-blown bubbles
To describe the bubble structure around a SC, for each star we need three quantities: the wind velocity, the mass loss rate, and the stellar age. In the following we will only deal with main
sequence stars, neglecting the contribution from the stellar final stages, like Wolf-Rayet, which, however, may contribute up to \(\sim 30\%\) of the wind power in a SC [1]. In the stellar wind theory, the wind velocity is generally written as [15]
\[v_{\rm w,\star}=C(T_{\rm eff})v_{\rm esc}=C(T_{\rm eff})\;[2gR_{\star}\;(1- \Gamma)]^{1/2} \tag{1}\]
where \(v_{\rm esc}\) is the escape speed from the star, \(g\) is the surface gravity and \(R_{\star}\) the stellar radius. The factor \(\Gamma=L_{\star}/L_{\rm Edd}\) takes into account the reducing effect of Thomson scattering on the gravitational potential. The wind velocity is in general larger than \(v_{\rm esc}\) due to the radiation pressure from the star. Such an effect is accounted for by the function \(C\), which depends on the effective surface temperature of the star; this is, in turn, estimated using the Stefan–Boltzmann law: \(T_{\rm eff}=\left[\frac{L_{\star}}{4\pi R_{\star}^{2}\sigma_{b}}\right]^{1/4}\), \(\sigma_{b}\) being the Stefan–Boltzmann constant. \(C\) ranges from 1 for \(T_{\rm eff}<10^{4}\) K up to 2.65 for \(T_{\rm eff}>2\times 10^{4}\) K. In this range we assume a linear increase with \(T_{\rm eff}\).
The stellar mass loss rate is a rather difficult quantity to constrain from observations. Stellar evolution codes provide results which, however, depend on several quantities like the metallicity and the stellar rotation. For the sake of simplicity, here we use the approximate model by [16], where the mass loss rate depends only on the stellar luminosity, mass and radius, and reads [for a comprehensive discussion see also 4]:
\[\dot{M}_{\star}=9.55\times 10^{-15}(L_{\star}/L_{\odot})^{1.24}(M_{\star}/M_{ \odot})^{0.16}(R_{\star}/R_{\odot})^{0.81}\,{\rm M_{\odot}\,yr^{-1}}\,. \tag{2}\]
The stellar luminosity is only a function of its mass and is taken from [4]; it consists of a smoothed broken power law mixing two different empirical mass-luminosity relations. The relation between stellar radius and mass is given to first approximation by \(R_{\star}/R_{\odot}=0.85(M_{\star}/M_{\odot})^{0.67}\)[17]. We notice that the uncertainty in the mass-radius relation translates into an uncertainty of only \(\sim 15\%\) in the final cluster's luminosity. For the purpose of this work, the properties of stellar winds can be considered almost stationary during the main sequence lifetime, which lasts
\[\log\left(T_{\rm age}/{\rm yr}\right)=6.43+0.825\;\left[\log\left(M_{\star}/1 20{\rm M_{\odot}}\right)\right]^{2}\,. \tag{3}\]
After such a time, we assume that the stellar wind no longer contributes to the SC wind. We stress again that the subsequent explosion of SNe from massive stars is neglected.
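The sketch below implements Eqs. (1)-(3) in cgs units. The mass-luminosity relation of [4] is not reproduced here; the simple \(L\propto M^{3.5}\) scaling and the Eddington luminosity \(L_{\rm Edd}\simeq 1.26\times 10^{38}\,(M/M_{\odot})\) erg/s are stand-in assumptions for illustration:

```python
import numpy as np

G, M_SUN, R_SUN, L_SUN = 6.674e-8, 1.989e33, 6.957e10, 3.828e33   # cgs
SIGMA_SB = 5.670e-5                                               # erg cm^-2 s^-1 K^-4

def luminosity(m):
    """L in L_sun for m in M_sun; stand-in for the broken power law of [4]."""
    return m**3.5

def radius(m):
    """R in R_sun: R/R_sun = 0.85 (M/M_sun)^0.67."""
    return 0.85 * m**0.67

def wind_speed(m):
    """Main-sequence wind speed of Eq. (1), in cm/s."""
    L, R = luminosity(m) * L_SUN, radius(m) * R_SUN
    gamma = L / (1.26e38 * m)                                 # Gamma = L / L_Edd
    v_esc = np.sqrt(2.0 * G * m * M_SUN * (1.0 - gamma) / R)
    t_eff = (L / (4.0 * np.pi * R**2 * SIGMA_SB))**0.25
    c = np.clip(1.0 + 1.65 * (t_eff - 1e4) / 1e4, 1.0, 2.65)  # linear ramp 1 -> 2.65
    return c * v_esc

def mdot(m):
    """Mass-loss rate of Eq. (2), in M_sun/yr."""
    return 9.55e-15 * luminosity(m)**1.24 * m**0.16 * radius(m)**0.81

def t_ms(m):
    """Main-sequence lifetime of Eq. (3), in yr."""
    return 10**(6.43 + 0.825 * np.log10(m / 120.0)**2)
```

For a \(40\,M_{\odot}\) star this returns a wind speed of a few thousand km/s and a mass-loss rate of order \(10^{-6}\,M_{\odot}\,{\rm yr}^{-1}\), in the expected range for O stars.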
For the initial mass function (IMF) of stars inside a cluster, we adopt the distribution from [18], a broken power-law in several mass ranges, \(\xi_{\star}=A_{i}(M_{\rm cl})M_{\star}^{-k_{i}}\), which reduces to a Salpeter IMF for \(M>M_{\odot}\) with \(k_{i}=2.35\). The IMF for each cluster is normalized to give the SC total mass, i.e. \(\int_{M_{\star,\rm min}}^{M_{\star,\rm max}}M\xi_{\star}(M_{\star},M_{\rm cl})dM=M_{\rm c}\). The minimum stellar mass is assumed to be \(M_{\star,\rm min}=0.08M_{\odot}\), related to the minimum theoretical mass required to support significant nuclear burning, and the maximum is \(M_{\star,\rm max}=150\) M\({}_{\odot}\), the largest observed stellar mass.
Once the stellar distribution is fixed, the properties of the SC collective wind can be estimated from the mass and momentum flux conservation, integrating over all stellar winds. Hence, the final SC mass loss rate and wind speed are:
\[\dot{M}_{\rm c}(M_{\rm c})=\int_{M_{\star,\rm min}}^{M_{\star, \rm max}}\dot{M}_{\star}\,\xi_{\star}(M_{\star},M_{\rm c})\;dM_{\star} \tag{4}\] \[v_{\rm w,c}=\frac{1}{\dot{M}_{\rm c}}\int_{M_{\star,\rm min}}^{M _{\star,\rm max}}\dot{M}_{\star}v_{\rm w,\star}\,\xi_{\star}(M_{\star},M_{\rm c })\;dM_{\star} \tag{5}\]
The bubble structure is described using the classical solution for adiabatic expansion [19]. Defining the cluster age \(t\) and the cluster luminosity \(L_{\rm w,c}=\frac{1}{2}\dot{M}_{\rm c}v_{\rm w,c}^{2}\), as well as the ISM mass density \(\rho_{0}\), the position of the TS is located at
\[R_{\rm s}(t)=48.6\,\,\left(\frac{\rho_{0}}{\rm cm^{-3}}\right)^{-0.3}\,\left( \frac{\dot{M}_{\rm c}}{10^{-4}{\rm M}_{\odot}{\rm yr}^{-1}}\right)^{0.3}\, \left(\frac{v_{\rm w,c}}{1000\,{\rm km\,s}^{-1}}\right)^{0.1}\,\left(\frac{t}{ 10\,{\rm Myr}}\right)^{0.4}\,{\rm pc} \tag{6}\]
while the bubble radius is
\[R_{\rm b}(t)=174\,\,\left(\frac{\rho_{0}}{\rm cm^{-3}}\right)^{-0.2}\,\left( \frac{L_{\rm w,c}}{10^{37}\,{\rm erg\,s}^{-1}}\right)^{0.2}\,\left(\frac{t}{10 \,{\rm Myr}}\right)^{0.6}\,{\rm pc}\,. \tag{7}\]
Because stellar clusters are born from giant molecular clouds, the local ISM in which their winds expand is usually denser than the average Galactic ISM. Here we assume a reference value of \(\rho_{0}=10\) protons/cm\({}^{3}\), identical for all SCs.
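Eqs. (6)-(7) translate directly into code; the fiducial numbers below are illustrative:

```python
def r_shock(rho0, mdot_c, v_wc, t):
    """Termination-shock radius in pc, Eq. (6); rho0 in cm^-3,
    mdot_c in M_sun/yr, v_wc in km/s, t in Myr."""
    return (48.6 * rho0**-0.3 * (mdot_c / 1e-4)**0.3
            * (v_wc / 1000.0)**0.1 * (t / 10.0)**0.4)

def r_bubble(rho0, l_w, t):
    """Bubble (forward-shock) radius in pc, Eq. (7); l_w in erg/s."""
    return 174.0 * rho0**-0.2 * (l_w / 1e37)**0.2 * (t / 10.0)**0.6

# Fiducial cluster: rho0 = 10 cm^-3 (the value adopted in the text),
# Mdot_c = 1e-4 Msun/yr, v_wc = 2000 km/s, age 3 Myr.
l_w = 0.5 * (1e-4 * 1.989e33 / 3.156e7) * (2e8)**2   # ~1.3e38 erg/s
print(r_shock(10.0, 1e-4, 2000.0, 3.0))              # ~16 pc
print(r_bubble(10.0, l_w, 3.0))                      # ~89 pc
```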
## 3 Stellar cluster distribution
The SC distribution in mass, time and position in the Galaxy is defined as
\[\xi_{\rm c}(M_{\rm c},T,\vec{r})=\frac{dN}{dTdM_{\rm c}d\Sigma}\,, \tag{8}\]
such that \(\xi_{\rm c}dTdM_{\rm c}\) is the number of SCs with initial masses \([M_{\rm c},\,M_{\rm c}+dM_{\rm c}]\) formed per unit surface \(d\Sigma\) of the Galactic disk, at the position \(\vec{r}\), in the time interval \([T,T+dT]\). Following [20] we assume that the distribution is factorized in time and mass. Moreover, we also assume that the distribution can be factorized in space, such that the cluster initial mass function (CIMF) depends on the distance \(r\) from the Galactic Centre only through a normalization factor. Hence, we can write
\[\xi_{\rm c}(M_{\rm c},T,\vec{r})=\psi(T)f_{\rm c}(M_{\rm c})\rho_{\rm c}(r) \tag{9}\]
where \(\psi\) is the SC formation rate (CFR), \(f_{\rm c}(M_{\rm c})\) is the CIMF and \(\rho_{\rm c}(r)\) is the cluster radial distribution, normalized to be unity at the Sun location, \(r_{\odot}=8.5\,{\rm kpc}\). To get the present distribution of SCs we should integrate Eq.(9) in time; however, here we are interested in describing only the young population of SCs, with an age \(\lesssim 10\,{\rm Myr}\), because for larger ages the wind power drops to negligible values and the production of CRs becomes irrelevant. [20] showed that the present SC distribution in the solar neighborhood is compatible with a formation rate roughly constant during the last \(\sim 50\,{\rm Myr}\). Hence the CFR can be assumed constant. Its value can be derived from the work by [21], who obtained a surface star formation rate in the solar neighbourhood of \(350\,{\rm M}_{\odot}\,{\rm Myr}^{-1}\,{\rm kpc}^{-2}\) for cluster masses between \(100\,{\rm M}_{\odot}\) and \(3\times 10^{4}\,{\rm M}_{\odot}\). This corresponds to an average CFR of \(\bar{\psi}=0.63\,{\rm kpc}^{-2}\,{\rm Myr}^{-1}\). For the CIMF, we follow [20], which derived for the SC population in the solar neighborhood the following broken power-law:
\[f_{\rm c}(M_{\rm c})=\begin{cases}k_{1}\,M_{\rm c}^{-1.63}&\mbox{for}\quad M_{\rm c,min}<M_{\rm c}<M_{\rm c}^{*}\\ k_{2}\,M_{\rm c}^{-1.24}&\mbox{for}\quad M_{\rm c}^{*}<M_{\rm c}<M_{\rm c,max}\end{cases} \tag{10}\]
where \(M_{\rm c,min}=2.5\,{\rm M}_{\odot}\), \(M_{\rm c,max}=6.3\times 10^{4}\,{\rm M}_{\odot}\), and \(M_{\rm c}^{*}=100\,{\rm M}_{\odot}\). The constants \(k_{1}\) and \(k_{2}\) are obtained from the continuity at \(M_{\rm c}^{*}\) and from the normalization condition \(\int_{M_{\rm c,min}}^{M_{\rm c,max}}f_{\rm c}(M_{\rm c})dM_{\rm c}=1\).
Due to strong stellar light extinction in the Galactic plane, the spatial distribution of SC is known with sufficient accuracy only in the solar neighborhood (for a distance \(\lesssim 2\,\)kpc from the Sun [20]). As a consequence \(\rho_{\rm c}\) should be derived from some other proxy. Here we use the distribution of pulsars as derived by [22], which reads
\[\rho_{c}(r)=\left(\frac{r+r_{\odot}}{2r_{\odot}}\right)^{1.64}\exp\left[-4.01 \ \frac{r-r_{\odot}}{r_{\odot}+0.55\rm kpc}\right] \tag{11}\]
where \(r_{\odot}=8.5\,\)kpc is the Sun position. On top of the radial distribution, we also account for the distribution inside the Galactic spiral arms, using the same procedure that [23] adopted to evaluate the distribution of SNRs. The spiral structure is realized by choosing a galactocentric distance \(r\) from Eq.(11) and then by choosing an arm at random. The polar angle is then determined so that the cluster lies in the centroid of the arm. The actual position of the SC is computed by applying a correction to the galactocentric distance, drawn from a normal distribution centered at zero with a standard deviation \(0.07\,r\). Figure 1 shows the result of a single realization of the SC population with an age younger than 3 Myr, in terms of radial distribution from the GC and position in the Galactic disk. The total number of clusters turns out to be \(\simeq 300\).
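A minimal population-synthesis sketch combining Eq. (10) and Eq. (11). Spiral-arm placement and the \(0.07\,r\) jitter are omitted for brevity, and the cluster count is fixed to the \(\simeq 300\) of the realization above rather than drawn from the CFR:

```python
import numpy as np

rng = np.random.default_rng(1)
R_SUN_KPC = 8.5

def sample_cimf(n, m_min=2.5, m_star=100.0, m_max=6.3e4):
    """Draw cluster masses from the broken power-law CIMF of Eq. (10)."""
    m = np.logspace(np.log10(m_min), np.log10(m_max), 2000)
    pdf = np.where(m < m_star, m**-1.63, m_star**-0.39 * m**-1.24)  # continuous at m_star
    cdf = np.cumsum(pdf * np.gradient(m))
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, m)           # inverse-CDF sampling

def sample_radius(n, r_disk=15.0):
    """Draw galactocentric radii from Eq. (11), weighted by r (surface integral)."""
    rho = lambda r: ((r + R_SUN_KPC) / (2 * R_SUN_KPC))**1.64 * \
                    np.exp(-4.01 * (r - R_SUN_KPC) / (R_SUN_KPC + 0.55))
    out = np.empty(0)
    while out.size < n:                               # rejection sampling
        r = rng.uniform(0.0, r_disk, 4 * n)
        out = np.concatenate([out, r[rng.random(4 * n) < r * rho(r) / 20.0]])
    return out[:n]

n_cl = 300                                            # clusters younger than 3 Myr
masses, radii = sample_cimf(n_cl), sample_radius(n_cl)
ages = rng.uniform(0.0, 3.0, n_cl)                    # Myr, constant formation rate
```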
## 4 Cosmic ray acceleration
Following [8], we assume that the acceleration of particles only occurs at the WTS developed by the SC wind. The (relativistic) energy distribution function of accelerated particles located at the shock position can be written as
\[f_{s}(E)=\xi_{\rm cr}\,\frac{L_{\rm w,c}}{m_{p}c^{2}}\frac{1}{\Lambda_{p}} \left(\frac{E}{m_{p}c^{2}}\right)^{-s}\ e^{-\Gamma(E)}\,, \tag{12}\]
Figure 1: Distribution of a single realization of stellar cluster population with age \(<3\,\)Myr, as a function of Galactocentric radius (a) and in the Galactic plane (b). The solid line in (a) shows the pulsar distribution from Eq.(11) for comparison.
where the normalization constant \(\Lambda_{p}\) is defined such that the CR luminosity of the system is equal to \(\xi_{\rm cr}\) times the wind luminosity, i.e. \(L_{\rm cr}\equiv\int Ef_{s}(E)u_{2}\,dE=\xi_{\rm cr}L_{c,w}\), \(u_{2}\) being the downstream wind speed. The solution in Eq. (12) has the typical power-law term \(\propto E^{-s}\), found for the case of plane parallel shocks, plus an additional exponential function which accounts for the effects due to the spherical geometry and to the escape of particles from the bubble boundary, which determine the maximum energy of the system. The exponential function has a complicated expression which, however, can be approximated by the following formula:
\[e^{-\Gamma(E)}\simeq\left[1+A(E/E_{\rm max})^{b}\right]\ e^{-k(E/E_{\rm max})^ {c}}\;. \tag{13}\]
where \(A\), \(b\), \(k\) and \(c\) are fitting parameters, while \(E_{\rm max}\) is the nominal maximum energy defined by the condition that the upstream diffusion length is equal to the shock radius, i.e. \(D_{1}(E_{\rm max})/u_{1}=R_{s}\), \(D_{1}\) being the upstream diffusion coefficient. The diffusion properties inside the bubble represent the most uncertain ingredient of the system. Again following [8] we parameterise the diffusion coefficient as \(D=(v/3)\,r_{L}^{\delta}\,L_{c}^{1-\delta}\), where \(r_{L}\) is the Larmor radius, while \(L_{c}\simeq 1\,\)pc is the coherence length-scale of the magnetic field, assumed to be of the order of the SC core radius. The exponent \(\delta\) is equal to 1/3, 1/2 and 1 for Kolmogorov, Kraichnan and Bohm-like diffusion, respectively. Finally, the magnetic field, \(\delta B\), used to evaluate the Larmor radius, is estimated assuming that a fraction \(\eta_{B}\) of the wind luminosity at the shock is converted into magnetic pressure, namely \((\delta B^{2}/4\pi)\,4\pi R_{s}^{2}v_{w}=\eta_{B}\,\dot{M}\,v_{w}^{2}/2\). \(\eta_{B}\) is expected to be of the order of a few percent.
On very general grounds, the spectral slope of accelerated particles is determined by the effective compression ratio at the shock, \(\sigma\), which includes the velocity of the scattering turbulence. Using hybrid simulations, it has been recently shown [24] that the downstream turbulence is, in general, more effective than the upstream one in determining the slope; hence here we will include only such an effect. In a parametric form, we can write the mean velocity of the waves downstream as \(\vec{v}_{A,2}=\chi_{A}\sqrt{11/8}\,\eta_{B}^{1/2}\,v_{w}\), where \(v_{A}\) is the Alfven speed and \(\chi_{A}=0\) for waves that move symmetrically in all directions. The compression ratio is then written as
\[\sigma=\frac{u_{1}}{u_{2}+v_{A,2}}=\frac{u_{1}}{u_{2}+\sqrt{11/8\,\eta_{B}}\,\chi_{A}\,u_{1}}\;. \tag{14}\]
The value of the parameter \(\chi_{A}\) obtained from numerical simulations is of the order of a few tens of percent. However, in such simulations the magnetic field amplification is only due to the CR streaming, while in the case of stellar winds the magnetic field is more probably determined by MHD instabilities. Hence it is not clear whether the results by [24] can be straightforwardly applied to our case. As a consequence, we will take \(\chi_{A}\) as a free parameter.
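The impact of \(\chi_{A}\) on the slope can be sketched as follows, assuming a strong shock (\(u_{2}=u_{1}/4\)) and the relativistic test-particle relation \(s=(\sigma+2)/(\sigma-1)\); the cutoff parameters \(A,b,k,c\) below are placeholders, not the fitted values of [8]:

```python
import numpy as np

def slope(chi_a, eta_b):
    """Energy slope s of f(E) ~ E^-s from the compression ratio of Eq. (14)."""
    sigma = 1.0 / (0.25 + np.sqrt(11.0 / 8.0 * eta_b) * chi_a)  # u2 = u1/4
    return (sigma + 2.0) / (sigma - 1.0)

def cutoff_shape(e, e_max, A=1.0, b=2.0, k=1.0, c=1.0):
    """Suppression factor of Eq. (13); A, b, k, c are placeholder values."""
    x = e / e_max
    return (1.0 + A * x**b) * np.exp(-k * x**c)

print(slope(0.0, 0.05))   # 2.0: plane-shock limit ([8] find ~2.03 with geometry)
print(slope(0.1, 0.05))   # ~2.15, close to the s = 2.18 case quoted below
```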
Now we have all the ingredients to evaluate the CR flux produced by SCs. However, one last caveat needs to be addressed. The instantaneous CR luminosity is formally obtained from the particle flux escaping from the wind bubble, which reads \(\phi_{\rm esc}=4\pi R_{b}^{2}\ [D\partial f/\partial R]_{R=R_{b}}\), and the total amount of CRs injected by a single SC should be given by the integral over its lifetime, \(\int\phi_{\rm esc}dT\). However, one can easily realize that such a contribution is always negligible with respect to the amount of particles stored inside the bubble, which is approximately given by \(4\pi/3f_{s}R_{b}^{3}\). This apparent inconsistency stems from the fact that the solution provided by [8] is stationary and does not account for the time evolution of the wind bubble after the adiabatic phase, when the bubble will fade out, releasing all CRs stored during the acceleration phase. This issue can be solved
replacing the escaping flux with the flux injected inside the bubble, i.e. \(\phi_{s}=4\pi R_{s}^{2}\,u_{2}f_{s}\). [8] have shown that \(\phi_{\rm esc}\) and \(\phi_{s}\) differ only slightly at energies \(\gtrsim E_{\rm max}\).
## 5 Results and discussion
Using the approximation provided in Eq.(9) we can write the CR flux at the present time \(T\) injected by the entire population of SC as follows:
\[Q_{\rm SC}(E,T)\simeq\bar{\psi}\,\int_{0}^{T}\!\!\!\int_{M_{c,\rm min}}^{M_{c, \rm max}}f_{c}\,(M_{c},t)\;\phi_{s}(M_{c},t)\;dt\;dM\times\int_{0}^{R_{\rm disk }}\rho_{c}(r)\;dr\;. \tag{15}\]
In the present work, rather than performing the analytical integral, we proceed by generating a synthetic population of SCs and then summing up the contributions of all individual clusters. In Figure 2 we show the CR spectrum injected by the synthetic population shown in Figure 1. Two different cases are considered: \(\chi_{A}=0\) and \(\chi_{A}=0.1\). The CR acceleration efficiency and the magnetic amplification efficiency are fixed to \(\xi_{\rm cr}=0.05\) and \(\eta_{B}=0.05\). The CR spectrum is shown for three different assumptions on the diffusion coefficient (Bohm, Kraichnan and Kolmogorov) and is compared with the spectrum of CRs injected by SNRs. Notice that all Figures only show the proton CR component. Heavier elements are not discussed here. One can see that for Bohm diffusion, \(E_{\rm max}\) reaches PeV energies, and the SC contribution dominates the CR spectrum above \(\sim 100\) TeV. The case with \(\chi_{A}=0\) shows a harder spectrum (\(s=2.03\)) than the one with \(\chi_{A}=0.1\) (\(s=2.18\)).
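A sketch of how the population sum of Eq. (15) can be assembled from the previous snippets (it reuses `cutoff_shape`); the single-cluster entry is illustrative:

```python
import numpy as np

EV_PER_ERG = 6.242e11

def q_sc(E, clusters, xi_cr=0.05):
    """Total injected proton spectrum, Eq. (15), summed over a synthetic
    population; each cluster dict carries its wind luminosity L_w (erg/s),
    maximum energy E_max (eV) and slope s from the previous sketches."""
    mp_c2 = 938.272e6                                  # proton rest energy, eV
    total = np.zeros_like(E)
    for cl in clusters:
        f = (E / mp_c2)**(-cl["s"]) * cutoff_shape(E, cl["E_max"])
        # normalise so that int E Q dE = xi_cr * L_w for this cluster
        norm = xi_cr * cl["L_w"] * EV_PER_ERG / np.trapz(E * f, E)
        total += norm * f                              # protons / (eV s)
    return total

E = np.logspace(9, 16, 200)                            # 1 GeV - 10 PeV
pop = [{"L_w": 1e37, "E_max": 1e15, "s": 2.03}]        # one illustrative cluster
spectrum = q_sc(E, pop)
```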
Notice that the contribution from SNRs is estimated using some simplifications. We assume that all SNe explode into a uniform medium with density \(0.1\) cm\({}^{-3}\). The kinetic explosion energy is fixed to \(10^{51}\) erg and the ejecta mass to \(5\,\mathrm{M}_{\odot}\). The particle acceleration and escape are then calculated using the model by [26], which includes the magnetic field amplification due to streaming instability (even though in a simplified approach). As one can see from Figure 2, the effective maximum energy produced by SNRs is only \(\simeq 50\) TeV (similar results are obtained using more refined models like [27] for the same SNR parameters).
There are two important aspects to discuss: the relative normalization between the SC and SNR contributions, and the different slopes. The normalization of the CR flux produced by SNRs is obtained taking the Salpeter IMF for the entire Galactic stellar population and normalizing it to a SFR of \(2\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\)[28]. Then we assume that all stars with \(M>8\,\mathrm{M}_{\odot}\) explode as SNe. This approach gives a rate of \(\simeq 1\) SN every century, a factor roughly 2 smaller (but still compatible within the errors) than the one estimated from the combined evidence from external galaxies and from the observation of historical SNe in our Galaxy [29]. Then, we assume that the SNR acceleration efficiency is the same as that of the WTS, namely 5%. These assumptions give a power ratio between SCs and SNRs equal to \(P_{\rm SC}/P_{\rm SNR}\simeq 8\%\). If one accounts for the uncertainties of several parameters, such a value ranges between 1% and 30%. Given the smaller power injected by SCs with respect to SNRs, the possibility that they are responsible for the CR spectrum at PeV energies strongly depends on two quantities: the diffusion coefficient in the bubble and the value of the spectral slope. Concerning the former, only diffusion close to Bohm-like seems able to produce a maximum energy substantially larger than the one produced by SNRs. In addition, the spectral slope needs to be harder than the one produced by SNRs, like the case shown in Figure 2(a). If these two conditions are realized, then SCs could be responsible for the observed CRs at PeV energies. On the other
hand, steeper diffusion and/or a slope similar to (or steeper than) the one produced by SNRs would question the role of SCs, at least in the simplified model presented here.
|
2309.00448 | Account Abstraction, Analysed | Ethereum recently unveiled its upcoming roadmap's \textit{Splurge} phase,
highlighting the integration of
EIP-4337 as a foundational
standard for account abstraction (AA). AA aims to enhance user accessibility
and facilitate the expansion of functionalities. Anticipatedly, the deployment
of AA is poised to attract a broad spectrum of new users and ignite further
innovation in DApps. In this paper, we elucidate the underlying operating
mechanisms of this new concept, as well as provide a review of concurrent
advancements in accounts, wallets, and standards related to its development. We
step further by conducting a preliminary security evaluation to qualitatively
assess the extent of security enhancements achieved through AA updates. | Qin Wang, Shiping Chen | 2023-09-01T13:27:34Z | http://arxiv.org/abs/2309.00448v1 | # Account Abstraction, Analysed
Qin Wang1, Shiping Chen1
1CSIRO Data61, Australia
###### Abstract
Ethereum recently unveiled its upcoming roadmap's _Splurge_ phase, highlighting the integration of EIP-4337 as a foundational standard for account abstraction (AA). AA aims to enhance user accessibility and facilitate the expansion of functionalities. Anticipatedly, the deployment of AA is poised to attract a broad spectrum of new users and ignite further innovation in DApps. In this paper, we elucidate the underlying operating mechanisms of this new concept, as well as provide a review of concurrent advancements in accounts, wallets, and standards related to its development. We step further by conducting a preliminary security evaluation to qualitatively assess the extent of security enhancements achieved through AA updates.
Ethereum, Account Abstraction, EOA
## I Introduction
Accounts within the Ethereum ecosystem [1] serve as the bedrock for asset querying, storage, and transactions, constituting a pivotal element of its infrastructure. However, the present account design poses challenges for Web2 users due to its complexity. Ethereum classifies accounts into two types [2], namely, _externally owned accounts_ (EOA) and _contract accounts_ (CA)1. An EOA is controlled by private keys held by its creators or users. Contract accounts are controlled by code, without the involvement of private keys (Fig.1). The reliance on private keys makes these two account types act in different roles: an EOA can prove the validity of a transaction and trigger state transitions in CAs, but with limitations:
Footnote 1: Across several documents, the term _smart contract account_ has been used interchangeably to denote the same concept within the context of this paper.
* _Expense_: The functioning of the contract wallet necessitates initiation by an EOA, essentially entailing a contract invocation. Every transaction within this process incurs an extra 21,000 Gas cost, which includes charges for ECDSA signature verification, Nonce value verification, and ensuring adequate account balance.
* _Elevated barrier_: EOAs must possess a substantial ether balance to cover Gas expenses (involving management of two separate accounts), or alternatively depend on a Relayer to manage Gas payments, potentially introducing centralization concerns.
* _Fluctuation_: Since fees are denominated in ether (ETH), users are required to hold ether, which exposes them to the potential volatility of its price.
* _User perception_: From a user's perspective, grasping the nuances of gas price, gas limit, and transaction congestion is far from straightforward.
The effort to integrate two distinct types of accounts while preserving their core functionalities has been a longstanding subject of deliberation within Ethereum communities. Two primary technical pathways have come to the forefront.
* EOA delegates control to smart contracts, in which the contract logic can implement the core functionalities of EOA transactions, such as gas payment (e.g., EIP-3074).
* EOA accounts are designed to be armed with several key functionalities from smart contracts (EIP-4337).
The concept of _account abstraction_ (AA), indexed by EIP-4337 [3], aligns with the second approach, aiming to bestow EOA with the programmable functionality akin to CAs. The incorporation of EIP-4337 into the present roadmap [4] signifies Ethereum's definitive stance in these dual directions. By adopting AA updates, Ethereum sidesteps the potential hurdle of transitioning existing users. This is attributed to its implementation on the application layers rather than the consensus layer, which inherently offers robust backward compatibility capabilities. Ethereum transactions are required to emanate from EOAs, which are typically accessed through various wallets like MetaMask, Phantom, and Rainbow. The account abstraction approach ensures the retention of current users while empowering them with an effortless means of engaging with smart contracts. Beyond that, the functionalities elucidated in EIP-4337 can be further applied across Ethereum-compatible blockchains, including platforms such as BNB Chain (formerly BSC), Polygon, Avalanche, Optimism, Arbitrum, and Base (also refer to Sec.III-D).
As of August 2023, a sequence of indicators3 highlights the escalating attention garnered by this impactful concept. The cumulative count of active accounts has reached 739,295, and the aggregate count of successfully executed UserOperations (detailed explanations refer to Sec.III) has reached 1,307,197. In addition, the total number of bundled transactions has
Fig. 1: Ethereum accounts [5]
amounted to 1,197,871. Besides, in August, a remarkable upswing is observed across various market indicators compared to previous months (starting from March 2023, which marks the launch of EIP-4337): The count of monthly active EIP-4337 smart accounts has surged to an impressive 420k (inclusive of all platforms). The earnings from monthly revenues and UserOperation fees have surpassed 24k and 220k respectively. Notably, the monthly tally of successfully executed UserOperations stands at 710k, while the gas expenses covered by paymasters have exceeded $360k USD.
**Our attempts.** Given its nascent stage, only a limited number of studies have delved into this emerging concept. We are aligned with the growing momentum surrounding account abstraction and further expand the boundaries of its research landscape. In particular,
* _Concept refinement_ (Sec.II&III). We conduct a thorough exploration of diverse resources, including academic literature, blogs, forum discussions, and Git repositories. Drawing from such analyses, we offer a coherent and succinct elucidation of the fundamental concepts and operational mechanisms of account abstraction.
* _Security framework_ (Sec.IV). We further delve into the security risks that may exist in AA. Delving into historical contract and account-related vulnerabilities, we construct a framework (Tab.III) to engage in a proper discussion about the security implications of this account paradigm.
* _Further discussion_ (Sec.V). Additionally, we provide more discussion about the principle of account abstraction designs and potential opportunities inspired.
**Security results.** Our security discussion (mainly Sec.IV) led to the following insights. It is evident that integrating AA into current account systems can alleviate a multitude of vulnerabilities in both the contract usage and block creation domains. However, AA's effectiveness remains limited when addressing intrinsic aspects of the Solidity language and design intricacies, such as structural elements, configuration constraints, and verification mechanisms. Upon further analysis, we have identified that AA's strength in fortifying security arises from its decoupling of key account abstraction components. By reallocating functions previously encompassed within accounts, AA can protect against numerous vulnerabilities in Ethereum's application layer: critical functions like gas payment mechanisms are seamlessly transferred to the paymaster, avoiding vulnerabilities related to gas fees; the intricate process of transaction packaging now benefits from the management of the bundler; and complex trading and swapping operations, previously reliant on CEXs or DEXs, are elegantly executed within the same transaction. This decoupling has reduced the threats that were heavily dependent on contract functions, thereby increasing the overall security quotient.
**Available resources.** The Ethereum official documentation [4] has introduced formal documentation detailing its account abstraction concept and roadmap that is accompanied by a suite of relevant standards, encompassing EIP-2771, 4337, 2938, and 3047. Notably, EIP-4337 encapsulates the central concept driving this transition. In a study by Singh et al. [6], an initial exploration into the Ethereum account abstraction is provided, outlining its distinctive features and functional methodologies. Binance Research [7] has released an insightful analysis that delves into recent market trends and noteworthy advancements within this area. Based on these contributions, a range of media sources [8, 9, 10, 11] have embarked on elucidating this intricate concept, offering surrounding insights and explanations. Besides, several studies have initiated the development of on-top solutions aimed at addressing various challenges within the Web3 ecosystem [12], such as address incompatibility [13].
## II Ethereum Accounts
### _Recall Ethereum Account_
**EOA.** EOAs serve the purpose of storing and transferring ether and ERC-20 tokens. An EOA address is derived from a public key and takes the form of a 20-byte hexadecimal identifier (e.g., 0xF57D1D6b84db4053cE452B35B7DB77878dCbdc65). These accounts are managed by a private key (typically represented to the user as a password-protected keystore or seed phrase), held exclusively by the account's owner. Transactions involving EOAs do not rely on code or smart contract logic for their validity. As long as the private key remains known, the account's owner possesses the capability to execute transactions. The transaction's verification is contingent upon the user's signature and nonce.
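For illustration, the address derivation can be sketched as follows, using pycryptodome's keccak; the 64-byte public key below is a hypothetical placeholder:

```python
from Crypto.Hash import keccak   # pycryptodome

def eoa_address(pubkey64: bytes) -> str:
    """EOA address: last 20 bytes of keccak256 over the 64-byte public key
    (the uncompressed key without its 0x04 prefix)."""
    digest = keccak.new(digest_bits=256, data=pubkey64).digest()
    return "0x" + digest[-20:].hex()

# Hypothetical 64-byte public key, for illustration only.
print(eoa_address(bytes(range(64))))
```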
**Contract account.** A CA is a type of account that executes operations based on its pre-programmed logic, thereby enabling the creation of decentralized applications (DApps) and facilitating various functionalities within the Ethereum network. A CA autonomously executes code in response to transactions, potentially modifying the contract's state and exchanging ether or tokens. CAs are assigned unique addresses akin to individual identification numbers, which permit interaction with other accounts. Once a CA is deployed, its code becomes immutable, contributing to transparency and ensuring the integrity of transactions. CAs also offer storage capabilities for manipulating data on the blockchain (Fig.1). These accounts can interact with EOAs and generate events that facilitate communication with other accounts.
**Two types of transaction.** These different accounts give rise to two distinct practical types of transactions for communication: _contract creation_ and _message call_. Contract creation involves the generation of a new smart contract, with the transaction carrying an initialization code segment to define the new contract's properties. This process results in the assignment of a unique address to the newly created contract, which includes both its code and storage within the corresponding account state. Conversely, a message call signifies the modification of a smart contract's state. In this case, the transaction includes input data to update the contract's internal data. A message call does not create a new contract in the world state; instead, it alters the existing contract's state.
### _Challenges in Account Design_
Coming with private keys, an EOA has several fundamental functions, including claiming the _ownership_ of the account and _signing_ transactions to authorize them. However, this may present a number of significant concerns:
* _Risk of private key loss._ Users who lose their private keys (due to loss or hacking) would face the irreversible loss of all their assets.
* _Restricted signature options._ The native protocol exclusively supports ECDSA signature and verification algorithms for transaction validation.
* _Single signer authority._ The absence of inherent multi-signature capability (which can only be achieved through smart contracts) means that a single signature is all that's required to execute any operation.
### _Ethereum Roadmap_
Account abstraction is a crucial functional improvement outlined in the sixth phase of Ethereum's roadmap (Tab.I). This upgrade involves a series of smaller refinements and adjustments aimed at ensuring seamless network operations subsequent to the implementation of other upgrades.
## III Account Abstraction
### _Previous Attempts_
A series of continuous efforts has been made to make existing accounts extensible with more functions. We list the related standards in Tab.II.
In Ethereum's early phases, attempts were made to distinguish between EOA and CA by introducing various new transaction types. EIP-86/208 leverage the differentiation between the two account types to design customizable and collision-resistant contract addresses. This, in turn, led to the implementation of EIP-1014 and EIP-2470. EIP-859 introduced transaction-initiated contract deployment: it allowed for on-the-spot deployment of contract addresses if none existed, forming the basis for the fundamental functionality of EIP-4337. EIP-2718 ushered in compatibility for future Ethereum iterations with any newly suggested transaction types. EIP-2938 systematically cataloged various advantages of contract accounts, encompassing recovery, key rotation, customized identity-verification algorithms, and meta-transactions, which collectively entrenched the role of contract accounts.
However, these efforts resulted in unwieldy complexity. Altering transaction types requires concurrent modifications to the underlying signature verification algorithms. This encompasses considerations like miner acceptance of new types, ensuring incentives are not lower than regular transactions to encourage verification, and addressing concerns about account address management and conflicts. Unfortunately, this approach lacked both backward and forward compatibility. Ultimately, these attempts have coalesced around two primary directions as stated in our Sec.I.
### _Account Abstraction_
Account abstraction introduces a suite of functions designed to personalize fundamental elements that enable smart contract functionalities. These customizations encompass various aspects, including user operations, fee payment methods, and transaction packing mechanisms.
_Overview._ The workflow is summarized as follows: (1) users initiate interactions with the frontend abstraction (also referred to as application) layer, where their actions are translated into underlying transactions (via UserOperation); (2) bundlers aggregate multiple UserOperation instances, consolidating them into a single transaction that is then transmitted to the EntryPoint contract; (3) within the EntryPoint contract, user signatures are verified and transactions initiated by the abstraction layer are processed; (4) the entries logged in the UserOperation prompt the activation of the relevant smart contracts, initiating state transitions tailored to their specific prerequisites; (5) optionally, the user's operations can be aggregated and authenticated using BLS signatures. Meanwhile, transaction fees for user actions are managed by the Paymaster contract. Consequently, on-chain applications interact with user actions in a manner similar to their interaction with standard external accounts.
| EIP | Time | Status | Key |
| --- | --- | --- | --- |
| 101 | 2015 | Stagnant | Serenity currency and crypto abstraction |
| 86 | 2017 | Stagnant | Abstraction of transaction origin and signature |
| 859 | 2018 | Stagnant | Account abstraction for main chain |
| 2718 | 2020 | n/a | Typed transaction envelope |
| 2938 | 2020 | Stagnant | Account abstraction |
| 3607 | 2021 | n/a | Reject transactions from senders with deployed code |
| 5003 | 2022 | Stagnant | Insert code into EOAs with AUTHUSURP |
| 5189 | 2022 | Stagnant | Account abstraction via endorsed operations |
| 2771 | 2020 | Final | Secure protocol for native meta transactions |
| 3074 | 2020 | Review | Allow EOAs to delegate control to a contract |
| 6900 | 2021 | Draft | Modular smart contract accounts and plugins |
| 4337 | 2021 | Draft | Account abstraction using alt mempool |
| 6551 | 2023 | Review | Non-fungible token bound accounts |

TABLE II: AA-related Ethereum standards (upper group: early, mostly stagnant attempts; lower group: the standards surrounding the current AA transition)
Fig. 2: Account abstraction
_User operation._ UserOperation serves as a transaction-like entity, representing a user's request realized through transaction events. Within a UserOperation, multiple requests and supplementary data can be encapsulated, facilitating the execution of transactions that a CA can perform. While sharing similarities with conventional transactions, UserOperations exhibit several distinctive characteristics; a sketch of the field structure follows the list below.
* _Additional fields._ UserOperation includes new fields in the transaction structure, e.g., EntryPoint, Bundler, Paymaster and Aggregator.
* _Alternate mempool._ UserOperations are sent to a separate mempool, where bundlers can package them into transactions that get included in a block.
* _Intent-based._ Today, transaction inputs are specific, e.g., swap 2k USDC for 1.2 ETH. In contrast, a UserOperation can be decorated with additional metadata to be more outcome-focused, e.g., trade 2k USDC for the largest possible amount of ETH.
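To make the preceding description concrete, the sketch below models the main EIP-4337 UserOperation fields as a Python dataclass. The field names follow the EIP; the class itself is only an illustration, not part of any official SDK.

```python
from dataclasses import dataclass

@dataclass
class UserOperation:
    """Simplified sketch of the EIP-4337 UserOperation structure."""
    sender: str                    # contract account issuing the operation
    nonce: int                     # anti-replay counter checked on-chain
    init_code: bytes               # deploys the account if it does not exist yet
    call_data: bytes               # the call the account should execute
    call_gas_limit: int            # gas for the main execution call
    verification_gas_limit: int    # gas for the verification step
    pre_verification_gas: int      # compensates the bundler's fixed overhead
    max_fee_per_gas: int           # EIP-1559-style fee cap
    max_priority_fee_per_gas: int  # tip for inclusion
    paymaster_and_data: bytes      # empty if the account pays its own fees
    signature: bytes               # opaque to the protocol; the account
                                   # contract decides how to validate it
```

Unlike a regular transaction, the signature field is not interpreted by the protocol itself: each account contract defines its own validation logic, which is what enables multi-signature schemes, alternative curves, and similar customizations.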
_Bundler._ Bundlers constitute a pivotal component of the infrastructure necessary for the realization of EIP-4337. These entities are a specific type of Ethereum node dedicated to facilitating the processing of UserOperations. UserOperations are directed to a network of bundlers, which diligently monitor the mempool. Bundlers efficiently consolidate multiple UserOperations into a single transaction, packaging and submitting them to the blockchain on behalf of users. In exchange for this service, bundlers receive compensation for their efforts in executing these tasks.
_EntryPoint._ It is a singleton smart contract that undertakes the verification and execution of UserOperations; a sketch of its two loops follows the list below.
* _Verification loop._ The process checks the account balance: it assesses whether the wallet possesses sufficient funds to cover the maximum potential gas expenditure (derived from the gas fields in the UserOp). Transactions lacking adequate funds are declined.
* _Execution loop._ Upon verification, the transaction is executed. Correspondingly, an amount is deducted from the CA to reimburse the bundler; this reimbursement covers the gas expenses.
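The two-phase structure (validate everything first, then execute) follows EIP-4337's handleOps entry point; the minimal Python sketch below, reusing the UserOperation sketch above, is our own illustrative model, and the toy Account class and signature check are assumptions, not the actual contract interface.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    deposit: int = 0                       # wei staked at the EntryPoint
    def validate_user_op(self, op):        # toy check standing in for the
        if op.signature != b"ok":          # account's own validation logic
            raise ValueError("signature rejected")

@dataclass
class EntryPoint:
    accounts: dict = field(default_factory=dict)

    def handle_ops(self, user_ops, beneficiary):
        """Sketch of the two loops: verify every op first, then execute."""
        verified = []
        # Verification loop: each op must prefund its worst-case gas cost.
        for op in user_ops:
            payer = self.accounts[op.sender]   # or a paymaster, if present
            max_cost = op.max_fee_per_gas * (op.call_gas_limit
                       + op.verification_gas_limit + op.pre_verification_gas)
            if payer.deposit < max_cost:
                raise ValueError("insufficient prefund; operation declined")
            payer.validate_user_op(op)
            verified.append((op, payer))
        # Execution loop: run the call, reimburse the bundler (beneficiary).
        for op, payer in verified:
            gas_used = self.execute(op)        # gas actually consumed
            fee = gas_used * op.max_fee_per_gas
            payer.deposit -= fee
            self.accounts[beneficiary].deposit += fee

    def execute(self, op):
        return op.call_gas_limit // 2          # placeholder execution cost
```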
_Aggregator (optional)._ In scenarios where multiple messages are signed using distinct keys, an aggregator plays a role in producing an authenticated signature. The signature inherently validates the authenticity of all individual signatures within the collection. For accounts associated with particular signature types that enable aggregation (such as BLS), the verification of the account's signature is deferred to an external contract. This external contract, in turn, is responsible for verifying a solitary signature across an entire bundle, thus streamlining the validation process.
Through the consolidation of multiple signatures into a singular entity, aggregators contribute to the efficient management of data availability. This allows for the validation of numerous bundled UserOperations in a single consolidated process.
_Paymaster (optional)._ The Paymaster takes charge of implementing gas payment policies, offering developers the capability to offer their end users gas-free interactions through sponsored or ERC-20 gas policies. These policies introduce flexibility in _how gas is paid_, such as the choice of currency (e.g., native ETH or ERC-20 tokens), and _by whom it is paid_. This effectively eliminates the need for users to possess native blockchain tokens to engage with the blockchain.
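As an illustration of such a gas policy, the sketch below shows an ERC-20 paymaster that sponsors an operation in ETH and settles with the user in tokens. The validate/post-op split mirrors the EIP-4337 paymaster hooks (validatePaymasterUserOp and postOp); everything else, including the exchange-rate handling and the token interface, is a simplifying assumption of ours.

```python
class ERC20Paymaster:
    """Sketch of an ERC-20 gas policy: pay the network in ETH, charge the
    user in a token at a quoted rate (all names illustrative)."""
    def __init__(self, token, eth_deposit, tokens_per_wei):
        self.token = token
        self.deposit = eth_deposit     # ETH staked at the EntryPoint
        self.rate = tokens_per_wei     # quoted token/ETH exchange rate

    def validate_user_op(self, op, max_eth_cost):
        # Sponsor only if the stake covers the worst case and the user
        # holds enough tokens to repay us afterwards.
        token_cost = int(max_eth_cost * self.rate)
        return (self.deposit >= max_eth_cost and
                self.token.balance_of(op.sender) >= token_cost)

    def post_op(self, op, actual_eth_cost):
        # Called by the EntryPoint after execution: settle in tokens.
        self.deposit -= actual_eth_cost
        self.token.transfer_from(op.sender, "paymaster",
                                 int(actual_eth_cost * self.rate))
```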
### _AA Benefits_
Based on its design, AA brings a series of direct benefits that previous CA and EOA do not have.
* _Transaction batching._ Capitalizing on the capabilities of the EntryPoint contract, the consolidation of multiple transactions can guard against frontrunning attacks and potential MEV threats.
* _Multisig._ Departing from reliance solely on a single signer (the EOA private key holder), AA introduces the capability of multi-signatures sourced from a spectrum of entities.
* _Gas abstraction._ ERC-20 tokens are accepted as valid payment for transaction fees, removing the restriction that fees be paid in ether (ETH).
* _Social recovery._ AA is engineered to encompass a mechanism for recovery through a social network of trusted associates. This mechanism offers an alternative route to regain access to the wallet.
* _Custom module._ Users are empowered with the ability to craft bespoke modules catering to specific functions.
### _AA Surroundings_
_Supportive standards._ EIP-2771 is designed to enable gasless transactions for users by introducing the concept of meta-transactions. It permits third parties to cover a user's gas expenses without necessitating modifications to Ethereum. The standard involves the following key steps: the transaction signer signs and dispatches requests to a gas relay; the gas relay, operating off-chain, receives these signed requests and subsidizes the gas costs, transforming them into valid transactions that are then routed through a trusted forwarder; this trusted forwarder is a contract entrusted by recipients to accurately verify signatures and nonces.
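A minimal sketch of this flow is given below. Appending the signer's address to the calldata and recovering it from the last 20 bytes is the actual EIP-2771 mechanism; the toy signature scheme, the request dictionary and the helper names are illustrative placeholders, not real cryptography.

```python
import hashlib

def recover_signer(request, signature):
    """Stand-in for ECDSA public-key recovery (illustrative only):
    accept the signature iff it is the hash of the request."""
    expected = hashlib.sha256(repr(request).encode()).hexdigest()
    return request["from"] if signature == expected else None

def forward(request, signature, forwarder_nonces, recipient_call):
    """Sketch of a trusted forwarder relaying an EIP-2771 meta-transaction."""
    signer = recover_signer(request, signature)
    if signer is None or request["nonce"] != forwarder_nonces.get(signer, 0):
        raise ValueError("bad signature or nonce: possible replay")
    forwarder_nonces[signer] = request["nonce"] + 1
    # EIP-2771: append the original sender to the calldata, so the recipient
    # can read it even though msg.sender is the forwarder paying the gas.
    return recipient_call(request["data"] + bytes.fromhex(signer[2:]))

def effective_sender(calldata, msg_sender, trusted_forwarder):
    """How an EIP-2771 recipient recovers the real sender."""
    if msg_sender == trusted_forwarder:
        return "0x" + calldata[-20:].hex()   # last 20 bytes of calldata
    return msg_sender

req = {"from": "0x" + "ab" * 20, "nonce": 0, "data": b"\x01\x02"}
sig = hashlib.sha256(repr(req).encode()).hexdigest()
out = forward(req, sig, {}, recipient_call=lambda cd: cd)
print(effective_sender(out, "forwarder", "forwarder"))  # the original signer
```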
EIP-3074 is designed to enable users to delegate control of their EOA to a contract, a parallel approach discussed in Sec.I. This feature allows any EOA to function as a smart contract wallet without requiring the deployment of a separate contract. As part of this standard, two new opcodes are introduced to the Ethereum Virtual Machine (EVM): AUTH and AUTHCALL. These opcodes facilitate proper invocation from smart contracts. Although this standard has not yet been incorporated into Ethereum's account infrastructure, it still provides valuable insights.
EIP-6900 introduces a standard aimed at synchronizing the implementation efforts of CA developers and plugin developers. It outlines the creation of a modular CA with the capability to accommodate all plugins adhering to standard protocols. In this standard, CAs are rendered as fully programmable accounts capable of housing assets within on-chain smart contracts. In parallel, account plugins serve as interfaces for
smart contracts that facilitate the integration of compositional logic within CAs. This approach enhances data portability for users and alleviates the need for plugin developers to target specific wallet implementations.
EIP-6551 introduces a token standard primarily focused on NFTs [14]. This standard enables the binding of one or multiple Ethereum accounts to an NFT. By doing so, NFTs gain the capability to possess assets and engage with applications, all without necessitating modifications to current smart contracts or infrastructure. These accounts, which are associated with tokens, seamlessly integrate with existing on-chain asset standards and have the potential to encompass forthcoming asset standards. Ethereum's abstract account model offers an enhanced approach to implementing this standard, exemplified by the Web3 game project Sapienz [15].
_Supportive platforms._ In light of the forthcoming updates to Ethereum, a diverse range of competitive blockchain platforms are also stepping up their game. Notable contenders such as BNB Chain, Avalanche, Optimism, Polygon, StarkNet, and zkSync have all unveiled their respective initiatives for implementing crucial updates.
_Supportive wallets._ Complementing the infrastructural advancements, an equally pivotal component lies within the realm of wallets. Serving as the hub where users manage, utilize, and safeguard their accounts, wallets play a pivotal role. Multiple wallet service providers, including MetaMask, Argent, Beam, Safe, Trust Wallet, Braavos, and OKX Wallet, have expressed their commitment to seamlessly incorporate these anticipated updates into their products.
### _Real Case Adoption_
_Visa._ As demonstrated in [16], Visa is actively exploring a solution that leverages the Paymaster smart contract. This solution aims to abstract the core interactions of blockchain technology and enhance the user payment experience by introducing a self-custodial smart contract wallet. The primary goal is to streamline the process for users conducting transactions within their wallets. One of the intriguing aspects is that users can now use any token to pay for gas fees, and these gas costs will be covered by the Paymaster. In more detail, users have the capability to make payments to the Paymaster contract using USDC. Subsequently, the Paymaster contract converts the USDC into ETH and facilitates the transaction on the blockchain network. After a period of on-chain processing, the recipient of the transaction can receive an equivalent amount converted back to USDC. As of now, this particular use case is still at the proof-of-concept stage.
_Web3 wallet._ By seamlessly integrating AA into existing EOA wallets, the accounts are elevated to the status of smart contract wallets, enriched with programmable logic and functionalities. For instance, Gnosis Safe [17] brings forth a _multi-signature_ approach, necessitating multiple authorized entities to provide signatures for the same account, rather than relying solely on individual private keys. Argent [18] introduces the concept of _social recovery_, enabling users to recover lost or forgotten private keys; it allows users to utilize email addresses and phone numbers for offline recovery, introducing a familiar two-factor authentication mechanism. Users of Braavos [19] can access their wallets using the biometric features of their smartphones, such as facial or fingerprint recognition, adding an extra layer of security.
## IV Security Analysis
We explore the potential security issues that may exist in the new account format after the AA update. We recall a series of account threats and related smart contract risks, and develop a framework to evaluate potential risks in account abstraction.
**Focused scope.** Conventional analyses of Ethereum often partition it into four distinct layers (e.g., [20]): the application layer (including accounts, smart contracts, and the EVM), the data layer (including transactions, blocks, and events), the consensus layer (involving PoX mechanisms and incentives), and the network layer (comprising node discovery, message propagation, and verification). Given the central theme of this work around accounts, we deliberately narrow our purview to the _application layer_, exploring the potential vulnerabilities targeting accounts, smart contracts, and the related Solidity language. Additionally, our coverage extends to certain elements within the transaction and block layers.
**Framework design** (Tab.III). Inspired by a series of elegant investigations [20, 21, 22, 23, 24, 25, 26, 27], we distill a succinct overview of vulnerabilities present within the current Ethereum application layer. These vulnerabilities can be broadly attributed to three factors: design pitfalls, vulnerabilities intrinsic to the Solidity programming language, and errors stemming from contract programming. Within each of these categories, we identify and elaborate upon sub-vulnerabilities (**Vulnerability**) that manifest across diverse functions and processes (**CauseFromWhere**). Our discussion then delves into the potential of account abstraction to alleviate these vulnerabilities. We examine the feasibility of employing account abstraction to mitigate these issues and explore strategies for implementation that could effectively address these concerns (**byWhichFunc**).
**Analyses.** Our discussions are categorized into two facets, contingent upon the effectiveness of AA updates.
_Transaction disorder._ This vulnerability centers around a concurrency issue where the blockchain's future state is contingent upon the sequence of transaction execution, a process influenced by miners who group transactions into blocks based on incentives. The design of AA effectively mitigates this problem by employing bundlers to manage and submit transactions. Unlike miners or validators, bundlers have a limited scope of impacting transaction orders, as they handle a relatively small number of transactions.
_Timestamp manipulation._ This vulnerability happens when a contract utilizes block.timestamp in crucial operations or as a means of generating randomness. It can be exploited by a malicious miner who has the capability to manipulate the value of block.timestamp (evidenced by [28]). The introduction of account abstraction helps address this concern, as AA reduces reliance on block.timestamp and shifts focus to bundlers,
which are better suited for managing transactions and ensuring more controlled and secure interactions.
_Short address._ The vulnerability stems from the EVM's absence of address-validity checking. In the process of encoding during contract invocation, if the encoded arguments' length is insufficient, the EVM compensates by adding extra zeros to ensure 32 bytes. The AA design, through the involvement of the bundler and the EntryPoint contract, can mitigate this issue by effectively checking the length of Ethereum addresses via msg.data.
_Randomness reliance._ This vulnerability revolves around the manipulation of the seed (a form of randomness) by malicious miners in gambling and lottery contracts that use pseudorandom number generation. AA might not directly enhance the security of random-seed creation. This is because the process of generating random seeds typically involves functions within the contract logic rather than being directly embedded in accounts, which are the primary focus of AA.
_Empty account._ Malicious actors leverage empty accounts (devoid of nonce, balance, code, and storage) to initiate a DoS attack. These empty accounts still necessitate tracking within the Ethereum state trie and result in extended transaction processing times. AA is not equipped to mitigate this vulnerability, as it currently lacks verification mechanisms for empty accounts. However, the issue has been addressed through the implementation of EIP-161.
_Under-priced opcodes._ Ethereum uses the gas mechanism to deter abuse of computing resources, but this vulnerability emerges when the gas cost for resource consumption is improperly set. The exploitation of under-priced opcodes consumes a disproportionate amount of computing resources. This issue is not specifically addressed by the design of account abstraction, as it pertains to the gas pricing and resource allocation mechanisms within Ethereum.
_Stack limit._ Solidity triggers an exception and terminates the call when the EVM call stack reaches its rigid limit of 1,024 frames. Malicious actors could recursively call a contract, causing it to reach the maximum stack depth and fail subsequent external calls. While account abstraction cannot inherently address this specific vulnerability, changes in the Ethereum protocol (e.g., EIP-150) can indirectly impact AA as well as other aspects of the Ethereum ecosystem.
_Unbounded operations._ The vulnerability arises when a contract's execution demands a greater amount of gas than the permitted block gas limit. This occurs due to the inclusion of unbounded operations, such as loops, within the contract. The AA design offers effective countermeasures in two primary ways. Firstly, the Paymaster contract defines a fee policy that establishes clear limitations on gas limits; this mechanism ensures that transactions adhere to predefined gas constraints, preventing situations where execution demands exceed the allowable limit. Secondly, after AA's update, every loop in a smart contract is invoked by an internal account rather than an external account; this setup grants the contract owner the ability to terminate loops if necessary, enhancing control over potential gas consumption issues.
_Inconsistent call returns._ This vulnerability stems from the inconsistency between the return behaviors of two different calling routes: (i) directly referencing the callee contract instance and (ii) using low-level methods like send and call. Route (i) propagates an exception back to the caller, while Route (ii) merely returns false. The inconsistency persists in the current state of the Solidity compiler, lacking rectification. The AA design does not inherently address this issue, but updates or changes in the Solidity compiler or EVM could potentially influence AA as well.
_Pointer unreset._ The vulnerability arises from Solidity's behavior regarding uninitialized compound local variables: when not explicitly initialized, the reference defaults to slot 0 and overwrites content from that point onwards. The AA design does not address this particular language-level issue, but fortunately the vulnerability was rectified by updating the version of Solidity.
_Outdated compiler._ This vulnerability surfaces when a contract is compiled using an outdated compiler that harbors bugs. While the AA design may not directly address this concern, it aligns with the solution: opting for an up-to-date compiler can substantially reduce the associated risks. Keeping the compiler current is a key practice to ensure the soundness and security of the compiled contracts.
| Category | Vulnerability | CauseFromWhere | byWhichFunc |
| --- | --- | --- | --- |
| Design pitfalls | Transaction disorder | BlockCreate | Bundler |
| | Timestamp manipulation | BlockCreate | Bundler |
| | Short address | InputCheck | Bdr/EntryPoint |
| | Randomness reliance | BlockCreate | ✗ |
| | Empty account | StateTrie | ✗ |
| | Under-priced opcodes | GasCost | ✗ |
| | Stack limit | Execution | ✗ |
| Solidity language | Unbounded operations | GasUsage | Paymaster |
| | Inconsistent call returns | Exception | ✗ |
| | Pointer unreset | BlankField | ✗ |
| | Outdated compiler | Compiler | ✗ |
| | Unclear constructor name | Syntax | ✗ |
| | Type casts | Design | ✗ |
| Contract programming | Reentrancy | Dependence | Paymaster |
| | Frozen ether | Dependence | Bundler |
| | Contract upgradeability | Dependence | ✗ |
| | Delegatecall injection | Dependence | ✗ |
| | Unexpected revert | Dependence | ✗ |
| | Manipulated balance | Validation | PyMs/UsrOp |
| | Integer overflow | Validation | ✗ |
| | Insufficient signature | Authentication | Bdr/MultiSig |
| | Self-destruct contract | Authentication | EntryPoint |
| | tx.origin | Authentication | UsrOp/Bdr |
| | Lack of confidentiality | Authentication | ✗ |
| | Erroneous visibility | Authentication | ✗ |

TABLE III: Vulnerabilities in the Ethereum application layer (byWhichFunc: the AA component that mitigates the issue; ✗: not mitigated by the AA updates)
_Unclear constructor name._ The vulnerability stems from the misnaming of the constructor function, inadvertently allowing unauthorized parties to take ownership of the contract. While the AA design cannot directly solve this issue, the problem was mitigated through the introduction of the constructor keyword.
_Type casts._ A contract can invoke another contract's function by directly referencing the callee contract's instance. However, the verification process carried out by the Solidity compiler only assesses whether the invoked function has been declared; it does not extend to verifying child functions that the invoked function inherits or parent functions that trigger it. This loophole can lead to unintended function execution. The AA design does not offer a solution for this vulnerability, as it stems from Solidity's insufficient type system.
_Reentrancy._ The vulnerability arises when an external contract initiates a function within the caller contract before the latter's ongoing execution concludes, essentially manifesting as a cyclic call. This exploitable invocation persists until the caller contract depletes its gas resources. The foundation of this vulnerability can be attributed to two pivotal factors: (i) a contract's decision-making hinges on specific state variables that must be updated prior to invoking another contract; and (ii) the absence of a predefined gas limit for the external call. The introduction of AA can offer a partial solution to this issue. AA's advantages, including reduced gas consumption and the ability to control transactions precisely through the Paymaster contract, contribute to its potential mitigation. Additionally, user operations are treated as distinct inputs through independent transactions, fostering an environment free from dependencies on other contracts. This segregation of operations enhances security by reducing the risk of such cyclic-call attacks.
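A toy Python model of this cyclic call (not Solidity, and deliberately oversimplified) makes the failure mode explicit: because the external call happens before the balance update, a malicious callee can re-enter the withdrawal and drain more than it deposited.

```python
class Vault:
    """Vulnerable pattern: the external call precedes the state update."""
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount > 0:
            who.receive(self, amount)   # external call happens first...
            self.balances[who] = 0      # ...state is updated too late

class Attacker:
    def __init__(self, depth):
        self.depth, self.stolen = depth, 0

    def receive(self, vault, amount):
        self.stolen += amount
        if self.depth > 0:              # re-enter before the balance is zeroed
            self.depth -= 1
            vault.withdraw(self)

vault, thief = Vault(), Attacker(depth=2)
vault.deposit(thief, 100)
vault.withdraw(thief)
print(thief.stolen)  # 300: the same 100 balance was paid out three times
```

Swapping the two lines in withdraw (update the balance first, call second), the classic checks-effects-interactions pattern, removes the exploit.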
_Frozen ether._ The vulnerability arises from a scenario in which users can deposit funds into their contract accounts but are unable to later access or spend those deposited funds. Two primary factors contribute to this vulnerability: (i) contracts not furnishing a function for the expenditure of funds, and (ii) the inadvertent or deliberate termination of the callee contract. Account abstraction can partially mitigate this vulnerability thanks to the transaction batching performed by bundlers. With each user-requested operation, such as depositing funds into a smart contract, the bundler gains an enhanced capability to inspect the contract before sending tokens. This level of examination surpasses that of a standard user, and the bundler can help ensure that operations transpire within the confines of the originating contract's security.
_Contract upgradeability._ Contract upgrading can be approached through two methods: (i) dividing a contract into a proxy contract (non-updatable) and a logic contract (updatable); (ii) employing a registry contract to maintain a record of the updated contracts. While the design of account abstraction is separate from these complexities, its contract functionalities align with the principles of upgradability.
_Delegatecall injection._ The EVM offers an opcode called delegatecall, allowing the bytecode of a callee contract to be executed in the storage context of the caller contract. The vulnerability arises due to the possibility that a callee contract can modify state variables within the caller contract. Account abstraction does not inherently provide a distinct method to mitigate this particular vulnerability. Fortunately, it can be addressed by a straightforward means: declaring the callee contract as a (stateless) library.
_Unexpected revert._ This vulnerability stems from the situation where a callee contract causes the execution of a caller contract to be reverted. The design of account abstraction does not offer a significantly improved mitigation for this issue, as it fundamentally stems from contract programming practices. Consequently, this vulnerability might still be pertinent even within AA's contract functionalities.
_Manipulated balance._ This vulnerability surfaces when a contract's decision-making relies on the values of this.balance or address(this).balance. These values can be manipulated by an attacker to gain unauthorized access to funds. The design update introduced by AA can potentially influence this vulnerability. In AA, all account addresses are explicitly bound with balances, unlike the implicit nature of the former design. This alteration could impact the dynamics of this vulnerability.
_Integer overflow._ This vulnerability arises when the result of an arithmetic operation surpasses the range of a Solidity data type. Regrettably, neither the Solidity compiler nor the EVM incorporates mechanisms to identify integer overflow or underflow. Account abstraction design limitations prevent it from overcoming these challenges. However, a solution to mitigate this vulnerability is within reach through the utilization of the SafeMath library, which has already proven effective in handling such arithmetic issues.
_Insufficient signature._ The vulnerability surfaces when a sender channels funds to multiple recipients via a proxy contract, bypassing individual transactions. In this mechanism, the proxy contract evaluates the authenticity of digital signatures from senders. If these signatures lack essential data (such as nonce or proxy contract address), a malicious recipient can exploit the situation to replay the message repeatedly, facilitating the withdrawal of surplus payments. In AA's design, the role of the bundler mirrors that of a proxy agent. The bundler manages the accumulation of multiple transactions, functioning as a trusted entity. This approach not only aligns with the principles of a proxy contract but also serves as a safeguard against replay attacks. Further, the adoption of multi-sig techniques in AA can also contribute to the mitigation of issues.
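The sketch below illustrates why the signed message must include a nonce (and, in practice, the proxy contract address). It is a plain-Python illustration of the replay logic only; the hashing stands in for real signature verification.

```python
import hashlib

def digest(to, amount, nonce=None):
    """Message digest a proxy would verify before paying out."""
    payload = f"{to}:{amount}" + (f":{nonce}" if nonce is not None else "")
    return hashlib.sha256(payload.encode()).hexdigest()

processed = set()

def pay_out(d):
    if d in processed:
        return "rejected: replay detected"
    processed.add(d)
    return "paid"

# Without a nonce, every replay of the same signed message is identical,
# so a proxy that fails to deduplicate pays the recipient repeatedly.
d = digest("0xRecipient", 10)
print(pay_out(d), "|", pay_out(d))   # paid | rejected: replay detected

# With a nonce, each authorization is unique and single-use by design,
# while legitimate repeat payments simply carry fresh nonces.
print(pay_out(digest("0xRecipient", 10, nonce=0)))  # paid
print(pay_out(digest("0xRecipient", 10, nonce=1)))  # paid (a new payment)
```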
_Self-destruct contract._ When the owner of a contract or a third party utilizes the self-destruct method to terminate a contract, the contract's bytecode and storage are erased. The vulnerability emerges from inadequate authentication mechanisms embedded within the contract. AA can mitigate this issue by introducing multi-factor authentication, a more robust approach that mandates the approval of multiple parties before a self-destruct operation is executed. By implementing this safeguard, AA fortifies the authentication process and reduces the likelihood of unauthorized termination.
_tx.origin._ The problem originates from the utilization of tx.origin, a global variable within Solidity that identifies the original EOA responsible for triggering the transaction. The vulnerability arises when a contract employs tx.origin for authorization, thereby exposing it to phishing attacks. AA addresses this concern through its integration of EOAs with contract invocation: AA replaces tx.origin with msg.sender in default authentication, a shift that bolsters security against vulnerabilities stemming from tx.origin usage.
_Wrong payment._ This security issue arises due to the absence of identity verification when a caller invokes a function to transfer Ether to an arbitrary address. As with the mitigation strategies employed for the balance-related vulnerabilities above, the authentication mechanisms and the role of the bundler within account abstraction can help alleviate this problem, ensuring that only authorized entities may initiate such actions and thwarting unauthorized withdrawals.
_Lack of confidentiality._ Transaction details within a blockchain are inherently public due to the nature of the technology. While designating a state variable as private can limit other contracts' access, the value of such a variable can still be inferred from transaction data. The AA design does not specifically address transaction confidentiality; however, it aligns with the overall progression of smart contract evolution. Mitigation strategies encompass cryptographic techniques like commitment schemes [29] and zero-knowledge proofs [30]. Additionally, hardware-based solutions [31] could be explored to enhance the security of sensitive data within an untrusted environment.
_Erroneous visibility._ This vulnerability stems from inaccurately defining the visibility of a function, where the default public visibility can be exploited by attackers to improperly access functions. The AA design does not directly address this issue but aligns with its solution: the vulnerability is mitigated by requiring explicit specification of function visibility, a measure to which AA's design also adheres.
## V Further Discussions
### _More about AA Design_
_AA represents an application-level update with its primary impact centered around the application layer._ Aligned with the idea outlined in EIP-4337, AA provides full backward compatibility, minimizing its effects on the consensus layer and underlying structures. This is evident in the security enhancements, which effectively tackle many challenges originating from the application layer rather than fundamental design flaws (refer to Tab.III).
_Rather than directly merging EOA and CA, AA incorporates key functionalities and delegates subsequent tasks to related contracts in tandem._ Notably, separating the functions required to be executed within the smart contract and allocating a portion of them to the user's account enhances overall efficiency. The corresponding batch processing isn't limited to user operations but also extends to fee payments.
_The user experience is markedly improved when employing an AA account as opposed to an EOA._ While a series of interconnected contracts works harmoniously to support the functionality of an AA account, users perceive it as effortlessly accessible as popular Web2 apps (e.g., shopping and banking). Complex technical jargon is relegated to the backend, sparing regular users unnecessary cognitive load. The user interface offers a friendly entry point.
### _AA Opportunities_
_Ease accessibility to Web3._ Account abstraction not only streamlines the interaction with DApps but also significantly enhances the user experience within Web3. By embracing AA, users gain the ability to tailor their accounts to operate exclusively under specific conditions, which significantly differs from previous solutions that rely on complicated invocation across functions or contracts [32]. Users are empowered with a greater degree of control over their customized programming functions. For instance, unlike the traditional multi-sig setup, where the conditions for executing transactions are relatively standardized, account abstraction allows for a more personalized set of conditions. This adaptability ensures that the operations are only executed when predetermined criteria are met, such as a predefined number of signatories.
_Promoting Layer-2 (L2)._ The fundamental L2 projects revolve around alleviating the substantial computational load on L1 chains. As a result, a significant focus of various L2 solutions lies in streamlining the intricate processing logic inherent in DApps. This optimization not only reduces the overall gas cost but also enriches the user experience through the introduction of extended functionalities. To realize this goal, numerous L2 projects have embraced the notion of incorporating account abstraction, marking a pivotal step toward achieving their mission. Notable examples [7] include zkSync, which seamlessly integrates the IAccount interface, and StarkNet, with the well-regarded wallet Argent. Similarly, optimistic rollups have also witnessed the integration of customized APIs.
## VI Conclusion
In this paper, we explore the notion of account abstraction (featured in EIP-4337), which is formally included in the sixth stage of Ethereum's roadmap. We study its operating mechanism, key features, and surrounding developments. We also examine its security by assessing a set of related criteria. Our results reveal the scope and extent of the security improvements introduced by the adoption of account abstraction. To our knowledge, this work provides the first formal AA study. |
2303.14466 | Two step I to II type transitions in layered Weyl semi-metals and their
impact on superconductivity | Novel "quasi two dimensional" typically layered (semi) metals offer a unique
opportunity to control the density and even the topology of the electronic
matter. Along with doping and gate voltage, a robust tuning is achieved by
application of the hydrostatic pressure. In Weyl semi - metals the tilt of the
dispersion relation cones, k , increases with pressure, so that one is able to
reach type II k > 1 starting from the more conventional type I Weyl semi -
metals k < 1. The microscopic theory of such a transition is constructed. It is
found that upon increasing pressure the I to II transition occurs in two
continuous steps. In the first step the cones of opposite chirality coalesce so
that the chiral symmetry is restored, while the second transition to the Fermi
surface extending throughout the Brillouin zone occurs at higher pressures.
Flattening of the band leads to profound changes in Coulomb screening.
Superconductivity observed recently in wide range of pressure and chemical
composition in Weyl semi-metals of both types. The phonon theory of pairing
including the Coulomb repulsion for a layered material is constructed and
applied to recent extensive experiments on HfTe5. | Baruch Rosenstein, B. Ya. Shapiro | 2023-03-25T13:24:20Z | http://arxiv.org/abs/2303.14466v1 | # Two step I to II type transitions in layered Weyl semi-metals and their impact on superconductivity
###### Abstract
Novel "quasi two dimensional" typically layered (semi) metals offer a unique opportunity to control the density and even the topology of the electronic matter. Along with doping and gate voltage, a robust tuning is achieved by application of the hydrostatic pressure. In Weyl semi - metals the tilt of the dispersion relation cones, \(\kappa\), increases with pressure, so that one is able to reach type II (\(\kappa>1\)starting from the more conventional type I Weyl semi - metals.\(\kappa<1\). The microscopic theory of such a transition is constructed. It is found that upon increasing pressure the I to II transition occurs in two continuous steps. In the first step the cones of opposite chirality coalesce so that the chiral symmetry is restored, while the second transition to the Fermi surface extending throughout the Brillouin zone occurs at higher pressures. Flattening of the band leads to profound changes in Coulomb screening. Superconductivity observed recently in wide range of pressure and chemical composition in Weyl semi-metals of both types. The phonon theory of pairing including the Coulomb repulsion for a layered material is constructed and applied to recent extensive experiments on \(HfTe_{5}\).
pacs: 74.20.Fg, 74.70.-b, 74.62.Fj
## I Introduction
The 3D and 2D topological quantum materials, such as topological insulators and Weyl semi-metals (WSM), have attracted much interest due to their rich physics and promising prospects for applications. The band structure in so-called type I WSM, like graphene [1] in 2D and \(ZrTe_{5}\) in 3D [2; 3; 4][5], is characterized by the appearance of linear dispersion relations, cones around several Dirac points, due to band inversion. This is qualitatively distinct from conventional metals, semi-metals or semiconductors, in which bands are typically parabolic. The dispersion cones are often tilted [6]. In the extreme case of type-II WSMs, the cones have such a strong tilt, \(\kappa\geq 1\), that they exhibit a nearly flat band at the Fermi surface, first predicted [7] in \(WTe_{2}\). Typically the Fermi surface "encircles" the Brillouin zone and therefore is topologically distinct from conventional "pockets". This in turn leads to exotic electronic properties different from those of conventional and type I materials. Examples include the collapse of the Landau level spectrum in magnetoresistance [8], and novel quantum oscillations [9]. Several _layered_ materials were predicted and observed to undergo [10] the I to II (abbreviated as \(I\to II\)) transition as doping or pressure is changed [11][12]. In fact, a well known layered organic compound, \(\alpha-(BEDT-TTF)_{2}I_{3}\), was long suspected [6] to be a quasi-2D material undergoing such a transition.
Recent experiments concentrated on two (closely related) families of layered materials. The first is a superlattice of transition metal dichalcogenide layers [13] with formula \(MX_{2}\). The metals include \(M=Mo,W,V,Ta,Pd\), and the chalcogenides \(X=S,Te,Se\). The majority of representatives of this class are 2D WSM. The well separated layers are integrated into van der Waals heterostructures by vertical stacking [14][15]. Intercalation and external pressure are direct and effective methods for achieving exotic properties distinct from those of the pristine materials [16][17]. Yet another class, the stacked transition metal pentatellurides including \(HfTe_{5}\) and \(ZrTe_{5}\), was recently comprehensively investigated [4][19]. For example, the transport and superconducting properties of \(HfTe_{5}\) were comprehensively studied [19] at pressures as high as \(30\ GPa\).
Pressure in particular [20] controls both the strength of the interlayer coupling and the cone slope, allowing one to observe the topological transition. The effect of the topological phase transitions between the type I and type II Weyl phases on physical properties was considered theoretically. In ref.[21] the heat capacity, compressibility and magnetic susceptibility were studied. Superconductivity was observed recently over a wide range of pressures and chemical compositions in Weyl semi-metals of both types. In a previous paper [22] and a related work [23], a continuum theory of conventional superconductivity through the \(I\to II\) topological transition was developed. The magnetic response in the superconducting state was calculated in [24][25]. The continuum approach used there was too "mesoscopic" to describe the transition region, since the global topology of the Brillouin zone is beyond its scope.
In the present paper a theory of the topological transitions of the electron liquid of layered WSM under hydrostatic pressure is constructed using a (microscopic) tight binding model on the honeycomb lattice, similar to that used to model [26] the dichalcogenide \(2H\)-\(WTe_{2}\). It possesses an important chiral symmetry between the two Bravais (hexagonal) sublattices. The Weyl cones of opposite chirality appear at the crystallographic \(K\) and \(K^{\prime}\) points for \(\kappa=0\). The (discrete) chiral symmetry persists at all values of \(\kappa\). This relatively simple model describes well both classes of layered materials that are Weyl semimetals.
Unexpectedly, investigation of the pressure - "topology" phase diagram of this sufficiently universal microscopic model reveals that (at nonzero chemical potential) the \(I\to II\) transition always occurs in two steps. In the first step, upon increasing pressure and hence the tilt \(\kappa\), the circular pockets around the cones of opposite chirality coalesce into a single (type I) elliptic Fermi surface and the chiral symmetry of the ground state is restored. The second transition, to the type II Fermi surface (extending throughout the Brillouin zone), occurs at yet higher pressures.
As in previous investigations [22][23], superconductivity is used as an efficient signature of the topological transition. The phonon pairing theory is improved compared to previous work by accounting for the effects of the screened Coulomb repulsion. We calculate the superconducting critical temperature taking into consideration the modification of the Coulomb electron-electron interaction. The Gorkov equations for the two-sublattice system are solved without resorting to the mesoscopic approach. Moreover, it turns out that the screening of the Coulomb repulsion plays a much more profound role in quasi-2D materials and does not allow the pseudo-potential simplification developed by McMillan [27]. Taking this into account involves a nontrivial dependence on quasi-momentum in the gap equation (along with the frequency dependence). The results compare well with recent experiments on \(HfTe_{5}\) [19].
The rest of the paper is organized as follows. In Section II the universal microscopic model of the layered WSM is described. The dependences of the tilt parameter \(\kappa\), the electron density and the interlayer distance on pressure are phenomenologically related to parameters of the model. In Section III the Gorkov equations for the optical phonon mediated intra-layer pairing for a multiband system, including the Coulomb repulsion, are derived and solved numerically. In Section IV the phonon theory of pairing including the Coulomb repulsion for a layered material is applied to recent extensive experiments on \(HfTe_{5}\) under hydrostatic pressure. The last Section contains conclusions and discussion.
## II A "Universal" lattice model of layered (type I and type II) Weyl semi-metals
### Intra-layer hopping on the honeycomb lattice
A great variety of tight binding models has been used to describe Weyl (Dirac) semimetals in 2D. Historically the first was graphene (type I, \(\kappa=0\)), in which electrons hop between neighboring sites of the honeycomb lattice. Two Dirac cones appear at the \(K\) and \(K^{\prime}\) crystallographic points in the Brillouin zone (BZ). Upon modification (gate voltage, pressure, intercalation) the hexagonal symmetry is lost; however, a discrete chiral symmetry between the two sublattices, denoted by \(I=A,B\), ensures the 2D WSM. The tilted type I and even type II (\(\kappa>1\)) WSM can be described by the same Hamiltonian with the tilt term added. We restrict the discussion to systems with the minimal two cones of opposite chirality and negligible spin-orbit coupling. This model describes the compounds listed in the Introduction and can be generalized to more complicated WSM. The 2D model is extended to a layered system with interlayer distance \(d\). The 2D WSM layers are separated by dielectric streaks, with interlayer hopping neglected, so that they are coupled electromagnetically only [28].
The lateral atomic coordinates on the honeycomb lattice are \(\mathbf{r_{n}}=n_{1}\mathbf{a_{1}}+n_{2}\mathbf{a_{2}}\), where lattice vectors are:
\[\mathbf{a_{1}}=a\left(\frac{1}{2},\frac{\sqrt{3}}{2}\right);\;\mathbf{a_{2}}= a\left(\frac{1}{2},-\frac{\sqrt{3}}{2}\right). \tag{1}\]
The length of the lattice vectors \(a\) will be taken as the length unit and we also set \(\hbar=1\). The hopping Hamiltonian including the tilt term is:
\[K=\sum\nolimits_{\mathbf{n}l}\left\{t\left(\sum\limits_{i=1,2,3}\psi_{ \mathbf{n}l}^{sA\dagger}\psi_{\mathbf{r_{n}}+\delta_{i},l}^{sB}+\text{h.c.} \right)-\kappa\psi_{\mathbf{n}l}^{sI\dagger}\psi_{\mathbf{r_{n}}+\mathbf{a_{1 }},l}^{sI}-\mu n_{\mathbf{n},l}\right\}. \tag{2}\]
Here an integer \(l\) labels the layers. Operator \(\psi_{\mathbf{n}l}^{sA\dagger}\) is the creation operator with spin \(s=\uparrow,\downarrow\), while the density operator is defined as \(n_{\mathbf{n}l}=\psi_{\mathbf{n}l}^{sI\dagger}\psi_{\mathbf{n}l}^{sI}\). The chemical potential is \(\mu\), while \(t\) is the hopping energy. Each site has three neighbors separated by vectors \(\delta_{1}=\frac{1}{3}\left(\mathbf{a_{1}}-\mathbf{a_{2}}\right)\), \(\delta_{2}=-\frac{1}{3}\left(2\mathbf{a_{1}}+\mathbf{a_{2}}\right)\) and \(\delta_{3}=\frac{1}{3}\left(\mathbf{a_{1}}+2\mathbf{a_{2}}\right)\). The dimensionless parameter \(\kappa\) determines the tilt of the Dirac cones along the \(\mathbf{a_{1}}\) direction [6]. In the 2D Fourier space, \(\psi_{n_{1}n_{2}l}^{sA\dagger}=N_{s}^{-2}\sum_{k_{1}k_{2}}\psi_{k_{1}k_{2}l}^{sA\dagger}\exp\left[2\pi i\left(k_{1}n_{1}+k_{2}n_{2}\right)/N_{s}\right]\), one obtains for the Hamiltonian (on a finite discrete reciprocal lattice \(N_{s}\times N_{s}\)):
\[K=\frac{1}{N_{s}^{2}}\sum\nolimits_{k_{1}k_{2}l}\psi_{k_{1}k_{2}l}^{s\dagger} M_{k_{1}k_{2}}\psi_{k_{1}k_{2}l}^{s}. \tag{3}\]
Here \(\mathbf{k}=\frac{k_{1}}{N_{s}}\mathbf{b_{1}}+\frac{k_{2}}{N_{s}}\mathbf{b_{2}}\), with \(\mathbf{b_{1}},\mathbf{b_{2}}\) the reciprocal lattice vectors, and the matrix
\[M_{\mathbf{k}}=d_{\mathbf{k}}^{x}\sigma_{x}+d_{\mathbf{k}}^{y}\sigma_{y}+d_{ \mathbf{k}}^{0}I \tag{4}\]
where
\[\begin{split} d_{\mathbf{k}}^{x}&=\cos\left[\frac{2\pi}{3N_{s}}\left(k_{1}-k_{2}\right)\right]+2\cos\left[\frac{\pi}{N_{s}}\left(k_{1}+k_{2}\right)\right]\cos\left[\frac{\pi}{3N_{s}}\left(k_{1}-k_{2}\right)\right];\\ d_{\mathbf{k}}^{y}&=-\sin\left[\frac{2\pi}{3N_{s}}\left(k_{1}-k_{2}\right)\right]+2\cos\left[\frac{\pi}{N_{s}}\left(k_{1}+k_{2}\right)\right]\sin\left[\frac{\pi}{3N_{s}}\left(k_{1}-k_{2}\right)\right];\\ d_{\mathbf{k}}^{0}&=-\kappa\cos\left[\frac{2\pi}{N_{s}}k_{1}\right]-\mu.\end{split}\tag{5}\]
From now on the hopping energy \(t\) will be our energy unit.
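As a quick numerical illustration of Eqs.(3)-(5) (our own sketch; the grid resolution is arbitrary), the code below evaluates the two branches \(E_{\pm}(\mathbf{k})=d_{\mathbf{k}}^{0}\pm\sqrt{(d_{\mathbf{k}}^{x})^{2}+(d_{\mathbf{k}}^{y})^{2}}\) over the rhombic BZ and locates the band-touching points; for a small tilt it recovers the cones at \((k_{1},k_{2})/N_{s}=(1/3,1/3)\) and \((2/3,2/3)\), i.e., at the \(K\) and \(K^{\prime}\) points.

```python
import numpy as np

def bands(a, b, kappa, mu):
    """Branches E±(k) of Eqs. (4)-(5); a = 2*pi*k1/Ns, b = 2*pi*k2/Ns.
    Energies are in units of the hopping energy t."""
    dx = np.cos((a - b)/3) + 2*np.cos((a + b)/2)*np.cos((a - b)/6)
    dy = -np.sin((a - b)/3) + 2*np.cos((a + b)/2)*np.sin((a - b)/6)
    d0 = -kappa*np.cos(a) - mu
    r = np.hypot(dx, dy)
    return d0 - r, d0 + r

# Scan the rhombic BZ; the gap 2*sqrt(dx^2 + dy^2) vanishes at the cones,
# and the tilt term d0 only shifts both branches together.
a, b = np.meshgrid(np.linspace(0, 2*np.pi, 601), np.linspace(0, 2*np.pi, 601))
lo, hi = bands(a, b, kappa=0.15, mu=0.0)
gap = hi - lo
i, j = np.unravel_index(np.argmin(gap), gap.shape)
print("minimal gap %.2e at (k1,k2)/Ns = (%.3f, %.3f)"
      % (gap[i, j], a[i, j]/(2*np.pi), b[i, j]/(2*np.pi)))
```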
The free electron part of the Matsubara action for the Grassmann fields \(\psi_{\mathbf{k}ln}^{*s}\) therefore is:
\[S^{e}=\frac{1}{T}\sum\nolimits_{\mathbf{k}ln}\psi_{\mathbf{k}ln}^{*sA}\left\{ \left(-i\omega_{n}+d_{\mathbf{k}}^{0}\right)\delta^{AB}+\sigma_{i}^{AB}d_{ \mathbf{k}}^{i}\right\}\psi_{\mathbf{k}ln}^{*B}. \tag{6}\]
Here \(\omega_{n}=\pi T\left(2n+1\right)\) is the Matsubara frequency. The Green's function of free electrons, \(g_{\mathbf{k}n}^{ss^{\prime}}=\delta^{ss^{\prime}}g_{\mathbf{k}n}\), has the following (sublattice) matrix form:
\[g_{\mathbf{k}n}=\left[\left(-i\omega_{n}+d_{\mathbf{k}}^{0}\right)I+\sigma_{i} d_{\mathbf{k}}^{i}\right]^{-1}=\frac{\left(-i\omega_{n}+d_{\mathbf{k}}^{0} \right)I-\sigma_{i}d_{\mathbf{k}}^{i}}{\left(i\omega_{n}-d_{\mathbf{k}}^{0} \right)^{2}-\left(d_{\mathbf{k}}^{x}{}^{2}+d_{\mathbf{k}}^{y2}\right)}. \tag{7}\]
Now we turn to the interactions part of the Hamiltonian.
### Coulomb repulsion
The electron-electron repulsion in the layered WSM on the lattice can be presented in the form,
\[V=\frac{e^{2}}{2}\sum\nolimits_{\mathbf{n}\mathbf{n}^{\prime}ll^{\prime}}n_{ \mathbf{n}l}v_{\mathbf{n}-\mathbf{n}^{\prime},l-l^{\prime}}^{C}n_{\mathbf{n}^ {\prime}l^{\prime}}, \tag{8}\]
where \(v_{\mathbf{n}-\mathbf{n}^{\prime},l-l^{\prime}}^{C}\) is the "bare" Coulomb interaction between electrons. Making the 2D Fourier transform, one obtains,
\[V=\frac{e^{2}}{2N_{s}^{2}}\sum\nolimits_{\mathbf{q}l^{\prime}}n_{\mathbf{q}l} v_{\mathbf{q},l-l^{\prime}}^{C}n_{-\mathbf{q}l^{\prime}}, \tag{9}\]
where
\[v_{\mathbf{q},l-l^{\prime}}^{C}=v_{\mathbf{q}}^{2D}e^{-dq\left|l-l^{\prime}\right|}, \tag{10}\]
with the in-plane Coulomb repulsion being \(v_{\mathbf{q}}^{2D}=\frac{2\pi e^{2}}{\epsilon q}\). Here \(\epsilon\) is the inter-layer dielectric constant [34], while \(d\) is the interlayer distance. On the hexagonal lattice the exponential formula approximates the Coulomb repulsion well only away from the BZ boundaries; near the boundaries the (periodic) potential is calculated numerically in SI3. The long range screening effect of the Coulomb interaction is effectively taken into account using the RPA approximation. The effect of pressure on the various parameters is discussed in the next section.
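For orientation, a small numerical sketch of the bare kernel of Eq.(10) (Gaussian units with \(e=1\); all numerical values below are illustrative, not fitted to any material):

```python
import numpy as np

def v_layered(q, dl, d, eps):
    """Bare layered Coulomb kernel, Eq. (10): v_q^2D * exp(-d q |l-l'|),
    with v_q^2D = 2*pi*e^2/(eps*q) and e = 1 (Gaussian units)."""
    return (2*np.pi/(eps*q)) * np.exp(-d*q*np.abs(dl))

q, d, eps = 0.3, 2.0, 10.0   # q in 1/a, d in units of a: illustrative only
print([round(v_layered(q, dl, d, eps), 4) for dl in range(4)])

# Summing the geometric series over all layers l' at fixed q gives the
# standard layered-screening factor coth(d*q/2):
total = v_layered(q, 0, d, eps) * (1 + np.exp(-d*q))/(1 - np.exp(-d*q))
assert np.isclose(total, v_layered(q, 0, d, eps)/np.tanh(d*q/2))
```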
## III Two step I to II type topological transition
### Pressure induced parameter modifications
While pressure turned out to be a more experimentally accessible control parameter than the gate voltage, in the early works mentioned in the Introduction the phase diagram was typically studied as a function of the chemical potential. Moreover, in most recent experiments the hydrostatic pressure serves as a control parameter to induce topological transformations of the electronic matter in WSM. The parameter dependence of a microscopic model on pressure is in principle derivable by DFT and a corresponding adaptation of elasticity theory [20]. Although there exists a qualitative theoretical description of the pressure dependence of the Coulomb repulsion [30], the electron-phonon coupling and the topology of the Fermi surface of these novel materials [31], it is difficult to determine quantitatively the tilt \(\kappa\), the inter-layer spacing \(d\), the electron density and other parameters. Therefore we use an experimentally parametrized (see, for example, a comprehensive study [32]) dependence of these parameters on the pressure. In the present paper, to describe a specific material, \(HfTe_{5}\), as an example, we utilize the experimental results of ref.[19]. Note that in many materials the robust electron gas exists only at certain pressures.
For not very large pressures (\(P<15\ GPa\)) several parameter dependencies can be taken as linear. In particular, the layer spacing and the tilt parameter are modified under pressure \(P\) as:
\[d\left(P\right) =\frac{d_{a}}{1+\sigma P/d_{a}}\approx d_{a}-\sigma\ P; \tag{11}\] \[\kappa\left(P\right) =\kappa_{a}+\gamma P.\]
The tilt parameter was estimated in ref.[20] for a wide range of \(\kappa\). For layered \(HfTe_{5}\) the stress parameter is \(\sigma=0.225\,\mathrm{\AA}/GPa\). The "ambient" value is \(d_{a}=7.7\,\mathrm{\AA}\). As noted above, the electron gas exists [19] in this case only for \(P>3\ GPa\). For the tilt, \(\kappa_{a}=-0.3\) and \(\gamma=0.15/GPa\).
Measurements demonstrate that 3D electron density in the type I phase of layered WSM is exponential in pressure (for not very high pressures):
\[n^{3D}\left(P\right)=n_{a}e^{\beta P}.\]
It saturates upon approach to type II WSM. The ambient value is \(n_{a}=1.4\times 10^{19}cm^{-3}\), while \(\beta=0.77/GPa\). The two dimensional electron density in the layers is related to the measured density by \(n\left(P\right)=n^{3D}\left(P\right)d\left(P\right)\). The influence on the interactions will be discussed in the next Section. Having described the model let us turn to the spectrum and topology of the Fermi surface for different pressures.
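The parametrization above is compact enough to collect in a few lines of Python; the constants are the \(HfTe_{5}\) values quoted in this subsection, while the \(\mathrm{\AA}\to cm\) conversion in the 2D density is our addition.

```python
from math import exp

def d_interlayer(P, d_a=7.7, sigma=0.225):
    """Interlayer spacing in angstrom, Eq. (11); P in GPa."""
    return d_a / (1 + sigma*P/d_a)

def kappa(P, kappa_a=-0.3, gamma=0.15):
    """Tilt parameter, Eq. (11)."""
    return kappa_a + gamma*P

def n3d(P, n_a=1.4e19, beta=0.77):
    """3D electron density in cm^-3 (type I phase, moderate pressures)."""
    return n_a*exp(beta*P)

def n2d(P):
    """2D density per layer: n(P) = n3D(P) d(P), with d converted to cm."""
    return n3d(P)*d_interlayer(P)*1e-8

for P in (3.0, 6.5, 8.0, 9.9):   # onset, chiral, type I, I->II (GPa)
    print(f"P = {P:4.1f} GPa: d = {d_interlayer(P):.2f} A, "
          f"kappa = {kappa(P):.2f}, n2D = {n2d(P):.2e} cm^-2")
```

The printed tilts, \(\kappa(3\,GPa)=0.15\) and \(\kappa(8\,GPa)=0.9\), match the values quoted for the panels of Fig.1 below.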
### Topological phases of layered WSM
Upon increasing pressure the I to II transition occurs in two continuous steps. In the first step the cones of opposite chirality coalesce so that the chiral symmetry is restored, while the second transition to the Fermi surface extending
Figure 1: Evolution of the Fermi surface topology as the pressure on the Weyl semimetal increases. Parameters such as the tilt \(\kappa\left(P\right)\) and the electron density are given in Eqs.(11). The upper row depicts the Fermi surfaces of all three topological phases, while the lower row shows the corresponding dispersion relations of both branches (brown and green surfaces) with respect to the Fermi level (the blue plane). At relatively low pressure the FS consists of two small Dirac pockets. At intermediate pressures the two pockets merge into a single ellipsoidal large pocket (still type I). At very high pressures the electron liquid undergoes the type I to type II topological transition.
throughout the Brillouin zone occurs at higher pressures. Fig.1 describes the Fermi surface (blue areas depict the Fermi sea in the upper contour plots) and dispersion relations (lower 3D plots) at three representative pressure values, one from each of the three phases. There are two branches (brown higher than green) crossing the Fermi level (blue plane).
The graphene-like dispersion relation for the smallest value of pressure at which the electron pockets exist, \(P=3\ GPa\), \(\kappa=0.15\) (left panel in Fig.1), represents the type I WSM below the chiral transition. A rhombic BZ is chosen (with coordinates \(k_{1}\) and \(k_{2}\) defined in Eq.(3); the yellow area covers the BZ). The locations of the cones (see the lower 3D plot) are close to the crystallographic \(K^{\pm}\) points. There are two slightly tilted Dirac cones of opposite chirality. Increasing the pressure towards the chiral transition at \(P_{\chi}=6.5\ GPa\) (see more plots in SI2), the two pockets of the Fermi surface become elongated and larger and eventually merge into a single pocket, shown in the central panel for \(P=8\ GPa\). The tilt parameter is already significant, \(\kappa=0.9\). At yet larger pressures (right panel) the material becomes a type II WSM with large \(\kappa>1.2\). In this case the FS envelops the BZ, which topologically is a torus; see the segment on the boundary \(k_{2}=0=2\pi/a\). Obviously the upper band becomes flatter as the tilt (pressure) increases.
Fig.2 gives the 2D electron density and the density of states as functions of the chemical potential for the Hamiltonian of the previous Section.
Both the electron density and the density of states were calculated numerically with the Fermi distribution function at temperature \(T=1K\) (the density at zero temperature corresponds to the area inside the FS) at various values of the chemical potential. The density is then matched with that determined phenomenologically in the previous subsection.
Figure 2: Electron density and density of states (DOS) of the WSM as functions of pressure \(P\). The 2D electron density (the brown curve) monotonically increases, while the DOS (the green curve) has cusps at both topological transitions. At each cusp the derivative of the DOS with respect to pressure changes sign.
### The first topological transition: spontaneous chiral symmetry breaking
At small pressures, \(3\ GPa<P<4\ GPa\), the Fermi surface consists of two well separated Dirac cones of opposite chirality. The tilt does not affect the basic chiral symmetry of the honeycomb lattice: the two sublattices are related by a reflection. The sixfold symmetry of undistorted graphene is of course typically broken down to the reflection symmetry only. When the FS pockets of the tilted cones merge at the transition \(P=P_{\chi}=6.5\ GPa\) (see the brown line in Fig.2), the chiral symmetry of the ground state is restored. The overall chirality of the FS above \(P_{\chi}\) (a topological number) is therefore zero. Although we are not aware of a mathematical proof, this transition always precedes the \(I\to II\) topological transition, see the cyan line in Fig.2. The chiral transition is also topological, but in a more local sense: a fracture of the Fermi surface, like in graphene oxide[29] or the Lifshitz transition in high \(T_{c}\) cuprates like \(La_{2-x}Sr_{x}CuO_{4}\). The \(I\to II\) transition is more "exotic"[33]: it involves the global topology of the Fermi surface (it is a torus). The DOS at the transition (the green curve in Fig.2) has a finite maximum at which the derivative changes sign.
### The second topological transition: \(I\to II\)
The electron density in the type I phase above the chiral transition grows quite fast, see the red line in Fig.2, so that at large pressures a significant part of the BZ is occupied for one of the branches of the spectrum. Eventually, at \(P_{I\to II}=9.9\)\(GPa\), the growing single pocket envelops the BZ torus and thus the FS splits again into two curves, see the right panel in Fig.1. The electron density saturates, while the DOS has another finite peak. The two transitions lead to singularities in various physical quantities. In the next Section the screening of the Coulomb interactions is discussed.
## III Screening in layered Weyl semimetal
The screening in the layered system can be conveniently partitioned into the screening within each layer, described by the polarization function \(\Pi_{\mathbf{q}n}\), and the electrostatic coupling to carriers in other layers. We start with the former.
### Polarization function of the electron gas in layered WSM
In the simple Fermi theory of the electron gas in the normal state, with the Coulomb interaction between the electrons treated in the RPA approximation, the Matsubara polarization is calculated as _minus_ the simple "fish" diagram [28]:
\[\Pi_{\mathbf{q}n}=2T\sum\nolimits_{\mathbf{p}m}\mathrm{Tr}\left[g_{\mathbf{p}m }g_{\mathbf{p+q},m+n}^{tr}\right]. \tag{12}\]
Using the GF (see Eq.(7)), one obtains:
\[\Pi_{\mathbf{q}n}=\frac{4T}{N_{s}^{2}}\sum\nolimits_{\mathbf{p}m}\frac{\left( i\omega_{m}+A\right)\left(i\omega_{m}+B\right)+C}{\left[\left(i\omega_{m}+A \right)^{2}-\alpha^{2}\right]\left[\left(i\omega_{m}+B\right)^{2}-\beta^{2} \right]}, \tag{13}\]
where
\[A = -d_{\mathbf{p}}^{0};B=i\omega_{n}-d_{\mathbf{p+q}}^{0};C=d_{ \mathbf{p}}^{x}d_{\mathbf{p+q}}^{x}-d_{\mathbf{p}}^{y}d_{\mathbf{p+q}}^{y} \tag{14}\] \[\alpha^{2} = d_{\mathbf{p}}^{x2}+d_{\mathbf{p}}^{y2};\beta^{2}=d_{\mathbf{p+ q}}^{x2}+d_{\mathbf{p+q}}^{y2},\]
Performing summation over \(m\), one obtains:
\[\Pi_{\mathbf{q}n}=-\frac{1}{N_{s}^{2}}\sum\nolimits_{\mathbf{p}}\left\{\begin{array}{l}\frac{\alpha^{2}-\alpha(A-B)+C}{\alpha\left[(A-B-\alpha)^{2}-\beta^{2}\right]}\tanh\frac{\alpha-A}{2T}+\frac{\alpha^{2}+\alpha(A-B)+C}{\alpha\left[(A-B+\alpha)^{2}-\beta^{2}\right]}\tanh\frac{\alpha+A}{2T}\\ +\frac{\beta^{2}+\beta(A-B)+C}{\beta\left[(A-B+\beta)^{2}-\alpha^{2}\right]}\tanh\frac{\beta-B}{2T}+\frac{\beta^{2}-\beta(A-B)+C}{\beta\left[(A-B-\beta)^{2}-\alpha^{2}\right]}\tanh\frac{\beta+B}{2T}\end{array}\right\}. \tag{15}\]
The polarization function, however, differs strongly from the usual Lindhard expression for a parabolic band.
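Because expressions like Eq.(15) are easy to mistype, a self-contained numerical check is worthwhile. The sketch below (ours, not from the paper) performs the brute-force fermionic Matsubara sum of the summand of Eq.(13) for a single momentum, with arbitrary hypothetical values of \(A\), \(B\), \(C\), \(\alpha\), \(\beta\) (taken real, i.e. at external frequency \(n=0\)), and compares it with the closed form of Eq.(15); the two agree up to the slowly decaying \(\sim 1/M\) tail of the truncated frequency sum.

```python
import numpy as np

# Hypothetical, non-degenerate sample values for the quantities of Eq. (14).
A, B, C = 0.3, -0.7, 0.1
al, be = 0.9, 1.4          # alpha and beta of Eq. (14)
T = 0.05

def direct_sum(M=200000):
    """Brute-force fermionic Matsubara sum of Eq. (13), one momentum."""
    m = np.arange(-M, M)
    w = 1j * (2 * m + 1) * np.pi * T
    num = (w + A) * (w + B) + C
    den = ((w + A)**2 - al**2) * ((w + B)**2 - be**2)
    return (4 * T * np.sum(num / den)).real

def closed_form():
    """Minus the four tanh terms of Eq. (15), one momentum."""
    t = lambda x: np.tanh(x / (2 * T))
    s = ((al**2 - al*(A-B) + C) / (al*((A-B-al)**2 - be**2)) * t(al - A)
       + (al**2 + al*(A-B) + C) / (al*((A-B+al)**2 - be**2)) * t(al + A)
       + (be**2 + be*(A-B) + C) / (be*((A-B+be)**2 - al**2)) * t(be - B)
       + (be**2 - be*(A-B) + C) / (be*((A-B-be)**2 - al**2)) * t(be + B))
    return -s

print(direct_sum(), closed_form())   # agree to ~1e-4
```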
### Screening due to electron gas in layered system
The Coulomb repulsion between electrons in different layers \(l\) and \(l^{\prime}\), within the RPA approximation, is determined by the following integral equation:
\[V^{RPA}_{{\bf q},l-l^{\prime},n}=v^{C}_{{\bf q},l-l^{\prime}}+\Pi_{{\bf q}n}\sum _{l^{\prime\prime}}v^{C}_{{\bf q},l-l^{\prime\prime}}V^{RPA}_{{\bf q},l^{\prime \prime}-l^{\prime},n}. \tag{16}\]
The polarization function \(\Pi_{{\bf q}n}\) in 2D was calculated in the previous subsection. This set of equations is decoupled by the Fourier transform in the \(z\) direction:
\[V^{RPA}_{{\bf q},q_{z},n}=\frac{v^{C}_{{\bf q},q_{z}}}{1-\Pi_{{\bf q}n}v^{C}_{{ \bf q},q_{z}}}\, \tag{17}\]
where
\[v^{C}_{{\bf q},q_{z}}=\sum_{l}v^{2D}_{{\bf q}}e^{iq_{z}dl-qd\left|l\right|}=v^{2D}_{{\bf q}}\frac{\sinh\left(qd\right)}{\cosh\left(qd\right)-\cos\left(dq_{z}\right)}. \tag{18}\]
The screened interaction in a single layer is therefore given by the inverse Fourier transform [28]:
\[V^{RPA}_{{\bf q},l-l^{\prime},n}=\frac{d}{2\pi}\int^{\pi/d}_{q_{z}=-\pi/d}dq_{z}\ e^{iq_{z}d\left(l-l^{\prime}\right)}\frac{v^{C}_{{\bf q}q_{z}}}{1-\Pi_{{\bf q}n}v^{C}_{{\bf q}q_{z}}}. \tag{19}\]
Considering the screened Coulomb potential within the same layer, \(l=l^{\prime}\), the integration gives
\[V^{RPA}_{{\bf q}n}=\frac{v^{2D}_{{\bf q}}\sinh\left[qd\right]}{\sqrt{b^{2}_{{ \bf q}n}-1}}, \tag{20}\]
where \(b_{{\bf q}n}=\cosh\left(dq\right)-v^{2D}_{{\bf q}}\Pi_{{\bf q}n}\sinh\left(dq\right)\). This formula is reliable only away from the plasmons, \(b_{{\bf q}n}>1\). It turns out that, to properly describe superconductivity, one can simplify the calculation at low temperature by taking the static limit \(\Pi_{{\bf q}n}\simeq\Pi_{{\bf q}0}\). Consequently the potential becomes static: \(V^{RPA}_{{\bf q}}\equiv V^{RPA}_{{\bf q},n=0}\).
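For orientation, here is a minimal numerical sketch of the static single-layer potential of Eq.(20). The bare 2D interaction \(v^{2D}_{\bf q}=2\pi e^{2}/(\varepsilon q)\) and the constant static polarization \(\Pi_{0}\) used below are illustrative assumptions (in the paper \(v^{2D}_{\bf q}\) and \(\Pi_{{\bf q}0}\) follow from the model of the previous sections), and the units are schematic.

```python
import numpy as np

d = 7.7        # layer spacing (Angstrom, ambient HfTe5 value from the text)
eps = 20.0     # dielectric constant used in Sec. IV
e2 = 1.0       # squared coupling in the chosen units (assumption)

def v2D(q):                      # assumed bare 2D Coulomb interaction
    return 2 * np.pi * e2 / (eps * q)

def V_rpa(q, Pi0):
    """Static screened single-layer potential, Eq. (20)."""
    b = np.cosh(q * d) - v2D(q) * Pi0 * np.sinh(q * d)
    assert np.all(b > 1), "Eq. (20) is reliable only away from plasmons, b > 1"
    return v2D(q) * np.sinh(q * d) / np.sqrt(b**2 - 1)

q = np.linspace(0.05, 1.0, 5)    # sample in-plane momenta (1/Angstrom)
print(v2D(q))                     # bare values
print(V_rpa(q, Pi0=-0.05))        # screened values are smaller
```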
## IV Superconductivity
Superconductivity in the WSM is caused by conventional phonon pairing. The leading mode is an optical phonon mode, assumed to be dispersionless with energy \(\Omega\). The effective electron-electron interaction, due to the electron-phonon attraction opposed by the Coulomb repulsion (pseudo-potential), creates pairing below \(T_{c}\). Further, we assume the singlet \(s\)-pairing channel and neglect interlayer electron pairing. It is important to note that, unlike in conventional 3D metal superconductors, where a simplified pseudo-potential approach due to McMillan and others [27] suffices, in 2D and layered WSM one has to resort to a more microscopic approach.
### Effective attraction due to phonon exchange opposed by the effective Coulomb repulsion
The free and interaction parts of the effective electron action (obtained by "integrating out phonons" and adding the RPA Coulomb interaction) [35] in the quasi-momentum - Matsubara frequency representation, \(S=S^{e}+S^{int}\), are
\[S^{e} = \frac{1}{T}\sum_{{\bf k},l,n}\psi^{*sA}_{{\bf k}ln}\left\{\left(-i\omega_{n}+d^{0}_{{\bf k}}\right)\delta^{AB}+\sigma^{AB}_{i}d^{i}_{{\bf k}}\right\}\psi^{sB}_{{\bf k}ln}; \tag{21}\] \[S^{int} = \frac{1}{2T}\sum_{{\bf q},l,l^{\prime},n}n_{{\bf q}ln}\left(\delta_{ll^{\prime}}V^{ph}_{{\bf q}n}+V^{RPA}_{{\bf q},l-l^{\prime},n}\right)n_{-{\bf q},l^{\prime},-n}.\]
Here \(n_{\mathbf{q}ln}=\sum_{\mathbf{p}}\psi_{\mathbf{p}ln}^{*I}\psi_{\mathbf{p+q},l,n}^{I}\) is the Fourier transform of the electron density. The effective electron-electron coupling due to phonons is:
\[V_{\mathbf{q}m}^{ph}=-\frac{g^{2}\Omega}{\omega_{m}^{b2}+\Omega^{2}}, \tag{22}\]
where the bosonic frequencies are \(\omega_{m}^{b}=2\pi mT\).
The pressure dependence of the phonon frequency is approximated as:
\[\Omega\left(P\right)=\Omega_{a}\left(1+\zeta P\right). \tag{23}\]
For \(HfTe_{5}\) we take \(\Omega_{a}=15meV\) and \(\zeta=0.005/GPa\).
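Since both ingredients are elementary, they are easy to tabulate; the following snippet (illustrative; \(\Omega_{a}\) and \(\zeta\) are the numbers quoted above, and \(g\) is the coupling used in the numerics of the next subsection) evaluates the phonon-mediated attraction of Eq.(22) at the first few bosonic frequencies.

```python
import numpy as np

Omega_a, zeta = 15.0, 0.005    # meV and 1/GPa, HfTe5 values from the text
g = 140.0                      # electron-phonon coupling (meV), from Sec. IV

def Omega(P):                  # pressure-dependent phonon energy, Eq. (23)
    return Omega_a * (1 + zeta * P)

def V_ph(m, T, P):             # phonon-mediated attraction, Eq. (22)
    w_b = 2 * np.pi * m * T    # bosonic Matsubara frequency (T in meV)
    return -g**2 * Omega(P) / (w_b**2 + Omega(P)**2)

# Strongest at m = 0; falls off once |w_b| exceeds Omega, which is what
# cuts off the pairing attraction at high Matsubara frequencies.
print([round(V_ph(m, T=1.0, P=8.0), 1) for m in range(4)])
```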
### Nambu Green's functions and Gorkov equations
Normal and anomalous (Matsubara) intra-layer Nambu Green's functions are defined by the expectation values of the fields, \(\left\langle\psi_{\mathbf{k}nl}^{Is}\psi_{\mathbf{k}nl}^{*Js^{\prime}}\right\rangle=\delta^{ss^{\prime}}G_{\mathbf{k}n}^{IJ}\) and \(\left\langle\psi_{\mathbf{k}nl}^{Is}\psi_{-\mathbf{k},-n,l}^{Js^{\prime}}\right\rangle=\varepsilon^{ss^{\prime}}F_{\mathbf{k}n}^{IJ}\), while the gap function is
\[\Delta_{\mathbf{q}n}^{IJ}=\sum_{\mathbf{p}m}V_{\mathbf{q}-\mathbf{p},n-m}F_{ \mathbf{p}m}^{IJ}, \tag{24}\]
where \(V_{\mathbf{q}n}=V_{\mathbf{q}n}^{ph}+V_{\mathbf{q}n}^{RPA}\) is a sublattice scalar. The gap equations in the sublattice matrix form are derived from Gorkov equations[35]:
\[\Delta_{\mathbf{q}n}=-\sum\nolimits_{\mathbf{p}m}V_{\mathbf{q}-\mathbf{p},n-m}g_{\mathbf{p}m}\left\{I+\Delta_{\mathbf{p}m}g^{t}_{-\mathbf{p},-m}\Delta^{*}_{-\mathbf{p},-m}g_{\mathbf{p}m}\right\}^{-1}\Delta_{\mathbf{p}m}g^{t}_{-\mathbf{p},-m}. \tag{25}\]

Figure 3: The critical temperature \(T_{c}\) as a function of the hydrostatic pressure \(P\), with (red points) and without (blue points) the Coulomb electron-electron interaction. The dependence has spikes near the points of topological transformations of the electronic system. The position of the spikes coincides with that of the density of states (the green curve).
This equation was solved numerically by an iteration method. The momenta are discretized as \(q_{1,2}=2\pi j_{1,2}/N_{s}\) (where \(j_{1,2}=-N_{s}/2,...,N_{s}/2-1\)) with \(N_{s}=256\), while the frequency cutoff was \(N_{T}=128\). The interatomic in-plane distance is \(a=3.5A\), the electron-phonon coupling \(g=140meV\), and the dielectric constant \(\varepsilon=20\).
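To illustrate the iteration itself (and only that), the following is a heavily simplified scalar caricature: the sublattice structure, the momentum dependence, and the Coulomb part of Eq.(25) are dropped, leaving a BCS/Eliashberg-like gap equation on the Matsubara grid with the phonon kernel of Eq.(22). The coupling \(\lambda\) and the temperature are hypothetical values chosen so that a non-trivial fixed point exists.

```python
import numpy as np

T, Omega, lam = 0.5, 15.0, 3.0   # hypothetical temperature and coupling
N_T = 128                        # frequency cutoff, as in the text

w = (2 * np.arange(-N_T, N_T) + 1) * np.pi * T   # fermionic frequencies
# Phonon kernel of Eq. (22) at transferred frequency w_n - w_m
K = lam * Omega**2 / ((w[:, None] - w[None, :])**2 + Omega**2)

Delta = np.full_like(w, 1.0)     # initial guess for the gap
for it in range(1000):
    rhs = T * np.sum(K * Delta[None, :] /
                     (w[None, :]**2 + Delta[None, :]**2), axis=1)
    new = 0.5 * (Delta + rhs)    # simple mixing stabilizes the iteration
    if np.max(np.abs(new - Delta)) < 1e-12:
        break
    Delta = new
print(it, Delta[N_T])            # gap at the lowest positive frequency
```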
The critical temperature as a function of pressure is presented in Fig.3. The blue points represent \(T_{c}\) when the Coulomb repulsion is neglected. It clearly shows spikes of \(T_{c}\) near the points of both topological transformations of the electronic system caused by the hydrostatic pressure. It amplifies the dependence of the density of states (green line) at these points, which can be understood from the approximate exponential BCS dependence, \(T_{c}\sim\Omega e^{-1/(D(\mu)g^{2})}\). A more realistic model includes the Coulomb repulsion, see the red points in Fig.3. The critical temperatures are much smaller, demonstrating that in the present case the repulsion plays an essential role. It turns out that it is not possible to approximate this behavior with the simplistic pseudo-potential approach of McMillan[27], successfully applied to 3D good metals.
## V Conclusion
To summarize, we have developed a theory of superconductivity in layered Weyl semimetals under hydrostatic pressure that properly takes into account the Coulomb repulsion. It is shown that in Weyl semimetals the tilt of the dispersion-relation cones, \(\kappa\), increases with pressure, so that one is able to reach type II (\(\kappa>1\)) starting from the more conventional type I Weyl semimetals (\(\kappa<1\)). It is found that upon increasing pressure the I to II transition occurs in two continuous steps. In the first step the cones of opposite chirality coalesce so that the chiral symmetry is restored, while the second transition, to a Fermi surface extending throughout the Brillouin zone, occurs at higher pressures. We show that the critical temperature is a very robust tool to study these transformations of the electronic system. The critical temperature shows spikes at the points of topological transformation, following the density of the electron states. The treatment goes beyond the simplistic pseudo-potential approach of McMillan[27]. The calculations demonstrate a significant effect of the Coulomb repulsion on the critical temperature.
Acknowledgements. This work was supported by NSC of R.O.C. Grants No.101-2112-M-009-014-MY3.
|
2302.03233 | A black hole toy model with non-local and boundary modes from
non-trivial boundary conditions | We study gauge theories between two parallel boundaries with non-trivial
boundary conditions, which serve as a toy model for black hole background with
two boundaries near the horizon and infinity, aiming for a better understanding
of the Bekenstein-Hawking entropy. The new set of boundary conditions allows
boundary modes and non-local modes that interplay between the two boundaries.
Those boundary modes and Wilson lines stretched between the two boundaries are
carefully analyzed and are confirmed as physical variables in the phase space.
Along with bulk fluctuation modes and topological modes, the partition function
and entropy of all physical modes are evaluated via Euclidean path integral. It
is shown that there are transitions between the dominance of different modes as
we vary the temperature. The boundary fluctuation modes whose entropy is
proportional to the volume dominate at high temperatures, and the boundary-area
scaled boundary modes and Wilson lines are more important at low
temperatures. At super-low temperatures, when all the fluctuation modes die
off, we see the topological modes whose entropy is the logarithm of the length
scales of the system. The boundary modes and non-local modes should have their
counterparts in a black hole system with similar boundary conditions, which
might provide important hints for black hole physics. | Peng Cheng | 2023-02-07T03:30:25Z | http://arxiv.org/abs/2302.03233v3 | # Gauge theories with non-trivial boundary conditions I: Would-be gauge degrees of freedom
###### Abstract
We study gauge theories between two parallel boundaries with non-trivial boundary conditions, which is aimed at better understanding the Bekenstein-Hawking entropy. Boundary modes due to the boundary conditions are carefully analyzed, besides which we also find Wilson lines stretched between different boundaries because of the interplay between the two boundaries. Those Wilson lines are non-local modes and are confirmed as physical variables in the phase space. The corresponding symplectic form and commutation relations are also derived in the canonical formulation. Other interesting physics, like the winding modes of those Wilson lines, are also studied. There are bulk fluctuation modes, boundary edge modes, Wilson lines, and other interesting modes in the phase space. We derive the partition function and entropy via the Euclidean path integral and demonstrate transitions between the dominance of different modes as we vary the temperature.
## I Introduction
Gauge theories with non-trivial boundary conditions are important aspects of theoretical physics and can be used to understand many interesting physical phenomena [1; 2; 3; 4; 5]. The would-be gauge degrees of freedom, which are no longer pure gauge, can become physical modes due to boundary conditions; this was suggested as an explanation of the micro-states of black holes [4; 5; 6; 7; 8; 9; 10; 11]. Moreover, most theoretical physicists tend to believe that the boundary degrees of freedom are vital for a better comprehension of the quantum effects of gravity, for example in the path integral formulation [12] and in AdS/CFT [13; 14].
As suggested in [4; 5; 6; 7; 8; 9; 10; 11], it is important to carefully study the boundary would-be gauge degrees of freedom. However, simply declaring the boundary pure gauge configurations \(\lambda_{\rm bdy}\) physical due to the boundary conditions and counting the corresponding entropy of those modes does not bring us anything. A relatively proper way to deal with those modes is to introduce a boundary current \(J^{\mu}\) coupled to the boundary gauge fields in the action [15]. Functional integration over the current in the path integral naturally introduces an effective action for the boundary would-be gauge modes, which is more or less proportional to \((\partial_{\mu}\lambda_{\rm bdy})^{2}\). Note that the above procedure is not gauge invariant, and this non-gauge invariance is the key ingredient of the story. The weak points of the above procedure are that the boundary conditions are less transparent and that the introduction of the boundary current seems artificial.
We aim to better understand the physics related to would-be gauge degrees of freedom arising from a set of nontrivial boundary conditions. In this paper, we study gauge fields living between two parallel plates with non-trivial boundary conditions and carefully separate the different contributions in the presence of those boundaries. The boundary condition we are interested in is the one where we allow residual degrees of freedom of the component of \(A_{\mu}\) perpendicular to the boundaries. The canonical formulation is carefully studied such that the dynamical modes in the phase space (or Hilbert space) and the measure in the path integral are clear. The bulk fluctuation modes, boundary modes, bulk Wilson lines stretched between the two boundaries, and other topological modes should be considered as physical degrees of freedom in the current setup. The Wilson lines stretched between the two boundaries are defined in equation (22), which captures the difference of the boundary conditions on the two boundaries. Then, we evaluate the thermal partition function and entropy via the
Euclidean path integral. As the temperature of the system varies from high temperatures to super-low temperatures, different modes dominate at different temperatures, and we can say that there are phase transitions between different modes.
The bulk fluctuation modes always dominate at high temperatures, and their entropy is proportional to the volume. The entropies of the boundary modes and Wilson lines are both proportional to the area of the boundaries, which should be useful in understanding the Bekenstein-Hawking entropy on black hole backgrounds. In the super-low temperature limit, all the fluctuation modes play a less important role, and we can see an interesting competition between the constant modes and the topological modes. Those behaviors in the super-low temperature limit are supposed to provide some hints for the extremal limit on black hole backgrounds. A careful analysis of the black hole case is left for future study. Note that we are considering a more general set of boundary conditions compared to previous studies [5; 6; 7; 8; 9; 10]. Rather than only considering boundary modes on specific boundaries, the presence of the boundary-stretched Wilson lines makes the structure of the phase space richer, which can also be important for black hole entropy and for building a connection with black hole soft hair.
The relevant research can be an important aspect of understanding the microscopic interpretation of the Bekenstein-Hawking entropy. Like in the brick wall model built by 't Hooft [16], there can be two boundaries: one near the horizon and the other at infinity. It was shown that black hole micro-states can be understood by studying the fluctuations (like a scalar field) on such backgrounds with two boundaries. The flat case we are going to study in this paper can be regarded as a toy model of the black hole case with gauge theories being considered, where effects due to the presence of two boundaries and non-local effects are the key ingredients of our model. We leave the study of the non-local effects on a black hole background for future research. Moreover, the study of bulk gauge theory with two boundaries might be helpful to the recent progress on wormhole geometry [17; 18] and the factorization puzzle [19; 20; 21; 22; 23]. We hope the non-local effects we study here can help us better understand the puzzles in AdS/CFT and quantum gravity.
The paper is organized as follows. In section II, we discuss the basic setup and the boundary conditions of interest. Section III is devoted to studying the canonical formulation of the theory, where we focus on the symplectic form and phase space. We also discuss the relationship between the canonical formulation and the path integral, which leads to the Euclidean path integral part in section IV. We carefully analyze the contributions from different modes via the path integral in section IV. In section V, we demonstrate the transition between the dominance of different modes. Section VI is the conclusion section. Appendix A provides more details of the calculations in the main text.
## II Boundary conditions
This section studies the boundary conditions for a U(1) gauge theory living between flat parallel plates. The situation we are mainly interested in is shown in figure 1, where we have a Maxwell field theory living between two parallel boundaries on the left- and right-hand side. The original Maxwell theory has the action
\[S=-\frac{1}{4e^{2}}\int_{\mathcal{M}}d^{4}x\ F^{\mu\nu}F_{\mu\nu}\,, \tag{1}\]
where \(\mathcal{M}\) is the 4-dimensional manifold, and \(e^{2}\) is a dimensionless coupling constant. The 4-dimensional box has coordinate system \(x^{\mu}=(x^{a},r)=(t,x^{2},x^{3},r)\), where \(r\) is the radius direction, and \(x^{a}\) are the directions along the boundaries. To perform the finite temperature field theory calculations, we Wick rotate the time direction \(t\rightarrow-i\tau\) such that \(\tau\) becomes the Euclidean time with a periodicity \(\beta\), i.e. the inverse temperature of the spacetime.
Figure 1: U(1) gauge theory living between two parallel boundaries. The orange surface is a Cauchy surface with constant time.

If all the gauge fields die off near the boundary, this is just blackbody radiation with two polarization degrees of freedom after gauge fixing. However, interesting phenomena start to show up when we relax the boundary conditions, meaning that there are extra boundary degrees of freedom allowed by the nontrivial boundary conditions. Let us suppose the boundaries shown in figure 1 are labeled by \(r=r_{\alpha}\), with the left and right boundaries located at \(r_{(l)}=0\) and \(r_{(r)}=L\). \(L\) is the distance between the two plates. Then, in order to have a well-defined Hilbert space and variation principle, there are some constraints on the boundary conditions. To see those constraints, let us first look at the variation of the action (1), which can be written as
\[\delta S=\frac{1}{e^{2}}\int_{\mathcal{M}}d\tau d^{3}x\ \partial_{\mu}F^{\mu\nu} \delta A_{\nu}-\frac{1}{e^{2}}\int_{\partial\mathcal{M}}d^{3}x\ n_{\mu}F^{\mu \nu}\delta A_{\nu}\,. \tag{2}\]
For the boundaries with normal vector \(n^{\mu}\partial_{\mu}=\partial_{r}\), as shown in figure 1, the on-shell variation of the action can be written as
\[\delta S=-\frac{1}{e^{2}}\int_{\partial\mathcal{M}}d^{3}x\ F^{ra}\delta A_{a}\,. \tag{3}\]
Here, to have a well-defined variational principle without adding any Gibbons-Hawking-like terms, we have two obvious choices: \(F^{ra}\big{|}_{\partial\mathcal{M}}=0\) or \(\delta A_{a}\big{|}_{\partial\mathcal{M}}=0\). Let us discuss them separately:
* For the first choice, we can have Neumann-like boundary conditions \[F^{ra}\big{|}_{\partial\mathcal{M}}=0\,.\] (4) This is the metallic Casimir boundary condition, which has been studied in detail in the context of the Casimir effect [24; 25; 26; 27; 28; 29; 30; 31].
* For the second choice, the boundary condition can be set as \[\delta A_{a}\big{|}_{\partial\mathcal{M}}=0,\qquad A_{r}\big{|}_{\partial\mathcal{M}}=f(x^{a})\,,\] (5) where \(f(x^{a})\) can have local dependence on \(x^{a}\). \(A_{a}\big{|}_{\partial\mathcal{M}}\) are fixed configurations on the boundaries, which do not need to be summed over in the path integral. Meanwhile, there is no constraint on \(A_{r}\) at the boundary, and correspondingly, \(F^{ra}\big{|}_{\partial\mathcal{M}}\) can take arbitrary values. The Hilbert space is well-defined with the fixed boundary configurations \(A_{a}\big{|}_{\partial\mathcal{M}}\), and we need to sum over the different boundary configurations of \(A_{r}\) in the path integral.
Note that for the first boundary condition (4), the physics related to boundary \(F^{ab}\big{|}_{\partial\mathcal{M}}\) can be worked out. If one insists that all the configurations that respect the boundary
condition should be added back to the phase space, we can also include boundary pure gauge configurations in the path integral. Moreover, one can find bulk on-shell configurations that have one-to-one correspondences with the boundary pure gauge configurations. Working with gauge invariant boundary conditions, like the metallic Casimir boundary condition (4), and then adding back the possible interesting gauge configurations can help us see the boundary pure gauge modes more clearly. But we are not sure why those boundary pure gauge configurations are physical in this case.
We mainly focus on the second choice in this paper. Let us see what are the configurations that respect the boundary condition (5). For the \(A_{a}\) components, the boundary configurations are fixed and we can let those fields be zero at the boundaries
\[A_{a}\Big{|}_{r=r_{\alpha}}=0\,. \tag{6}\]
Special attention is needed for the \(A_{r}\) component. \(A_{r}\big{|}_{\partial{\cal M}}\) can take different configurations on the left and right boundaries. Besides, the boundary configurations can fluctuate and have arbitrary \(x^{a}\) dependence. Thus, we need to separate the bulk and boundary configurations carefully. \(A_{r}\) can be separated as follows
\[A_{r}(x^{\mu})=\hat{A}_{r}(x^{\mu})+\frac{\phi(x^{a})}{L}\,, \tag{7}\]
where \(\phi(x^{a})\) are the configurations that make \(\hat{A}_{r}\big{|}_{r=0}=0\), i.e. \(A_{r}(r,x^{a})\Big{|}_{r=0}=\phi(x^{a})/L\). The factor of \(L\) is added mainly for dimensional reasons. To write the fields in a more uniform way, we also use \(\hat{A}_{a}=A_{a}\) to denote the bulk configurations that vanish on the boundaries.
Note that with the above separation, \(\hat{A}_{r}\) does not vanish at the boundary \(r=L\), and we can further decompose \(\hat{A}_{r}\) into two parts. The part that satisfies \(\int_{0}^{L}dr\ \hat{A}_{r}=0\) will not be our main concern here, while the part with \(\int_{0}^{L}dr\ \hat{A}_{r}\neq 0\) will be an important ingredient in our later calculations. One could also decompose \(\hat{A}_{r}\) into a part that vanishes on both boundaries and a part that captures the difference between the two boundaries. Integrating the part that vanishes on both boundaries from one boundary to the other gives zero, and \(\int_{0}^{L}dr\ \hat{A}_{r}\) more or less captures the difference of \(A_{r}\) between the two boundaries. The reason why we do not choose this decomposition is that, from the bulk point of view, we need bulk on-shell configurations corresponding to those boundary degrees of freedom. The corresponding bulk configurations need to be on-shell such that we have a good separation of the different modes in the action, and such bulk solutions can be hard to find. What is more, as we will see in the next section, \(\int_{0}^{L}dr\ \hat{A}_{r}\) has a good physical interpretation.
We will eventually functionally integrate all the physical degrees of freedom in the Euclidean path integral to get the partition function of the system with the given boundary conditions. However, before we actually do the calculation, let us first analyze the theory's canonical phase space and see which degrees of freedom are physical. Only the physical degrees of freedom are supposed to be integrated over in the path integral. This is what we are going to do in the next section.
## III Canonical formulation
We are going to work out the phase space and the equipped symplectic form in canonical formulation for the theory with non-trivial boundary conditions (5). The canonical formulation is always related to a Cauchy surface where the Hilbert space is defined. For the flat parallel plates case, a Cauchy surface is shown in Fig. 2. The canonical formulation can help us to better understand what are the physical degrees of freedom in the phase space. The phase space can be represented by \(\Gamma\), which is an even-dimensional manifold with coordinates \(x^{I}=\{q^{i},p_{j}\}\), where \(q^{i}\) and \(p_{j}\) are the canonical coordinates and momenta. For field theories, we have infinite-dimensional phase spaces \(\Gamma\). After canonical quantization, the phase space can be turned into the Hilbert space of the theory.
For U(1) gauge theory with trivial boundary conditions, we can decompose the gauge fields into temporal and spatial directions and rewrite the gauge fields \(A_{\mu}\) as
\[A_{\mu}=(-V,A_{i}),\ \ \ \ A^{\mu}=(V,A^{i})\,, \tag{8}\]
we have the Lagrangian density written in terms of \(V\) and \(A_{i}\) as
\[\mathcal{L}=\frac{1}{2e^{2}}(\dot{A}^{i}+\partial^{i}V)(\dot{A}_{i}+\partial_ {i}V)-\frac{1}{2e^{2}}F^{ij}\partial_{i}A_{j}\,. \tag{9}\]
The corresponding conjugate momenta of fields \(V\) and \(A_{i}\) can be written as
\[\Pi_{V}=\frac{\partial\mathcal{L}}{\partial\dot{V}}\ \ \ \ \Pi^{i}=\frac{ \partial\mathcal{L}}{\partial\dot{A}_{i}}=\frac{1}{e^{2}}(\dot{A}^{i}+\partial ^{i}V)\,, \tag{10}\]
where we have denoted \(\Pi_{V}\) as the momentum for \(V\) and \(\Pi^{i}\) as the momenta for \(A_{i}\). So, for trivial boundary conditions, the phase space we start with is
\[\Gamma_{0}=\{V,A_{i},\Pi_{V},\Pi^{i}\}\,. \tag{11}\]
Gauge fixing conditions help us further get rid of the unphysical degrees of freedom in the phase space and we end up with the standard two polarizations of photon in Maxwell's theory. Note that it is natural to use temporal gauge \(A_{t}=0\) in the canonical formulation.
However, there can be boundary subtleties in the phase space when we have nontrivial boundary conditions, like (5). What are those boundary subtleties? This can be answered by turning to the symplectic form of the theory and working out the Poisson brackets between the fields, which makes those boundary subtleties more explicit. Moreover, an explicit phase space and symplectic form help us figure out the canonical variables and momenta, which should be integrated over in the path integral. The phase space is equipped with a closed, non-degenerate symplectic two-form \(\Omega\), which is defined as
\[\Omega=\frac{1}{2}\Omega_{IJ}\,\mathrm{d}x^{I}\wedge\,\mathrm{d}x^{J}\,. \tag{12}\]
\(\Omega_{IJ}\) is invertible, and the inverse \(\Omega^{IJ}\) is defined by \(\Omega^{IK}\Omega_{KJ}=\delta^{I}_{J}\). Now equipped with the symplectic form, the classical Poisson bracket between functionals \(F\) and \(G\) can be defined as
\[\{F,G\}=\Omega^{IJ}\frac{\delta F}{\delta x^{I}}\frac{\delta G}{\delta x^{J} }\,. \tag{13}\]
Quantum commutators can be obtained by the canonical quantization procedure.
The symplectic form of a field theory can be directly worked out on a chosen Cauchy surface [41]. Let us consider a field theory with a Lagrange density \(\mathcal{L}[\Psi]\), where \(\Psi\) denotes an arbitrary collection of fields. Taking the variation of \(\mathcal{L}\), we have
\[\delta\mathcal{L}=E\cdot\delta\Psi+\,\mathrm{d}\Theta\,. \tag{14}\]
The equation of motion \(E=0\) kills the first term in (14). The (pre)-symplectic potential \(\Theta[\Psi,\delta\Psi]\) is a \(D-1\) form and can be integrated over the chosen \((D-1)\)-dimensional Cauchy surface. The symplectic current \(\omega\) can be defined as
\[\omega[\Psi,\delta_{1}\Psi,\delta_{2}\Psi]=\delta_{1}\Theta[\Psi,\delta_{2} \Psi]-\delta_{2}\Theta[\Psi,\delta_{1}\Psi]\,, \tag{15}\]
where \(\delta_{1}\) and \(\delta_{2}\) can be regarded as variations with respect to two different transformations. Integrating the symplectic current \(\omega\) over the Cauchy surface \(\Sigma\), we finally get the symplectic form \(\Omega\) written as
\[\Omega[\Psi,\delta_{1}\Psi,\delta_{2}\Psi]=\int_{\Sigma}\omega[\Psi,\delta_{1} \Psi,\delta_{2}\Psi]\,. \tag{16}\]
Note that the choice of the \((D-1)\)-form \(\omega[\Psi,\delta_{1}\Psi,\delta_{2}\Psi]\) also depends on the Cauchy surface. Specifying to the temporal surface \(\Sigma_{t}\) with normal vector \(n^{t}\), we can work out each component of \(\omega\).
### Symplectic form and phase space
Now, let us be specific to the U(1) gauge theory with non-trivial boundary conditions at hand. The variation of the action is similar to (2). The difference is \(\Theta\), as well as the symplectic form, is defined on Cauchy surface \(\Sigma\), as shown in Fig. 2. Thus, the symplectic form \(\Omega_{\Sigma}\) can be worked out as
\[\Omega_{\Sigma}=-\frac{1}{e^{2}}\int_{\Sigma}d^{3}x\ n^{\mu}\delta F_{\mu\nu} \wedge\delta A^{\nu}\,. \tag{17}\]
Specifying on the chosen Cauchy surface \(\Sigma_{t}\) with normal vector \(n^{\mu}\partial_{\mu}=\partial_{t}\), the above expression can be written in components as
\[\Omega_{\Sigma_{t}}=-\frac{1}{e^{2}}\int_{\Sigma_{t}}d^{3}x\ \delta F_{ti} \wedge\delta A^{i}\,. \tag{18}\]
The symplectic form is essential in defining the Hamiltonian dynamics; only the fields equipped with a nontrivial symplectic form with their conjugate momentum can be regarded as dynamical variables in the phase space. We have decomposed the gauge fields into several parts, as discussed at the end of section II. We are now going to identify the symplectic partners of all the configurations in the symplectic form.
Now one can put the variation of fields into the symplectic form (18), and rewrite it as
\[\Omega_{\Sigma_{t}}=-\frac{1}{e^{2}}\int_{\Sigma_{t}}d^{3}x\ \left[\delta(\hat{F}_{ tr}+\frac{\dot{\phi}}{L})\wedge\delta(\hat{A}^{r}+\frac{\phi}{L})+\delta\hat{F}_{ t2}\wedge\delta\hat{A}^{2}+\delta\hat{F}_{t3}\wedge\delta\hat{A}^{3}\right]\,. \tag{19}\]
We can separate the \(\hat{A}_{i}\) part with other parts, and the symplectic form reads as
\[\Omega_{\Sigma_{t}}=-\frac{1}{e^{2}}\int_{\Sigma_{t}}d^{3}x\ \delta\hat{F}_{ti}\wedge\delta\hat{A}^{i}-\frac{1}{e^{2}}\int_{\Sigma_{t}}d^{2} xdr\ \left[\frac{\delta\dot{\phi}}{L}\wedge\delta\hat{A}^{r}+\frac{\delta\dot{\phi}}{L} \wedge\frac{\delta\phi}{L}+\delta\hat{F}_{tr}\wedge\frac{\delta\phi}{L}\right]\,. \tag{20}\]
The first term in (20) gives us the usual Poisson bracket of Maxwell's theory. Integrating over \(r\) in the second term gives out
\[\Omega_{\rm bdy}=-\frac{1}{L\cdot e^{2}}\int d^{2}x\ \left[\delta\dot{ \phi}\wedge\delta(\int_{0}^{L}dr\hat{A}^{r})+\delta\dot{\phi}\wedge\delta\phi +\delta(\int_{0}^{L}dr\dot{\hat{A}}_{r})\wedge\delta\phi\right]\,, \tag{21}\]
where we have used the boundary condition \(\delta\hat{A}_{t}\big{|}_{\partial\Sigma}=0\). \(\Omega_{\rm bdy}\) can be regarded as the symplectic form of the dynamical variables due to the presence of boundary condition (5).
The above symplectic form tells us what are the extra physical degrees of freedom in the system, besides phase space (11). The non-local part in (21) can be denoted as a separate variable. Inspired by that, we define the quantity \(W\) as
\[W=i\int_{0}^{L}dr\ \hat{A}_{r}\,, \tag{22}\]
which will be called _Wilson lines1_
Footnote 1: The actual Wilson lines stretched between the two boundaries can be written as
\[{\cal W}\propto{\cal P}\exp\left[i(\int_{0}^{L}dr\ \hat{A}_{r})+i\phi \right]\,, \tag{23}\]
with \({\cal P}\) denoting the path ordering. \(\phi\) is the boundary configuration of \(A_{r}\). The definition of the Wilson line can be exactly matched if we choose a different separation of degrees of freedom as discussed at the end of section II. Note that field \(W\) captures the difference of \(A_{r}\) on the two boundaries. From now on, we will extract \(W\) modes out of \(\hat{A}_{r}\) such that \(\hat{A}_{r}\) equals zero at both boundaries.
The boundary symplectic form, defined on a codimension-2 surface, can be written as
\[\Omega_{\rm bdy}=-\frac{1}{e^{2}L}\int d^{2}x\ \left[-i\delta\dot{ \phi}\wedge\delta W-i\delta\dot{W}\wedge\delta\phi+\delta\dot{\phi}\wedge \delta\phi\right]\,. \tag{24}\]
The cross term between \(W\) and \(\phi\) can be canceled by redefining the fields. For example, shifting \(\phi\rightarrow\phi+iW\), the above symplectic form can be written as
\[\Omega_{\rm bdy}=-\frac{1}{e^{2}L}\int d^{2}x\ \left[\delta\dot{\phi} \wedge\delta\phi+\delta\dot{W}\wedge\delta W\right]\,. \tag{25}\]
With the above redefinition of fields, the overall symplectic form (20) can be expressed as
\[\Omega_{\Sigma_{t}} =-\tfrac{1}{e^{2}}\int_{\Sigma_{t}}d^{3}x\ \delta\hat{F}_{ti}\wedge\delta\hat{A}^{i}-\tfrac{1}{e^{2}L}\int d^{2}x\ \left[\delta\dot{\phi}\wedge\delta\phi+\delta\dot{W}\wedge\delta W\right] \tag{26}\] \[=-\tfrac{1}{e^{2}}\int_{\Sigma_{t}}d^{3}x\ \delta\hat{\Pi}_{i} \wedge\delta\hat{A}^{i}-\tfrac{1}{e^{2}L}\int d^{2}x\ \left[\delta\Pi_{W}\wedge\delta W+\delta\Pi_{\phi} \wedge\delta\phi\right]\,,\]
where \(\hat{\Pi}^{i}\) denotes the conjugate momentum of \(\hat{A}_{i}\). Note that, besides the conjugate momenta defined in (10), we further define the conjugate momenta of \(W\) and \(\phi\) as
\[\Pi_{W}=\dot{W},\ \ \ \ \Pi_{\phi}=\dot{\phi}\,. \tag{27}\]
The Poisson brackets can be derived as
\[\frac{1}{e^{2}}[ \hat{\Pi}^{i}(r,x^{2},x^{3}),\hat{A}_{j}(r^{\prime},x^{\prime 2},x^ {\prime 3})\ ]=i\delta^{i}_{j}\ \delta(r-r^{\prime})\ \delta^{2}(x-x^{\prime})\,, \tag{28}\] \[\frac{1}{e^{2}L}[ \Pi_{W}(x^{2},x^{3}),W(x^{\prime 2},x^{\prime 3})\ ]=i\delta^{2}(x-x^{\prime})\,,\] (29) \[\frac{1}{e^{2}L}[ \Pi_{\phi}(x^{2},x^{3}),\phi(x^{\prime 2},x^{\prime 3})\ ]=i\delta^{2}(x-x^{\prime})\,. \tag{30}\]
As discussed at the beginning of this section, we need to add the degrees of freedom related to the boundary subtleties back to the phase space and integrate over those configurations in the path integral. Those boundary subtleties are the degrees of freedom related to the boundary configurations of \(A_{r}\). Through the detailed symplectic-form analysis in this subsection, we have found that the zero modes \(\phi(x^{a})\), which carry zero longitudinal momentum along the \(r\) direction, and the Wilson lines \(W\) stretched between the two boundaries have non-trivial symplectic partners and Poisson brackets. Those are the degrees of freedom that need to be added back. So the actual phase space should be
\[\Gamma=\left\{\ \hat{\Pi}^{i},\hat{A}_{i},\Pi_{\phi},\phi,\Pi_{W},W\ \right\}\,. \tag{31}\]
Further gauge fixing conditions would help us get rid of the bulk gauge redundancy of \(\{\hat{\Pi}^{i},\hat{A}_{i}\}\) such that we are only left with two bulk polarizations. This is not our main concern in this paper. The new ingredients are \(\phi\) and \(W\), where \(\phi\) is the zero mode of \(A_{r}\) along the \(r\) direction, and \(W\) is the Wilson line stretched between the two boundaries.
### Canonical formula and partition function
As said at the beginning, the modes that appear in the canonical formula should be included in the path integral method of calculating the partition function. What exactly is
the relationship between the canonical formula and the Euclidean path integral? In this subsection, we briefly review the origin of the Euclidean path integral, and use a simple scalar field example to demonstrate the relationship between the canonical formula and the Euclidean path integral. The Lagrangian density of a 4-dimensional scalar field \(\Psi\) can be written as
\[\mathcal{L}=-\frac{1}{2}\partial^{\mu}\Psi\partial_{\mu}\Psi-\frac{1}{2}\mu^{2} \Psi^{2}-V(\Psi) \tag{32}\]
The transition amplitude between two states, \(\left|\Psi_{1}\right\rangle\) and \(\left|\Psi_{2}\right\rangle\), can be represented by path integral as
\[\left\langle\Psi_{2}\right|e^{-iHt_{0}}\left|\Psi_{1}\right\rangle =\int\mathcal{D}\Pi\int_{\Psi_{1}}^{\Psi_{2}}\mathcal{D}\Psi\times \exp\left[i\int_{0}^{t_{0}}dt\int d^{3}x(\Pi\dot{\Psi}-\mathcal{H}[\Pi,\Psi]) \right]\,, \tag{33}\]
where \(\Pi\) is the conjugate momentum of the dynamical variable \(\Psi\). \(H\) is the Hamiltonian of the theory, and \(\mathcal{H}\) is the Hamiltonian density. The partition function can be written as a path integral over \(\Psi(0)=\Psi(\beta)\), with the Euclidean time \(\tau=it\). Writing the path integral more explicitly, we have
\[Z=\text{tr}\ e^{-\beta H}=\int\mathcal{D}\Pi\int\mathcal{D}\Psi \times\exp\left[\int_{0}^{\beta}d\tau\int d^{3}x(i\Pi\dot{\Psi}-\mathcal{H}[ \Pi,\Psi])\right]\,. \tag{34}\]
Let us now discretize the path integral by defining \(\Delta\tau=\beta/N\) with a large number \(N\). Then the discretized partition function can be written as
\[Z =\mathcal{N}\cdot\prod_{j}^{N}\int d\Pi_{j}\int d\Psi_{j}\times\exp\left[\Delta\tau\int d^{3}x\times\left(i\Pi_{j}\frac{\Psi_{j+1}-\Psi_{j}}{\Delta\tau}-\frac{1}{2}\Pi_{j}^{2}-\frac{1}{2}(\nabla\Psi_{j})^{2}-\frac{1}{2}\mu^{2}\Psi_{j}^{2}-V\right)\right]\] \[=\mathcal{N}^{\prime}\cdot\prod_{j}^{N}\int d\Psi_{j}\times\exp\left[\Delta\tau\int d^{3}x\times\left(-\frac{1}{2}\left(\frac{\Psi_{j+1}-\Psi_{j}}{\Delta\tau}\right)^{2}-\frac{1}{2}(\nabla\Psi_{j})^{2}-\frac{1}{2}\mu^{2}\Psi_{j}^{2}-V\right)\right]\] \[=\mathcal{N}^{\prime}\cdot\int\mathcal{D}\Psi\times\exp\left[-\int_{0}^{\beta}d\tau\int d^{3}x\left[\frac{1}{2}\dot{\Psi}^{2}+\frac{1}{2}(\nabla\Psi)^{2}+\frac{1}{2}\mu^{2}\Psi^{2}+V\right]\right]\] \[=\mathcal{N}^{\prime}\cdot\int\mathcal{D}\Psi\times e^{-S_{E}}\,, \tag{35}\]
where \(\mathcal{N}\) and \(\mathcal{N}^{\prime}\) are constants. The integrals over the conjugate momenta \(\Pi_{j}\) in the first line are Gaussian integrals, and thus can be easily worked out.
Now we have shown the relation between Hamiltonian formula tr \(e^{-\beta H}\) and Euclidean path integral \(\int\mathcal{D}\Psi\ e^{-S_{E}}\). In the next section, we are going to include all the physical degrees of freedom into the Euclidean path integral to evaluate their contribution to entropy. Note that the canonical analysis gives us some hints about what degrees of freedom are physical.
Although we can use the canonical results as input and use the relation between the canonical formula and the Euclidean path integral to work out the partition function, we will study the path integral carefully. We may use different gauge fixing conditions if they are more convenient.
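As a concrete sanity check of this relation (ours, not from the paper), one can compare the two sides for the simplest possible "field", a single harmonic oscillator of frequency \(\omega\): the canonical trace gives \(\mathrm{tr}\,e^{-\beta H}=1/(2\sinh(\beta\omega/2))\), while the discretized Euclidean path integral, with measure \(\prod_{j}dx_{j}/\sqrt{2\pi\Delta\tau}\), reduces to a Gaussian determinant.

```python
import numpy as np

beta, omega, N = 2.0, 1.3, 2000
dtau = beta / N

# Cyclic (periodic in Euclidean time) quadratic form of the discretized
# action: S = x^T B x / (2 dtau), with B tridiagonal and periodic.
B = np.zeros((N, N))
i = np.arange(N)
B[i, i] = 2.0 + (dtau * omega)**2
B[i, (i + 1) % N] = -1.0
B[i, (i - 1) % N] = -1.0

sign, logdet = np.linalg.slogdet(B)
Z_path = np.exp(-0.5 * logdet)                        # det(B)^(-1/2)
Z_canon = 1.0 / (2.0 * np.sinh(beta * omega / 2.0))   # sum_k e^{-beta w(k+1/2)}
print(Z_path, Z_canon)    # converge to each other as N -> infinity
```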
## IV Euclidean path integral
After the canonical analysis in the previous section, it is clear that the dynamical modes are the bulk fluctuation modes \(\hat{A}_{\mu}\), the zero modes \(\phi\) along the \(r\) direction, and the Wilson lines \(W\) stretched between the two boundaries. Those fields are the ingredients that need to be included in the Euclidean path integral. Note that there should only be two bulk physical polarizations for the fields \(\hat{A}_{\mu}\) after gauge fixing. Special caution is needed when handling the gauge fixing, and we will impose the bulk gauge fixing conditions only after the physics is clear, to avoid gauging away too much or too little.
As discussed in the previous subsection, the partition function can be written as a Euclidean path integral
\[Z=\int\mathcal{D}A_{\mu}\ e^{-S_{E}}\,, \tag{36}\]
with the Euclidean action \(S_{E}\) written as
\[S_{E}=\frac{1}{4e^{2}}\int_{\mathcal{M}}d\tau d^{3}x\ F^{\mu\nu}F_{\mu\nu}\,. \tag{37}\]
The above is the original formula, and we are going to massage it according to the hints from the canonical analysis. We can separate the \(x^{a}\) directions from \(r\) in the action
\[S_{E}=\frac{1}{4e^{2}}\int_{\mathcal{M}}d\tau d^{3}x\ F^{ab}F_{ab}+\frac{1}{2 e^{2}}\int_{\mathcal{M}}d\tau d^{3}x\ F^{ra}F_{ra}\,. \tag{38}\]
Again, we are going to separate the gauge fields into different parts
\[A_{a}=\hat{A}_{a}\,,\qquad A_{r}=\hat{A}_{r}+\frac{\phi(x^{a})}{L}\,. \tag{39}\]
The Euclidean action can be written in terms of those modes as
\[S_{E}=\frac{1}{4e^{2}}\int_{\mathcal{M}}d\tau d^{3}x\ \hat{F}^{\mu\nu}\hat{F}_{ \mu\nu}+\frac{1}{2e^{2}}\int_{\mathcal{M}}d\tau d^{3}x\ \left[-\frac{2}{L}\ \hat{F}^{ra}\partial_{a}\phi+\frac{ \partial^{a}\phi\partial_{a}\phi}{L^{2}}\right]\,. \tag{40}\]
The second part of the above expression contains the interesting ingredients of our story and can be denoted as
\[S_{\rm bdy}=\frac{1}{2e^{2}}\int_{\mathcal{M}}d\tau d^{2}xdr\ \left[\frac{ \partial^{a}\phi\partial_{a}\phi}{L^{2}}+\frac{2}{L}\ \hat{F}^{ar}\partial_{a}\phi\right]\,. \tag{41}\]
\(\phi(x^{a})\) is not a function of the radial direction \(r\), so the integral over \(r\) just passes through \(\phi\) and gives an extra factor of \(L\) in the first term. Noticing that \(\hat{A}_{a}\) vanish at the boundaries \(\partial\mathcal{M}\), integrating over \(r\) in the above effective action gives
\[S_{\rm bdy}=\frac{1}{2e^{2}L}\int d\tau d^{2}x\ \left[\partial^{a}\phi \partial_{a}\phi-2i\ \partial^{a}(i\int dr\hat{A}^{r})\partial_{a}\phi\right]\,. \tag{42}\]
Denoting \(W(x^{a})=i\int dr\hat{A}_{r}\), the original action can be rewritten as
\[S_{E}=\frac{1}{4e^{2}}\int_{\mathcal{M}}d\tau d^{3}x\ \hat{F}^{\mu\nu}\hat{F}_{ \mu\nu}+\frac{1}{2e^{2}L}\int d\tau d^{2}x\ \left[\partial^{a}\phi\partial_{a}\phi-2i\ \partial^{a}W\partial_{a}\phi \right]\,. \tag{43}\]
We have several remarks regarding the Euclidean path integral and the effective action (43):
* The original path integral (36) is a path integral over \(A_{\mu}\), while we have different variables in the action (43). There is no difference between \(\hat{A}_{a}\) and \(A_{a}\). The integral over the \(A_{r}\) component can be divided into several pieces. The zero modes \(\phi\) and the Wilson lines \(W\) are the parts capturing the boundary configurations of \(A_{r}\). The bulk modes \(\hat{A}_{r}\) that satisfy the conditions \(\hat{A}_{r}\big{|}_{r=0}=0\) and \(\int_{0}^{L}dr\hat{A}_{r}=0\) should be regarded as the bulk contribution. We can always gauge fix \(\hat{A}_{r}\) to zero, which does not kill any important physics.
* One of the main purposes of the canonical analysis in the previous section is to make clear the measure of different modes in the path integral. From the symplectic form (26) and Poisson brackets (28-30), the measure can be easily determined.
* Putting the effective action (43) in the path integral (36), we can first work out the Gaussian integral over \(\phi\), which gives \(\det(\partial^{2})^{-1/2}\) in the partition function. The same procedure also produces an effective action for \(W\), which is the action for a 3-dimensional massless scalar field. The determinant \(\det(\partial^{2})^{-1/2}\) obtained from integrating over \(\phi\) can be rewritten as a path integral. With all the above arguments, the effective action can be expressed as \[S_{E}=\frac{1}{4e^{2}}\int_{\mathcal{M}}d\tau d^{3}x\ \hat{F}^{\mu\nu}\hat{F}_{\mu\nu}+\frac{1}{2e^{2}L}\int d\tau d^{2}x\ \left[\partial^{a}\phi\partial_{a}\phi+\partial^{a}W\partial_{a}W\right]\,.\] (44) One can also check that the path integrals with actions (43) and (44) give the same result, so we will use (44) as the effective action when evaluating the partition function later on. In direct analogy with the canonical analysis, the cross term between the fields
\(W\) and \(\phi\) in (43) can also be canceled by shifting \(\phi\to\phi+iW\), which would give out the same result as (44).
* We have been very careful with the gauge fixing so as not to gauge away any interesting physics. For example, we can gauge fix part of the bulk fields \(\hat{A}_{\mu}\) later, but we always need to make sure \(\phi\) and \(W\) are not gauged away. There might be other interesting modes that need to be added back. Let us suppose we are dealing with a compact U(1) gauge theory. In the Euclidean background, the map between the background time circle \(\tau\sim\tau+\beta\) and the compact gauge parameter allows us to include some topological modes for the component \(A_{\tau}\). The fundamental group of \(S_{1}\) is \(\mathbf{Z}\). So the modes can be expressed as \[A_{\tau}\ni\frac{2\pi n}{\beta}\,,\ \ \ \ \ n\in\mathbf{Z}\,.\] (45) Those modes correspond to large gauge transformations and might be physical. They respect the boundary condition (5), but not (6). So we do not include the modes (45) in the current calculation because of the more strict boundary condition (6).
Now, let us evaluate the Euclidean path integral. For the first term, corresponding to a Maxwell theory with vanishing boundary conditions, we denote the partition function as \(Z_{\hat{A}}\). The fields \(\phi\) and \(W\) are 3-dimensional scalar fields living on a surface with coordinates \(x^{a}\), which will be regarded as the boundary contribution. We can separate the path integral into bulk and boundary parts
\[Z=Z_{\hat{A}}\times\int\mathcal{D}\phi\ \mathcal{D}W\ \exp\left[-\frac{1}{2e^{2} L}\int d\tau d^{2}x(\partial^{a}\phi\partial_{a}\phi+\partial^{a}W\partial_{a}W) \right]\,. \tag{46}\]
The main task left is to evaluate the bulk and boundary partition functions. We are going to discuss those different modes for the remainder of this section, and evaluate the partition function and demonstrate the possible phase transitions in the next section.
### Bulk fluctuation modes
First of all, let us evaluate the partition function for bulk fluctuation modes \(Z_{\hat{A}}\). We will use the Faddeev-Popov method [42] to evaluate the partition function, by inserting the
following identity
\[1=\int\mathcal{D}\lambda\det\left(\frac{\partial G}{\partial\lambda}\right) \delta(G-0)\,, \tag{47}\]
with gauge fixing condition \(G=\partial_{\mu}\hat{A}^{\mu}-c(x)\). Following the standard gauge fixing procedure, in Feynman gauge, we eventually get
\[Z_{\hat{A}}=\int\mathcal{D}\hat{A}_{\mu}\mathcal{D}C\mathcal{D}\bar{C}\ e^{-\frac{1}{2e^{2}}\int_{\mathcal{M}}d\tau d ^{3}x\ [\hat{A}^{\mu}(\partial^{2})\hat{A}_{\mu}+\bar{C}(\partial^{2})C]}=\det( \partial^{2})^{-1}\,, \tag{48}\]
where \(C\) and \(\bar{C}\) are ghost fields. After gauge fixing, the final result is the partition function for two bosonic polarizations
\[Z_{\hat{A}}=\det(\partial^{2})^{-1/2}\times\det(\partial^{2})^{-1/2} \tag{49}\]
If we define the energy and momenta of the gauge fields as \((\omega,p_{r},p_{2},p_{3})\), the logarithm of \(Z_{\hat{A}}\) can be calculated by working out the determinant
\[\ln Z_{\hat{A}}=-\sum_{\omega}\sum_{p_{r},p_{2},p_{3}}\ln\left[\beta^{2}( \omega^{2}+p_{r}^{2}+p_{2}^{2}+p_{3}^{2})\right]\,. \tag{50}\]
One can further evaluate the partition function by taking different limits of the length scales in the theory. We will evaluate the logarithm of the partition function in section V when we discuss the different temperature limits.
Note that we can also use different gauge fixing conditions, like the axial gauge or temporal gauge. The gauge fixing condition does not make much difference for the fluctuation modes, as long as we keep two physical polarizations in the final result.
### Fluctuation modes of \(\phi\) and \(W\)
Let us evaluate the partition function of fields \(\phi\) and \(W\) here. The action for \(\phi\) and \(W\) can be written as
\[S_{\phi,W}=\frac{1}{2e^{2}L}\int d\tau d^{2}x\ (\partial_{a}\phi\partial^{a} \phi+\partial_{a}W\partial^{a}W)\,, \tag{51}\]
which is the action for two massless scalar fields living on the boundary. Denoting the area of the boundary as "Area", the fluctuation modes of field \(\phi\) and \(W\) can be expanded as
\[\phi(x^{a}) =\sqrt{\frac{2e^{2}\beta L}{\text{Area}}}\sum_{\omega,p_{2},p_{3} }\tilde{\phi}(\omega,p_{2},p_{3})e^{i(\omega\tau+p_{2}x^{2}+p_{3}x^{3})}\,, \tag{52}\] \[W(x^{a}) =\sqrt{\frac{2e^{2}\beta L}{\text{Area}}}\sum_{\omega,p_{2},p_{3} }\tilde{W}(\omega,p_{2},p_{3})e^{i(\omega\tau+p_{2}x^{2}+p_{3}x^{3})}\,. \tag{53}\]
The coefficients are chosen such that the \(\tilde{\phi}\)s and \(\tilde{W}\)s are dimensionless, and thus the integrals over \(d\tilde{\phi}\) in the path integral give dimensionless quantities. With this mode expansion, the corresponding partition function can be expressed as
\[Z_{F}=\prod_{\omega,p_{2},p_{3}}[\beta^{2}(\omega^{2}+p_{2}^{2}+p_{3}^{2})]^{-1}\,, \tag{54}\]
the logarithm of which is
\[\ln Z_{F}=-\sum_{\omega,p_{2},p_{3}}\ln[\beta^{2}(\omega^{2}+p_{2}^{2}+p_{3}^{2 })]\,. \tag{55}\]
\(Z_{F}\) is the partition function for two 3-dimensional massless scalar fields. We can then calculate the free energy and entropy of those modes in different temperature limits and compare them with the bulk fluctuation modes, which will be the task of the next section.
### Other interesting modes
There are some other interesting topological modes of \(W\). The Wilson lines stretched between the two boundaries can be denoted as
\[\mathcal{W}_{\gamma}=\mathcal{P}\exp[i\int_{0}^{L}dr\hat{A}_{r}]\,. \tag{56}\]
Because it always appears inside an exponential, \(\int_{0}^{L}dr\hat{A}_{r}\) is compact with periodicity \(2\pi\). The requirement that the Wilson lines are single-valued allows us to include the elements of the fundamental group of \(S^{1}\). In the Euclidean background, the background time circle \(\tau\sim\tau+\beta\) allows the field \(W\) to wind around the \(S^{1}\) circle, giving winding modes \(2\pi n\tau/\beta\).
Now the field \(W\) has compact constant modes and novel winding modes, which are interesting to deal with. The constant-mode contribution of \(W\) can always be written as
\[Z_{0}=\int_{0}^{2\pi\sqrt{\frac{\text{Area}}{e^{2}L\beta}}}\,d\tilde{W}_{0}=2\pi\sqrt{\frac{1}{e^{2}L}}\times\sqrt{\frac{\text{Area}}{\beta}}\,. \tag{57}\]
The winding mode contribution can be written as
\[Z_{w}=\sum_{n}e^{-\frac{\text{Area}}{2e^{2}L\beta}(2\pi n)^{2}}\,. \tag{58}\]
\(Z_{w}\) equals 1 when the coefficient \(\frac{\text{Area}}{2e^{2}\beta L}\) is very large, since the mode with \(n=0\) dominates. When the coefficient is very small, we can turn the sum into a Gaussian integral. The
overall partition function of \(\phi\) and \(W\) is the product of constant modes \(Z_{0}\), winding modes \(Z_{w}\), and fluctuation modes \(Z_{F}\) discussed previously.
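The crossover between the two regimes is easy to see numerically; the following sketch (ours) evaluates the winding sum of Eq.(58) for a large and a small value of the dimensionless coefficient \(c=\text{Area}/(2e^{2}L\beta)\), and compares the small-\(c\) case with the Gaussian-integral estimate \(\int dn\,e^{-c(2\pi n)^{2}}=1/(2\sqrt{\pi c})\).

```python
import numpy as np

def Z_w(c, n_max=1000):
    """Winding-mode sum of Eq. (58) with coefficient c = Area/(2 e^2 L beta)."""
    n = np.arange(-n_max, n_max + 1)
    return np.sum(np.exp(-c * (2 * np.pi * n)**2))

for c in (10.0, 1e-3):
    print(f"c = {c:6g}: Z_w = {Z_w(c):.4f}, "
          f"Gaussian estimate = {1 / (2 * np.sqrt(np.pi * c)):.4f}")
# Large c: Z_w -> 1 (only n = 0 survives); small c: Z_w matches the integral.
```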
As a summary of the different modes and corresponding partition functions we have got, we can express the overall partition function as
\[\ln Z=\ln Z_{\hat{A}}+\ln Z_{F}+\ln Z_{0}+\ln Z_{w}\,. \tag{59}\]
We have two bulk polarizations in \(Z_{\hat{A}}\), two collections of fluctuation modes from \(W\) and \(\phi\) in \(\ln Z_{F}\), constant modes \(Z_{0}\) and winding modes \(Z_{w}\). The logarithm of the overall partition function can be expressed as
\[\ln Z = -2\times\frac{1}{2}\sum_{\omega,p_{r},p_{2},p_{3}}\ln[\beta^{2}(\omega^{2}+p_{r}^{2}+p_{2}^{2}+p_{3}^{2})]-2\times\frac{1}{2}\sum_{\omega,p_{2},p_{3}}\ln[\beta^{2}(\omega^{2}+p_{2}^{2}+p_{3}^{2})] \tag{60}\] \[+\frac{1}{2}\ln[\frac{\text{Area}}{\beta L}]-\ln e+\ln\sum_{n}e^{-\frac{\text{Area}}{2e^{2}L\beta}(2\pi n)^{2}}\,.\]
We will directly evaluate different parts of (59) in the next section.
## V Transition between different phases
In this section, we evaluate and compare the contributions to the partition function (59) in different temperature limits. The partition function contains contributions from the bulk fluctuation modes \(\hat{A}_{\mu}\), the zero modes \(\phi\), and the Wilson lines \(W\). The detailed calculations of the partition function are relegated to Appendix A, so that we do not drown in tedious details. The main content of this section is devoted to the discussion of the different behaviors and the phase transitions.
There are three different length scales in the theory: the inverse temperature \(\beta\), the distance between the two boundaries \(L\), and the length scale of the boundary \(\sqrt{\text{Area}}\). The coupling constant \(e^{2}\) is dimensionless. We are going to compare the inverse temperature \(\beta\) with the other length scales in the theory, and we refer to the resulting regimes as different temperature limits. In the different temperature limits, we can study different behaviors of the partition function. We are mainly interested in the following three temperature limits.
* The so-called _high-temperature limit_ is the limit when we have \(\beta\ll L\ll\sqrt{\text{Area}}\). \(\beta\) is the smallest length scale in the system. In this temperature limit, the bulk fluctuation modes \(\hat{A}_{\mu}\) should be the most important contribution.
* The second temperature limit we are interested in is the _low-temperature limit_, where we have \(L\ll\beta\ll\sqrt{\text{Area}}\). The distance between the two boundaries is way smaller than the inverse temperature \(\beta\), and all the high-frequency modes along the \(r\) direction will be gapped. The zero modes \(\phi\) and the Wilson lines \(W\) start to play the most important role in this limit.
* The last case is the _super-low temperature limit_, where we have \(L\ll\sqrt{\text{Area}}\ll\beta\). The temperature is super low, and all the fluctuation partition functions that are proportional to the temperature disappear. The logarithmic contributions of the field \(W\) shown in the previous section become the most important ones.
Let us discuss those three temperature limits separately. The qualitative behavior of the entropy is illustrated in figure 4. The overall entropy is a summation of different contributions, and figure 4 illustrates the contributions from the different modes. The solid red curves show the dominant contributions. More details of the calculations can be found in Appendix A; the main results and behaviors are discussed below.
**Case I: High temperature limit**
In the high-temperature limit, nothing special happens, and we expect to see the usual result of black-body radiation in a box, because the bulk fluctuation modes, whose entropy is proportional to \(T^{3}\), give the most important contribution. The partition function \(Z_{\hat{A}}\) and entropy \(\mathcal{S}_{\hat{A}}\) of the bulk modes \(\hat{A}_{\mu}\) are shown below
\[\ln Z_{\hat{A}} =-\frac{1}{8\pi^{2}}\beta V\times\Lambda^{4}+\frac{\pi^{2}}{45} \frac{V}{\beta^{3}}\,, \tag{61}\] \[\mathcal{S}_{\hat{A}} =(1-\beta\partial_{\beta})\ln Z_{\hat{A}}=\frac{4\pi^{2}}{45}VT^{ 3}\,, \tag{62}\]
which is exactly the blackbody radiation result in a flat box. In order to avoid confusion with the action \(S\), we use \(\mathcal{S}_{\hat{A}}\) to denote the corresponding entropy. When the temperature is high, the contributions from \(\phi\) and \(W\), which are proportional to the area of the boundary, are small compared with the bulk radiation. So in the high-temperature limit, the dominant contribution always comes from the bulk fluctuation modes \(\hat{A}_{\mu}\), and the entropy scales as the volume between the two boundaries multiplied by the temperature cubed.
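As a quick consistency check of (61)-(62), the identity \(\mathcal{S}=(1-\beta\partial_{\beta})\ln Z\) can be evaluated symbolically; the following sketch (illustrative only) confirms that the cutoff term drops out and reproduces the \(T^{3}\) law.

```python
import sympy as sp

beta, V, Lam = sp.symbols('beta V Lambda', positive=True)

# Eq. (61); the UV piece is linear in beta and therefore drops out of the entropy
lnZ = -beta * V * Lam**4 / (8 * sp.pi**2) + sp.pi**2 * V / (45 * beta**3)

S = sp.simplify(lnZ - beta * sp.diff(lnZ, beta))  # S = (1 - beta d/dbeta) ln Z
print(S)  # -> 4*pi**2*V/(45*beta**3), i.e. (4 pi^2/45) V T^3 as in Eq. (62)
```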
**Case II: Low temperature limit**
For lower temperatures, when we have \(L\ll\beta\ll\sqrt{\text{Area}}\), the situation starts to change. In this temperature range, finite \(\beta\) means that \(1/L\) is very large, and the energy needed to excite high-frequency modes along the \(r\) direction is very high. Thus, the modes along the \(r\) direction
Figure 3: A sketch of the entropy of fields \(\phi\) and \(W\) with varying temperature. At high temperatures, the entropy scales as \(\text{Area}\times T^{2}\). For lower temperatures, the entropy scales as the logarithm of temperature and coupling constant. The second picture is an enlarged version of the low-temperature region. The red dashed line is an auxiliary line showing \(\ln T\). As can be seen from the figure, the entropy goes to zero in the super-low temperature limit because the contributions coming from zero modes and winding modes cancel each other.
are gapped, and we are only left with zero modes along this direction. Even the zero modes of \(\hat{A}_{a}\) are killed by the boundary conditions \(\hat{A}_{a}\big{|}_{\partial\mathcal{M}}=0\), so we get no contribution from the \(\hat{A}_{a}\) components. Fortunately, the zero mode of \(A_{r}\), i.e. \(\phi\), is a survivor. Moreover, \(W\) plays a similar role as \(\phi\). The partition function and entropy for \(\phi\) and \(W\) can be obtained as
\[\ln Z = \frac{1}{2}\ln\frac{2\pi^{2}\text{Area}}{e^{2}\beta L}-\frac{1}{6\pi}\beta\text{Area}\cdot\Lambda^{3}+\frac{\zeta(3)}{\pi}\frac{\text{Area}}{\beta^{2}}+\ln Z_{w}\,, \tag{63}\] \[\mathcal{S} = (1-\beta\partial_{\beta})\ln Z=\frac{3\zeta(3)}{\pi}\text{Area}\times T^{2}+\text{logarithmic corrections}\,. \tag{64}\]
The entropy of the thermal fluctuation modes along the boundary direction is proportional to the area of the plates times the temperature squared. There are extra contributions from the constant modes and winding modes if \(e^{2}\) is not very large, which are proportional to the logarithm of the temperature and the coupling constant. The logarithmic contribution is mainly controlled by the coupling constant \(e^{2}\) and can surpass the area contribution for suitable
Figure 4: A sketch of the entropy of the whole system in different temperature limits. The actual entropy is the sum of different contributions, and the red line demonstrates the dominant contribution. There are two transitions of the dominant contribution shown in the figure. The bulk fluctuation modes always dominate in the high-temperature limit, where the entropy scales as the volume multiplied by the temperature cubed. As the temperature becomes lower, the area contribution starts to dominate. At super-low temperatures, the fluctuation contribution is no longer important, and the only contribution is from the constant modes and winding modes of the field \(W\). A clearer curve of the entropy near the origin is shown in the second panel of figure 3.
\(e^{2}\). We will discuss the logarithmic contribution later, because those terms become the most important contribution as the temperature is lowered even further. So in this temperature limit, the entropy of the system mainly comes from the fluctuation modes of \(\phi\) and \(W\), and is, up to logarithmic corrections, proportional to the area of the plates times the temperature squared.
It is worth noticing that the situation here is similar to the Kaluza-Klein (K-K) reduction along the radius direction. The energy of the K-K tower is proportional to \(1/R\), where \(R\) is the length scale of the extra dimension. When \(R\) is very small, we can only see the zero modes of the K-K tower, and the effective theory is lower-dimensional.
**Case III: Super-low temperature limit**
As the temperature becomes even lower, all the thermal fluctuation contributions proportional to the temperature die out. In the so-called super-low temperature limit, we have \(\frac{\beta^{3}}{L\cdot\text{Area}}\gg 1\). The bulk fluctuation modes are already frozen out in the previous stage, and now it is the turn of the fluctuation modes of \(\phi\) and \(W\). The logarithmic contributions from the constant modes and winding modes can be written as
\[\ln Z=\frac{1}{2}\ln\frac{2\pi^{2}\text{Area}}{e^{2}\beta L}+\ln Z_{w}\,. \tag{65}\]
with
\[\ln Z_{w}=\begin{cases}0\,,&\frac{\text{Area}}{2e^{2}L\beta}\gg 1\,;\\ -\frac{1}{2}\ln\!\left[\frac{\text{Area}}{\beta L}\right]+\ln e\,,&\frac{\text{Area}}{2e^{2}L\beta}\ll 1\,.\end{cases} \tag{66}\]
As can be seen from (66), the coupling constant \(1/e^{2}\) controls the constant modes. For weak coupling, where we have \(1/e^{2}\ll 1\), such that
\[\frac{1}{e^{2}}\frac{\text{Area}}{\beta L}\ll 1\,, \tag{67}\]
the constant-mode contribution of \(\phi\) is canceled by the winding modes. However, in the strong coupling limit
\[\frac{1}{e^{2}}\frac{\text{Area}}{\beta L}\gg 1\,, \tag{68}\]
we can always see the contribution of the constant modes, which scales as the logarithm of the coupling constant and the temperature. The corresponding behavior of the entropy of the constant modes and winding modes is shown in the second panel of figure 3.
As a summary of this section, let us qualitatively illustrate the basic behavior of the entropy corresponding to (60). The bulk fluctuation modes are the most important modes at super-high temperatures, where the entropy is that of blackbody radiation, i.e. \(T^{3}\times V\). The new ingredient of our story is the modes due to the boundary condition. At high temperatures, the entropy of the fluctuation modes of \(\phi\) and \(W\) is proportional to \(T^{2}\times\text{Area}\). As the temperature goes lower and lower, the fluctuation modes play a less and less important role, and the zero modes and winding modes, which contribute as the logarithm of the temperature and \(e^{2}\), start to dominate. However, as the temperature becomes super low, the contribution from \(\ln Z_{w}\) cancels the one from the zero modes, and the overall entropy goes to a constant. The entropy of the gauge theory with the given boundary condition is illustrated in figure 4.
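This qualitative picture can be made concrete with a small numerical sketch; the parameter values below are arbitrary illustrations (and the winding-mode cancellation at the very lowest temperatures is ignored here), but the ordering of the dominant contributions follows the pattern of figure 4.

```python
import numpy as np

L, area, e2 = 1.0, 100.0, 1.0           # illustrative length scales and coupling
zeta3 = 1.2020569031595943              # Riemann zeta(3)

for beta in [0.1, 1.0, 30.0, 300.0]:
    S_bulk = 4.0 * np.pi**2 / 45.0 * (L * area) / beta**3        # Eq. (62): ~ V T^3
    S_area = 3.0 * zeta3 / np.pi * area / beta**2                # Eq. (64): ~ Area T^2
    S_log = 0.5 * np.log(2.0 * np.pi**2 * area / (e2 * beta * L)) + 0.5  # Eq. (A19)
    terms = {'bulk': S_bulk, 'area': S_area, 'log': max(S_log, 0.0)}
    dominant = max(terms, key=terms.get)
    print(f"beta={beta:7.1f}  bulk={S_bulk:10.3g}  area={S_area:10.3g}  "
          f"log={S_log:8.3g}  dominant: {dominant}")
```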
## VI Conclusion
We analyzed the partition function of the U(1) gauge field living between two parallel boundaries with the boundary condition (5) in this paper. The canonical analysis helps us understand what the dynamical variables in the phase space (or Hilbert space) are. We also obtained the measure for the different fields by working out the symplectic form of the theory with the given boundary conditions. As shown in figure 1, the radial coordinate is labeled by \(r\) and the transverse coordinates are \(x^{a}\). Besides the edge modes due to the boundary condition, there are also non-local modes due to the physical interplay between the two boundaries. Those modes are non-local effects, like the Wilson lines stretched between the boundaries, but they behave like codimension-one fields, quite similar to the boundary edge modes. So, the physical modes of the theory at hand contain four different parts: bulk fluctuation modes \(\hat{A}_{\mu}\), zero longitudinal momentum modes of \(A_{r}\), i.e. \(\phi(x^{a})\), boundary-stretched Wilson lines \(W(x^{a})\), and constant modes and winding modes.
Putting all of the above modes into the Euclidean path integral, we can work out the partition function of the theory, which contains contributions from the four parts. The bulk fluctuation modes always play the dominant role at very high temperatures, where the entropy scales as the volume of the bulk multiplied by the temperature cubed
\[\mathcal{S}_{\hat{A}}\propto\text{Volume}\times T^{3}\,. \tag{69}\]
The modes arising because of the boundary conditions become more and more important as the temperature becomes lower. For lower temperatures, the ratio between the distance of the two boundaries \(L\) and the inverse temperature \(\beta\) becomes small, and the zero modes \(\phi\) and the Wilson lines \(W\), which behave like boundary scalar fields, give the dominant contributions. The entropy of those modes scales as
\[\mathcal{S}_{\phi,W}\propto\text{Area}\times T^{2}\,. \tag{70}\]
As the temperature becomes super-low, no fluctuation modes survive, and we are left with the contributions of constant modes and topological modes. The entropy of those modes is approximately the logarithm of the coupling constant and the temperature. The qualitative behavior of the entropy of the different modes is shown in figures 3 and 4.
The flat parallel-plates case is supposed to serve as a good toy model for the more general situation in curved spacetime. We would like to see whether a similar phenomenon also shows up in a black hole background (or even in a wormhole background). We leave the related issues for future research.
At the end of the paper, let us briefly comment on the first set of boundary conditions, shown in (4). It is natural to include all configurations that respect the boundary condition, including the boundary gauge modes. For the U(1) gauge theory, there are bulk on-shell configurations that are in one-to-one correspondence with those boundary configurations, and the entropy of those modes can be counted. Moreover, the non-gauge invariance of the boundary condition in which we are mainly interested in this paper naturally captures that kind of physics. Those boundary pure gauge modes are soft modes because of the vanishing Hamiltonian for the would-be gauge modes. Moreover, it has been suggested that there are possible connections between those modes and soft hair degrees of freedom [32; 33; 34; 35; 36; 37; 38], Barnich's non-proper degrees of freedom [4; 5; 39; 40], and edge modes [7; 8; 9; 10; 44; 45; 46; 47; 48; 49]. More concrete connections among these boundary effects due to boundary conditions deserve further study.
###### Acknowledgements.
We would like to thank Ankit Aggarwal, Jan de Boer, Diego Hofman, and Pujian Mao for their useful discussions. This work is supported by the National Natural Science Foundation
of China (NSFC) under Grant No. 11905156 and No. 11935009.
## Appendix A Different temperature limits
In this appendix, we evaluate and compare the partition functions of the fields \(\hat{A}_{\mu}\), \(\phi\), \(W\), and other modes in different temperature limits. The conclusions are summarized in the main text in section V. Here we provide more details of the calculation. The partition function we intend to evaluate is
\[\ln Z =-\sum_{\omega}\sum_{p_{r},p_{2},p_{3}}\ln\big{[}\beta^{2}(\omega^{2}+p_{r}^{2}+p_{2}^{2}+p_{3}^{2})\big{]}\] \[\quad-\sum_{\omega,p_{2},p_{3}}\ln[\beta^{2}(\omega^{2}+p_{2}^{2}+p_{3}^{2})]+\frac{1}{2}\ln[\frac{\text{Area}}{\beta L}]-\ln e+\ln Z_{w}\,, \tag{A1}\]
with
\[\ln Z_{w}=\begin{cases}0\,,&\frac{\text{Area}}{2e^{2}L\beta}\gg 1\,;\\ -\frac{1}{2}\ln\!\left[\frac{\text{Area}}{\beta L}\right]+\ln e\,,&\frac{\text{Area}}{2e^{2}L\beta}\ll 1\,.\end{cases} \tag{A2}\]
### High temperature limit
First of all, let us take the high-temperature limit \(\beta\ll L\ll\sqrt{\text{Area}}\). The first task is to evaluate the partition function \(Z_{\hat{A}}\) for the bulk fluctuation modes. In this temperature limit, we have
\[\frac{V}{\beta^{3}}=\frac{L\cdot\text{Area}}{\beta^{3}}\gg 1\,, \tag{A3}\]
which means that we can write
\[\omega =\omega_{m}=\frac{2\pi m}{\beta}\,, \tag{A4}\] \[\sum_{p_{r}}\sum_{p_{2}}\sum_{p_{3}} =\frac{V}{(2\pi)^{3}}\int dp_{r}dp_{2}dp_{3}\,. \tag{A5}\]
One can further write the first part in (A1) as
\[\ln Z_{\hat{A}}=-2V\int\frac{d^{3}p}{(2\pi)^{3}}\left[\frac{1}{2}\beta\omega+\ln(1-e^{-\beta\omega})\right]\,, \tag{A6}\]
where we have \(\omega=\sqrt{p_{r}^{2}+p_{2}^{2}+p_{3}^{2}}\). This is the result for two copies of bosonic fields. The first part in \(\ln Z_{\hat{A}}\) is ultraviolet (UV) divergent and can be evaluated in the presence of a
regulator \(\Lambda\). The integrand of the second part is exponentially small as \(p\) grows; thus, the integral is convergent. After introducing the UV cutoff \(\Lambda\), we have
\[\ln Z_{\hat{A}}=-\frac{1}{8\pi^{2}}\beta V\times\Lambda^{4}+\frac{\pi^{2}}{45} \frac{V}{\beta^{3}}\,.\] (A7)
Note that the first part, involving the UV cutoff \(\Lambda\), is a constant in the free energy because that term in the logarithm of the partition function is linear in \(\beta\). Therefore, the entropy of those modes can be written as
\[{\cal S}_{\hat{A}}=(1-\beta\partial_{\beta})\ln Z_{\hat{A}}=\frac{4\pi^{2}}{45 }\frac{V}{\beta^{3}}\,.\] (A8)
For a similar reason, one can show that the partition function for \(\phi(x^{a})\) and \(W\) can be written as
\[\ln Z=-\frac{1}{6\pi}\beta\text{Area}\times\Lambda^{3}+\frac{1}{2}\ln\frac{2 \pi^{2}\text{Area}}{e^{2}\beta L}+\frac{\zeta(3)}{\pi}\frac{\text{Area}}{\beta ^{2}}+\ln Z_{w}\,.\] (A9)
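The \(\zeta(3)\) coefficient in (A9) is the standard result for two two-dimensional boson fields (\(\phi\) and \(W\)); a minimal numerical sketch of the underlying integral is given below.

```python
import numpy as np
from scipy.integrate import quad

beta = 1.0
# Two 2D boson fields: lnZ_T / Area = -2 * Int d^2p/(2 pi)^2 ln(1 - e^{-beta p})
integral, _ = quad(lambda p: p * np.log(1.0 - np.exp(-beta * p)), 1e-12, 60.0)
lnZ_per_area = -2.0 * integral / (2.0 * np.pi)

zeta3 = 1.2020569031595943
print(lnZ_per_area, zeta3 / (np.pi * beta**2))  # both approximately 0.38262
```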
The winding-mode contribution \(Z_{w}\) shown above depends on the value of the coupling constant \(e^{2}\). For the case
\[\frac{1}{e^{2}}\frac{\text{Area}}{\beta L}\gg 1\,,\] (A10)
we have
\[Z_{w}=\sum_{n}e^{-\frac{\text{Area}}{2e^{2}\beta L}(2\pi n)^{2}}\approx e^{-\frac{\text{Area}}{2e^{2}\beta L}(2\pi n)^{2}}\big{|}_{n=0}=1\,,\] (A11)
thus \(\ln Z_{w}=0\). However, when \(e^{2}\) is big enough such that
\[\frac{1}{e^{2}}\frac{\text{Area}}{\beta L}\ll 1\,,\] (A12)
the coefficient inside the exponential function is very small, and we can change the sum into an integral. Thus we have
\[Z_{w}\approx\int dn\ e^{-\frac{\text{Area}}{2e^{2}\beta L}(2\pi n)^{2}}=\left(\frac{2\pi\text{Area}}{e^{2}\beta L}\right)^{-1/2}\,,\] (A13)
the logarithm of which can be written as
\[\ln Z_{w}=-\frac{1}{2}\ln\frac{2\pi\text{Area}}{e^{2}\beta L}\,.\] (A14)
So when \(e^{2}\) is large enough, the contribution of the constant modes \(\ln Z_{0}\) is canceled. Nevertheless, this does not matter here, because the constant-mode contribution of \(\phi\) is always
much smaller than the fluctuation-mode contributions in this temperature limit. The statistical entropy of the fluctuation modes of \(\phi\) and \(W\) can be computed as
\[\mathcal{S}=(1-\beta\partial_{\beta})\ln Z=\frac{3\zeta(3)}{\pi}\frac{\text{Area}}{\beta^{2}}\,. \tag{A15}\]
Compared with the volume contribution, neither the area nor the logarithmic contributions are important. As shown in equation (62), the most important contribution always comes from the bulk fluctuation modes \(\hat{A}_{\mu}\), which scales as the volume times the temperature cubed.
### Low temperature limit
The high-temperature limit is not very interesting, because we can only see the bulk fluctuation modes. As the temperature goes lower, when we have \(L\ll\beta\ll\sqrt{\text{Area}}\), interesting phenomena due to the boundary condition start to show up.
First, let us look at the bulk fluctuation modes \(\ln Z_{\hat{A}}\). In this limit, the distance between the two plates is very small compared to the inverse temperature \(\beta\). Assuming finite temperature, we have \(\omega_{m}=2\pi m/\beta\). Small \(L\) implies that the high-frequency modes along the \(r\) direction are gapped, and we would only see zero modes along the \(r\) direction. \(\hat{A}_{\mu}\) vanishes on the boundary, so the zero modes of \(\hat{A}_{\mu}\) along the \(r\) direction are killed by the boundary conditions. The only surviving zero mode is the zero mode of \(A_{r}\), namely \(\phi\), which will be discussed separately. In the low-temperature limit, and also in the super-low temperature limit, we never see any contribution from the bulk fluctuation modes. So we can conclude that the entropy from the bulk fluctuation modes is
\[\mathcal{S}_{\hat{A}}=0\,. \tag{A16}\]
However, the zero mode \(\phi\) survives, and the partition functions for \(\phi\) and \(W\) are not changed. We still have
\[\ln Z=\frac{1}{2}\ln\frac{2\pi^{2}\text{Area}}{e^{2}\beta L}-\frac{1}{6\pi}\beta\text{Area}\cdot\Lambda^{3}+\frac{\zeta(3)}{\pi}\frac{\text{Area}}{\beta^{2}}+\ln Z_{w}\,. \tag{A17}\]
For the case \(\frac{1}{e^{2}}\frac{\text{Area}}{\beta L}\gg 1\), we have \(\ln Z_{w}=0\), and the overall partition function can be written as
\[\ln Z=\frac{1}{2}\ln\frac{2\pi^{2}\text{Area}}{e^{2}\beta L}-\frac{1}{6\pi}\beta\text{Area}\cdot\Lambda^{3}+\frac{\zeta(3)}{\pi}\frac{\text{Area}}{\beta^{2}}\,. \tag{A18}\]
The corresponding entropy can be calculated as
\[\mathcal{S}=(1-\beta\partial_{\beta})\ln Z\approx\frac{3\zeta(3)}{\pi}\frac{\text{Area}}{\beta^{2}}+\frac{1}{2}\ln\frac{2\pi^{2}\text{Area}}{e^{2}\beta L}+\frac{1}{2}\,. \tag{A19}\]
For the case where \(e^{2}\) is very large, \(\frac{1}{e^{2}}\frac{\text{Area}}{\beta L}\ll 1\), the contribution of the constant modes \(\ln Z_{0}\) is canceled by the winding-mode contribution, and we are only left with the fluctuation-mode contribution. The overall entropy is
\[\mathcal{S}=(1-\beta\partial_{\beta})\ln Z=\frac{3\zeta(3)}{\pi}\frac{\text{Area}}{\beta^{2}}\,. \tag{A20}\]
The entropy of the system now scales as the area times the temperature squared.
### Super-low temperature limit
As the temperature becomes even lower, we have \(L\ll\sqrt{\text{Area}}\ll\beta\), which is the super-low temperature limit. In this temperature limit, not only can the contribution from \(\hat{A}_{\mu}\) be ignored, but the fluctuation modes of \(\phi\) and \(W\) are also unimportant. As for the constant modes and winding modes, we have
\[\ln Z=\frac{1}{2}\ln\frac{2\pi^{2}\text{Area}}{e^{2}\beta L}+\ln Z_{w}\,. \tag{A21}\]
In the limit \(\frac{1}{e^{2}}\frac{\text{Area}}{\beta L}\gg 1\), we have \(\ln Z_{w}=0\). The overall entropy can be written as
\[\mathcal{S}=\frac{1}{2}\ln\frac{2\pi^{2}\text{Area}}{e^{2}\beta L}+\frac{1}{2}\,. \tag{A22}\]
Whereas in the limit \(\frac{1}{e^{2}}\frac{\text{Area}}{\beta L}\ll 1\), we have
\[\ln Z_{w}=-\frac{1}{2}\ln\frac{2\pi\text{Area}}{e^{2}\beta L} \tag{A23}\]
which cancels the constant-mode contribution, so the overall entropy tends to a constant.
|
2308.16103 | CuIn(Se,Te)2 absorbers with bandgaps < 1 eV for bottom cells in tandem
applications | Thin-film solar cells reach high efficiencies and have a low carbon footprint
in production. Tandem solar cells have the potential to significantly increase
the efficiency of this technology, where the bottom-cell is generally composed
of a Cu(In,Ga)Se2 absorber layer with bandgaps around 1 eV or higher. Here, we
investigate CuIn(Se1-xTex)2 absorber layers and solar cells with bandgaps below
1 eV, which will bring the benefit of an additional degree of freedom for
designing current-matched 2-terminal tandem devices. We report that
CuIn(Se1-xTex)2 thin films can be grown single phase by co-evaporation and that
the bandgap can be reduced to the optimum range for a bottom cell (0.92 - 0.95
eV). From photoluminescence spectroscopy it is found that no additional
non-radiative losses are introduced to the absorber. However, Voc losses occur
in the final solar cell due to non-optimised interfaces. Nevertheless, a record
device with 9 % power conversion efficiency is demonstrated with a bandgap of
0.96 eV and x=0.15. Interface recombination is identified as a major
recombination channel for larger Te contents. Thus, further efficiency
improvements are anticipated for improved absorber/buffer interfaces. | Thomas Paul Weiss, Mohit Sood, Aline Vanderhaegen, Susanne Siebentritt | 2023-08-30T15:51:35Z | http://arxiv.org/abs/2308.16103v1 | ###### Abstract
Thin-film solar cells reach high efficiencies and have a low carbon footprint in production. Tandem solar cells have the potential to significantly increase the efficiency of this technology, where the bottom-cell is generally composed of a Cu(In,Ga)Se\({}_{2}\) absorber layer with bandgaps around 1 eV or higher. Here, we investigate CuIn(Se\({}_{1\cdot x}\)Te\({}_{x}\))\({}_{2}\) absorber layers and solar cells with bandgaps below 1 eV, which will bring the benefit of an additional degree of freedom for designing current-matched 2-terminal tandem devices. We report that CuIn(Se\({}_{1\cdot x}\)Te\({}_{x}\))\({}_{2}\) thin films can be grown single phase by co-evaporation and that the bandgap can be reduced to the optimum range (0.92 - 0.95 eV) for a bottom cell. From photoluminescence spectroscopy it is found that no additional non-radiative losses are introduced to the absorber. However, \(V_{OC}\) losses occur in the final solar cell due to non-optimised interfaces. Nevertheless, a record device with 9 % power conversion efficiency is demonstrated with a bandgap of 0.96 eV and \(x=0.15\). Interface recombination is identified as a major recombination channel for larger Te contents. Thus, further efficiency improvements are anticipated for improved absorber/buffer interfaces.
## 1 Introduction
Compound thin film photovoltaics (PV) enable high power conversion efficiency in combination with a low carbon footprint [1] and are therefore an important technology to combat the climate crisis. Ideal single-junction solar cells are limited to power conversion efficiencies around 33 % set by the Shockley-Queisser limit [2], whereas multi-junction solar cells allow to reach higher efficiencies. The reasons are lower thermalization losses and the possibility to use a larger range of the solar photon spectrum. The highest efficiencies are indeed achieved by multi-junction solar cells [3]. With the success of thin-film perovskite solar cells, these absorbers are used as top cells and enabled efficient tandem solar cells with Cu(In,Ga)Se\({}_{2}\)[4, 5] or Si as a bottom cell [3, 6]. |
2301.05312 | Coulomb blockade of chiral Majorana and complex fermions far from
equilibrium | We study charge transport in a single-electron transistor implemented as an
interferometer such that the Coulomb blockaded middle island contains a
circular chiral Majorana or Dirac edge mode. We concentrate on the regime of
small conductance and provide an asymptotic solution in the limit of high
transport voltage exceeding the charging energy. The solution is achieved using
an instanton-like technique. The distinctions between Majorana and Dirac cases
appears when the tunnel junctions are unequal. The main difference is in the
offset current at high voltages which can be higher up to $50\%$ in Majorana
case. It is caused by an additional particle-hole symmetry of the distribution
function in the Majorana case. There is also an eye-catching distinction
between the oscillations patterns of the current as a function of the gate
charge. We conjecture this distinction survives at lower transport voltages as
well. | Dmitriy S. Shapiro, Alexander D. Mirlin, Alexander Shnirman | 2023-01-12T21:49:42Z | http://arxiv.org/abs/2301.05312v1 | # Coulomb blockade of chiral Majorana and complex fermions far from equilibrium
###### Abstract
We study charge transport in a single-electron transistor implemented as an interferometer such that the Coulomb blockaded middle island contains a circular chiral Majorana or Dirac edge mode. We concentrate on the regime of small conductance and provide an asymptotic solution in the limit of high transport voltage exceeding the charging energy. The solution is achieved using an instanton-like technique. The distinctions between the Majorana and Dirac cases appear when the tunnel junctions are unequal. The main difference is in the offset current at high voltages, which can be up to 50% larger in the Majorana case. It is caused by an additional particle-hole symmetry of the distribution function in the Majorana case. There is also an eye-catching distinction between the oscillation patterns of the current as a function of the gate charge. We conjecture this distinction survives at lower transport voltages as well.
## I Introduction
The effect of Coulomb blockade in a single-electron transistor (SET) [1; 2; 3; 4; 5; 6; 7], a device where Fermi-liquid leads are mediated by a quantum dot, plays an essential role in condensed matter physics, mesoscopics, and open quantum systems. The Coulomb spectroscopy and transport through a quantum dot are sensitive to the precise nature of the non-equilibrium steady state, the mechanisms of relaxation, electronic interactions, and topological order [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. The "orthodox" theory of Coulomb blockade is based on rate equations formulated in the basis of different charged states in the island [1; 2; 3; 5]. Such states are well defined for almost isolated quantum dots which have weak tunnel coupling to the leads, i.e., with a small dimensionless conductance, \(g\ll 1\). In this theory, the distribution function in the island is not affected by the coupling to contacts, i.e., the internal relaxation in the island is assumed to prevail over the characteristic tunneling timescale. In the multi-channel limit with \(g\gg 1\), when the Coulomb blockade is weak and the charge is ill defined, the description via the dissipative Ambegaokar-Eckern-Schön (AES) action [21; 22] for the phase becomes more convenient [8]. In equilibrium, the saddle points of the Matsubara AES action are known as Korshunov instantons [23]. These instantons allow one to take into account charge discreteness and obtain the residual, exponentially small gate charge oscillations of the conductance. If the relaxation is weak, then the distribution function in the island is a non-Fermi one at finite voltages. It causes a non-equilibrium steady state that cannot be captured by the "orthodox" theory or the imaginary-time technique, and consequently the real-time Keldysh formalism [24; 25] is required. Important recent achievements include the theoretical analysis of strongly non-equilibrium transport using the AES action [10], and the generalization of Korshunov instantons to the real-time Keldysh formalism [15] at \(g\gg 1\). In particular, the results of Ref. [10] lead directly to the conclusion that the Coulomb blockade is lifted at transport voltages lower than in the "orthodox" theory, due to the non-equilibrium distribution function in the island.
In this work, we study the strongly non-equilibrium regime of high voltages that significantly exceed the charging energy of the island, and we assume strong Coulomb blockade, i.e., the dimensionless conductance is small, \(g\ll 1\). At low voltages one thus expects a strong suppression of the charge transport. At higher voltages the almost Ohmic behavior is accompanied by an offset (deficit) current and residual gate charge oscillations. Here, we are able to describe this high-voltage regime in an asymptotically exact way. Instead of using kinetic equations, which is challenging due to the large number of relevant charge states, we employ a path-integral technique and succeed in solving the problem by finding a dominant path, an alternative kind of instanton, for the phase variable.
We apply our solution for calculations of the non-equilibrium tunneling density of states (TDoS) and current-voltage relations using the formalism developed by Meir and Wingreen [26]. The devices under consideration are chiral interferometers implemented in hybrid structures with superconductors, topological insulators or quantum anomalous Hall insulators (QAHI) [27; 28; 29] (see Fig. 1). They have two Fermi-liquid leads biased by the voltages \(\pm\)V/2, and tunnel coupled
Figure 1: Sketch of single-electron transistor realized as chiral Majorana or Dirac interferometer. Normal metal leads (Ohmic contacts) cover spinless channels with chiral Dirac fermions, \(\psi_{\rm L,R}\), which are the edges of a quantum anomalous Hall insulator (QAHI) film. Gate voltage controls the offset charge \(Q_{\rm g}\) in the island of capacitance \(C_{0}\). The island is in the topological superconducting or normal state. The chiral mode \(\chi\) hosts Majorana or Dirac fermions. Tunnel amplitudes are \(\gamma_{\rm L,R}\), and bias voltages are symmetric, \(V_{\rm L,R}\)=\(\pm\)V/2, the positive direction of the current is marked by black arrows.
to the central island. The island hosts a single conducting channel, which is either a real Majorana fermion edge mode (the case when the island is in a proximity-induced topological superconducting phase) or a complex Dirac one (the case of a normal island in a quantum-Hall topological insulator state). There is an electrostatic gate that induces an offset charge \(Q_{\rm g}\) in the island. The charging energy of the island is \(E_{\rm c}=e^{2}/(2C_{0})\), where \(e\) is the electron charge and \(C_{0}\) is the total capacitance of the island. The chiral fermions propagate with a velocity \(v\) along the edge channels. In this case, the Thouless energy, \(E_{\rm Th}=\frac{2\pi\hbar v}{L}\) (\(\hbar\) is the Planck constant), is nothing but the level spacing in the ring of perimeter \(L\). (Hereafter we set \(e\)=\(\hbar\)=1.) We assume a metallic spectrum of the edge modes, which means that \(E_{\rm Th}\) is sufficiently small. The voltages are limited from above by the energy scale \(\Delta_{0}\)--the absolute value of the superconducting order parameter induced in the topological part of the island--above which other conducting channels or 2D scattering states become relevant. Ultimately, we work under the following assumption,
\[\Delta_{0}>eV\gg\{E_{\rm Th},E_{\rm c}\}. \tag{1}\]
Moreover, we assume no relaxation due to phonons, no electron-electron scattering, and zero temperature. The only relaxation mechanism is the tunneling to the leads. In this regime the single-particle distribution function is expected to develop a multi-step structure, which will play a substantial role below.
## II Model
### Keldysh action for the Majorana device
The microscopic description of the charge transport is provided by the Keldysh generating functional
\[{\cal Z}[\eta]\!=\!\!\int\!D[\Psi]D[\chi]D[\varphi]e^{iS[\Psi,\chi,\varphi, \eta]}. \tag{2}\]
The first path integral is taken over the complex fermions \(\Psi=\{\Psi_{\rm L},\Psi_{\rm R}\}\), collected in Nambu spinors \(\Psi_{\rm L}=\{\psi_{\rm L,k},\bar{\psi}_{\rm L,-k}\}\) and \(\Psi_{\rm R}=\{\psi_{\rm R,k},\bar{\psi}_{\rm R,-k}\}\), where \(\psi_{\rm L,k}\) (\(\psi_{\rm R,k}\)) are Grassmann fields in the left (right) lead; these are chiral states with momenta \(k\in[-\infty,\infty]\). The second variable, \(\chi(x)\), is the Majorana edge mode in the island, a real Grassmann field defined on a ring with coordinate \(x\in[0,L]\). The third one, \(\varphi\), is the phase of the superconducting order parameter in the island. \({\cal Z}\) depends on a pair of counting fields \(\eta_{\rm L}\) and \(\eta_{\rm R}\) (source variables) collected in \(\eta=\{\eta_{\rm L},\eta_{\rm R}\}\). They generate the charges \(Q_{l}\) that flow from the left (\(l={\rm L}\)) or right (\(l={\rm R}\)) lead into the island during the measurement interval \(t\in[0;t_{0}]\):
\[Q_{l}=i\left.\frac{\partial{\cal Z}[\eta]}{\partial\eta_{l}}\right|_{\eta\to 0}. \tag{3}\]
The total action is
\[S=S_{\rm c}+S_{\rm L}+S_{\rm R}+S_{\rm M}+S_{\rm L}^{\rm(tun)}+S_{\rm R}^{\rm(tun )}. \tag{4}\]
In the Keldysh technique, the physical time \(t\in[-\infty,\infty]\) gets doubled, \(t_{\pm}\), with the index \(\pm\) denoting the forward and backward parts of the Keldysh contour \({\cal C}\). Then, the Keldysh rotation to classical and quantum components of the boson field, \(\varphi_{\rm cl}\) and \(\varphi_{\rm q}\), is performed: \(\varphi(t_{\pm})=\varphi_{\rm cl}(t)\pm\varphi_{\rm q}(t)/2\).
The coherent dynamics of \(\varphi\) is governed by the action \(S_{\rm c}=\int{\cal L}_{\rm c}[\varphi]dt\), where the Lagrangian is
\[{\cal L}_{\rm c}=\frac{\dot{\varphi}_{\rm q}\dot{\varphi}_{\rm cl}}{8E_{\rm c}}-\frac{1}{2}\dot{\varphi}_{\rm q}Q_{\rm g}. \tag{5}\]
Complex fermion dynamics is described by the Keldysh actions \(S_{l}=\int dt\big(\int\frac{dk}{2\pi}\,\bar{\psi}_{l,k}\,i\partial_{t}\,\psi_{l,k}-H_{l}\big)\), where \(H_{l}=\hbar v\int\frac{dk}{2\pi}\,k\,\bar{\psi}_{l,k}\psi_{l,k}\) is the Hamiltonian of chiral fermions and the lead index is \(l={\rm L,R}\). On the Keldysh contour, the lead action becomes a quadratic form in the fields \(\psi_{l,\pm}\), whose kernel is the inverse of the matrix Green function \(\mathcal{G}_{l}\) of the lead introduced below. The tunneling between the leads and the Majorana mode is described by the actions \(S_{l}^{\rm(tun)}\), in which the Majorana field couples to the local lead fermions through the gauge-factor matrix \(U[\varphi,\eta_{l}]\).
Here, the Majorana field \(\chi(x)\) couples to the local fermions in the leads, \(\psi_{l}^{(0)}=\int\frac{dk}{2\pi}\psi_{l,k}\), at two points, \(x=x_{\rm L,R}\). The tunnel amplitudes \(\gamma_{l}\) are chosen real. The matrix
\[U[\varphi,\eta_{l}]=e^{\left(-i\mu t+\frac{i}{2}\varphi(t)\right)\sigma^{z}}e^{-i\frac{\eta_{l}z(t)}{2}\sigma^{z}} \tag{9}\]
captures the gauge transformation mentioned above and the counting field. The yet unknown chemical potential of the island, \(\mu\), will be found after solving an appropriate kinetic equation. This transformation also means that all energies in the island are counted from \(\mu\). The charge counting variable \(\eta_{l}\) is the amplitude of the auxiliary quantum field \(\frac{\eta_{l}z(t)}{2}\sigma^{z}\). It generates the transmitted charge \(Q_{l}[\varphi]\), which is a classical observable. Further, \(z(t)=1\) for \(t\in[0,t_{0}]\) and \(z(t)=0\) otherwise. It switches the charge counting on and off at \(t=0\) and \(t=t_{0}\), respectively.
### Non-equilibrium effective theory for the phase
After the integration over the complex fermions, \(\Psi\), and then over the Majorana ones, \(\chi\), the generating functional (2) becomes
\[\mathcal{Z}[\eta]=\int D[\varphi]\,e^{iS_{\rm c}[\varphi]+\frac{1}{2}\mathrm{tr}\ln\left((i\partial_{t}+iv\partial_{x})\sigma^{z}-\Sigma[\varphi,\eta]\right)}\;. \tag{10}\]
Here, \(\Sigma\) is the self-energy for Majorana fermions. It reads
\[\Sigma[\varphi,\eta]_{x;t,t^{\prime}}=\sum_{l={\rm L},{\rm R}}\gamma_{l}^{2}\left(\mathcal{G}_{l}^{\prime}[\varphi]_{t,t^{\prime}}-[\mathcal{G}_{l}^{\prime}[\varphi]_{t^{\prime},t}]^{T}\right)\delta(x-x_{l}), \tag{11}\]
where \(\mathcal{G}_{l}^{\prime}[\varphi]_{t,t^{\prime}}=U_{l}^{\dagger}(t)\sigma^{z}\mathcal{G}_{l}(t-t^{\prime})\sigma^{z}U_{l}(t^{\prime})\) is the boson-dressed Green function of the lead. The self-energy \(\Sigma\) is singular at the points where the tunnel contacts are located. The presence of two contributions to the self-energy (\(\mathcal{G}^{\prime}\) and \([\mathcal{G}^{\prime}]^{T}\)) reflects the Majorana nature of the island excitations. In the time domain the local Green functions of the leads read \(\mathcal{G}_{l}(t)=\int\frac{d\omega}{2\pi}\mathcal{G}_{l,\omega}e^{-i\omega t}\), where
\[\mathcal{G}_{l,\omega}=\int\frac{dk}{2\pi}G_{l,\omega,k}=\frac{i}{4\pi\nu}((\sigma^{x}-\sigma^{0})f_{l,\omega}-i\sigma^{y}). \tag{12}\]
To analyze the phase dynamics, we develop here an expansion scheme for the logarithm in (10). A naive expansion in the small tunneling amplitude \(\gamma_{l}\) would force us to introduce an infinitesimal broadening in the island with an arbitrary distribution function. However, in such an approach a physical distribution function, dictated by the leads, would emerge only after the infinite summation of higher order contributions. Instead, we extract from \(\Sigma[\varphi,\eta]\) a part with a constant in time classical phase \(\varphi_{0}\), \(\Sigma[\varphi,\eta]{=}\Sigma[\varphi_{0},0]{+}(\Sigma[\varphi,\eta]{-} \Sigma[\varphi_{0},0])\). We transfer the extracted part, \(\Sigma[\varphi_{0},0]\), into the zeroth order propagator [16],
\[\mathbf{G}_{0}^{-1}=(i\partial_{t}+iv\partial_{x})\sigma^{z}-\Sigma[\varphi_{0},0], \tag{13}\]
which we can invert exactly, and perform an expansion in \(\delta\Sigma[\varphi,\eta]{=}\Sigma[\varphi,\eta]{-}\Sigma[\varphi_{0},0]\). Due to the gauge invariance of the problem, \(\varphi_{0}\) does not appear in the final results.
We expand the logarithm in (10) up to the first order in \(\delta\Sigma\). After that, \(\delta\Sigma\) itself is expanded up to the linear order in \(\eta\) since we are interested in the current only. Omitting a constant term, we obtain the following result for the logarithm in (10):
\[\frac{1}{2}\mathrm{tr}\ln\left((i\partial_{t}+iv\partial_{x})\sigma^{z}-\Sigma[\varphi,\eta]\right)\approx\\ iS_{\rm AES}[\varphi]-i\eta_{\rm L}Q_{\rm L}[\varphi]-i\eta_{\rm R}Q_{\rm R}[\varphi]. \tag{14}\]
The first term in (14) is the dissipative AES action [21; 22],
\[S_{\rm AES}[\varphi]=i\frac{1}{2}\mathrm{tr}\Big{[}\mathbf{G}_{0}\delta\Sigma[ \varphi,\eta=0]\Big{]}, \tag{15}\]
and the second and third terms contain the charges \(Q_{l}[\varphi]\) (cf. Eq. (3)) calculated for a certain path, \(\varphi_{\rm c}(t)\) and \(\varphi_{\rm q}(t)\). These are given by
\[Q_{l}[\varphi]=\frac{i}{2}\lim_{\eta\to 0}\partial_{\eta}\mathrm{tr}\Big{[} \mathbf{G}_{0}\delta\Sigma[\varphi,\eta]\Big{]}. \tag{16}\]
We will denote the frequencies related to the island by \(\epsilon\), keeping \(\omega\) for the leads. This helps us to remember that the energies on the island are counted from the chemical potential. Since \(\Sigma\) is singular in the coordinate representation (cf. Eq. (11)), one needs to know the Green function \(\mathbf{G}_{0,\epsilon}\) at coincident coordinates, \(x\rightarrow x_{l}\) and \(x^{\prime}\rightarrow x_{l}\). Comparing the tunneling self-energy and the ballistic propagator, we conclude that the weak tunneling limit corresponds to the condition \(\gamma_{l}\ll v\). This limit is fully equivalent to the condition of small broadening of levels compared to the distance between them, \(\frac{\sqrt{\gamma_{\rm L}^{2}+\gamma_{\rm R}^{2}}}{L}\ll E_{\rm Th}\). In this regime, the Green function reads (see Appendix B):
\[\mathbf{G}_{0,\epsilon}=-i\frac{E_{\rm Th}}{2\nu}\sum_{n}\delta(\epsilon- \epsilon_{n})((\sigma^{0}-\sigma^{x})f_{{\rm M},\epsilon_{n}}+i\sigma^{y}), \tag{17}\]
which involves the four-step function
\[f_{{\rm M},\epsilon}=\frac{\gamma_{\rm L}^{2}(f_{{\rm L},\mu+\epsilon}{-}f_{{ \rm L},\mu-\epsilon}){+}\gamma_{\rm R}^{2}(f_{{\rm R},\mu+\epsilon}-f_{{\rm R}, \mu-\epsilon})}{2(\gamma_{\rm L}^{2}+\gamma_{\rm R}^{2})} \tag{18}\]
describing the non-Fermi distribution of Majorana fermions. It has the symmetry, \(f_{{\rm M},\epsilon}{=}{-}f_{{\rm M},-\epsilon}\), which is preserved for arbitrary \(\gamma_{\rm L}\) and \(\gamma_{\rm R}\).
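For illustration, the four-step structure of (18) at zero temperature can be tabulated directly, taking \(f_{l,\omega}={\rm sign}(\omega-V_{l})\) with \(V_{\rm L,R}=\pm V/2\) (an assumed T=0 convention); the sketch below, with an arbitrary tunnel asymmetry, also confirms the particle-hole symmetry numerically.

```python
import numpy as np

def f_majorana(eps, V=1.0, gL2=1.0, gR2=0.25):
    """Four-step Majorana distribution, Eq. (18), at T = 0."""
    mu = 0.5 * V * (gL2 - gR2) / (gL2 + gR2)        # Eq. (19)
    f = lambda w, Vl: np.sign(w - Vl)               # T=0 lead distribution f_{l,omega}
    num = (gL2 * (f(mu + eps,  V / 2) - f(mu - eps,  V / 2))
         + gR2 * (f(mu + eps, -V / 2) - f(mu - eps, -V / 2)))
    return num / (2.0 * (gL2 + gR2))

eps = np.linspace(-1.1, 1.1, 12)                    # grid avoiding the step positions
print(np.round(f_majorana(eps), 3))
print("particle-hole symmetric:", np.allclose(f_majorana(eps), -f_majorana(-eps)))
```

The four steps sit at \(\epsilon=\pm(V/2\mp\mu)\), and the symmetry \(f_{{\rm M},\epsilon}=-f_{{\rm M},-\epsilon}\) holds for any asymmetry, as stated above.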
The function \(\mathbf{G}_{0,\epsilon}\) is singular at \(\epsilon=\epsilon_{n}\), where \(\epsilon_{n}=(n+n_{\nu}/2)E_{\rm Th}\) are the energy levels of the island (\(n\in\mathbb{Z}\) and \(n_{\nu}\) is the number of vortices). To calculate the chemical potential we neglect the fluctuations of the phase and assume the constant trajectory \(\varphi_{\rm cl}=\varphi_{0}\) and \(\varphi_{\rm q}=0\). Then we employ the charge conservation constraint, \(Q_{\rm L}[\varphi_{0}]+Q_{\rm R}[\varphi_{0}]=0\), for \(t_{0}\rightarrow\infty\) (cf. Eq. (16)) and obtain
\[\mu=\frac{\gamma_{\rm L}^{2}-\gamma_{\rm R}^{2}}{2(\gamma_{\rm L}^{2}+\gamma_{ \rm R}^{2})}V\;. \tag{19}\]
Unlike in the Dirac case, in which the distribution function is a two-step one governed exclusively by the voltages in the leads, in the current Majorana case there are more steps and the chemical potential is important.
## III Formalism and analytical results
### Dissipative action
We evaluate the AES action of Eq. (15), and obtain
\[S_{\rm AES}=\iint dtdt^{\prime}\,\big[u_{\rm cl}^{*}\ \ u_{\rm q}^{*}\big]_{t}\begin{pmatrix}0&\alpha^{A}\\ \alpha^{R}&\alpha^{K}\end{pmatrix}_{t-t^{\prime}}\begin{pmatrix}u_{\rm cl}\\ u_{\rm q}\end{pmatrix}_{t^{\prime}}, \tag{20}\]
where \(u(t)=e^{-\frac{i}{2}\varphi(t)}\) and \(\alpha^{R,A,K}\) denote the retarded, advanced, and Keldysh components of the tunneling kernel.
action \(S_{\rm D}=\sum\limits_{\sigma,\sigma^{\prime}}\int\limits_{0}^{L}dx\int dt\,\tilde{\chi}_{\sigma}(i\partial_{t}+iv\partial_{x})\sigma^{z}_{\sigma,\sigma^{\prime}}\chi_{\sigma^{\prime}}\) is expressed now in terms of a complex field \(\chi\neq\tilde{\chi}\). There are the following distinctions from the Majorana case. First, the non-equilibrium distribution function has the well-known double-step structure, \(f_{\rm D,\epsilon}\), which is not particle-hole symmetric, i.e. \(f_{\rm D,\epsilon}\neq-f_{\rm D,-\epsilon}\), except in the limit of a fully symmetric setup, \(\gamma_{\rm L}=\gamma_{\rm R}\). Second, in the formula for the current,
\[I_{\rm D}=g\int\nu_{\rm D,\omega}(n_{\rm L,\omega}-n_{\rm R,\omega})d\omega, \tag{33}\]
we have \(\nu_{\rm D,\omega}=1-\frac{1}{4\pi}\int e^{i(\omega-\epsilon-\mu)\tau}{\cal D}(\tau)f_{\rm D,\epsilon}d\epsilon d\tau\). Note that the chemical potential does not influence the result in the Dirac case. Indeed, introducing \(\tilde{f}_{\rm D,\epsilon}=f_{\rm D,\epsilon-\mu}\), i.e., counting the energy from zero, we see that the distribution function \(\tilde{f}_{\rm D,\epsilon}\) does not depend on \(\mu\). The third distinction is that the prefactor \(\xi_{\rm M}=p_{\rm M}q_{\rm H}\) in (29) is replaced by
\[\xi_{\rm D}{=}p_{h}. \tag{34}\]
It follows from the Keldysh kernel in the Dirac case, which reads
\[\alpha^{K}_{{\rm D},\omega}=\Gamma\frac{(\gamma^{4}_{\rm L}+\gamma^{4}_{\rm R})|\omega|+\gamma^{2}_{\rm L}\gamma^{2}_{\rm R}(|\omega-V|+|\omega+V|)}{(\gamma^{2}_{\rm L}+\gamma^{2}_{\rm R})^{2}}. \tag{35}\]
Similar to the Majorana case, the frequency dependence of this kernel can be neglected, and our approach is accurate if \(V\gg E_{\rm c}\max\{h,h^{-1}\}\), where \(h=\gamma_{\rm R}^{2}/\gamma_{\rm L}^{2}\) is the tunnel asymmetry.
### Path integration and the instanton. Boson propagator
In this subsection we consider the Majorana and Dirac cases in parallel; thus, we omit the indices "M" and "D" in \(\xi\). The calculation of the boson exponents in (32) is based on the following representation of the path integral,
\[\langle e^{-\frac{i}{2}\varphi(\tau_{\pm})+\frac{i}{2}\varphi(0_{\mp})}\rangle=\int D[\varphi]\,e^{i{\cal S}_{\rm q}[\varphi_{\rm q}]+i\int\dot{\varphi}_{\rm cl}{\cal A}[\varphi_{\rm q}]dt}. \tag{36}\]
Here, we have introduced
\[{\cal S}_{\rm q}=\frac{\varphi_{\rm q}(0)+\varphi_{\rm q}(\tau)}{4}+\frac{1}{2}\int\big(i\xi\Gamma|V|\sin^{2}\frac{\varphi_{\rm q}}{4}+\dot{\varphi}_{\rm q}Q_{\rm g}\big)dt, \tag{37}\]
which does not contain classical components of the phase. We have also introduced
\[{\cal A}=\frac{\dot{\varphi}_{\rm q}(t)}{8E_{\rm c}}-\frac{\Gamma}{4}\sin\frac{\varphi_{\rm q}(t)}{2}+\frac{\theta(t-\tau)-\theta(t)}{2}, \tag{38}\]
which couples linearly to \(\varphi_{\rm cl}\). The linearity of the action in (36) with respect to \(\varphi_{\rm cl}\) plays the central role in our solution. We remind that the exclusively linear dependence on \(\varphi_{\rm cl}\) is based on two approximations: the high voltage and the Ohmic spectrum of the island. The linearity in \(\dot{\varphi}_{\rm cl}\) allows one to integrate this field out and obtain a functional delta-distribution, \(\int D[\varphi_{\rm cl}]e^{i\int\dot{\varphi}_{\rm cl}{\cal A}[\varphi_{\rm q}]dt}=\delta({\cal A}[\varphi_{\rm q}])\). Therefore, the remaining path integral over \(\varphi_{\rm q}\) is restricted to a manifold of trajectories satisfying \({\cal A}[\varphi_{\rm q}]=0\) with the boundary condition \(\varphi_{\rm q}(-\infty)=0\). Analysing \({\cal S}_{\rm q}\) we now restrict the allowed trajectories. We note that \({\cal S}_{\rm q}\) has an imaginary part \(\sim i\int\sin^{2}\frac{\varphi_{\rm q}}{4}dt\). It provides a selection rule for the trajectories: \(\exp(i{\cal S}_{\rm q})\neq 0\) only if \(\varphi_{\rm q}(+\infty)=4\pi n,\,n\in{\mathbb{Z}}\). We find that only a single solution of the first-order differential equation \({\cal A}[\varphi_{\rm q}]=0\) satisfies this selection rule, \(\varphi_{\rm q}(t)=\Phi_{\tau}(t)\). Therefore, the result of the path integration for the boson propagators reads
\[P^{\gtrless}(\tau)=e^{i{\cal S}_{\rm q}[\Phi_{\tau}(t)]}. \tag{39}\]
Note that the combination of theta functions on the r.h.s. of the equation for the quantum trajectory, \(\frac{\dot{\varphi}_{\rm q}(t)}{8E_{\rm c}}-\frac{\Gamma}{4}\sin\frac{\varphi_{\rm q}(t)}{2}=\frac{\theta(t)-\theta(t-\tau)}{2}\), plays the role of an external force. It switches on at \(t=0\) and off at \(t=\tau\) (assuming \(\tau>0\)). Consider first the case of zero force, when the equation is uniform, \(\dot{\varphi}_{\rm q}=2E_{\rm c}\Gamma\sin\frac{\varphi_{\rm q}}{2}\), i.e., when \(t\notin[0;\tau]\). It has a set of trivial solutions,
\[\varphi_{\rm q}{=}2\pi n\, \tag{40}\]
and the instanton-like ones,
\[\varphi_{\rm q}=\pm 4\arctan(Ae^{\pi\Gamma t/\tau_{\rm c}})+4\pi n\,, \tag{41}\]
with \(n\in{\mathbb{Z}}\). Here, the constant \(A\) determines the instanton center, and the time scale \(\tau_{\rm c}\) is inversely proportional to the charging energy, \(\tau_{\rm c}=\frac{\pi\hbar}{E_{\rm c}}\). The instanton has slow dynamics on the long \(RC\)-like time scale, \(\sim\frac{\tau_{\rm c}}{\pi\Gamma}\), due to \(\Gamma\ll 1\).
Consider the second case when the force is switched on (\(t{\in}[0;\tau]\)) and the equation becomes \(\frac{\dot{\varphi}_{\rm q}(t)}{8E_{\rm c}}\!-\!\frac{\Gamma}{4}\sin\frac{ \varphi_{\rm q}(t)}{2}{=}\frac{1}{2}\). We neglect the sine term due to small \(\Gamma{\ll}1\) prefactor and obtain a rapidly growing linear solution
\[\varphi_{\rm q}=4E_{\rm c}t+B \tag{42}\]
up to small oscillations with an amplitude \(\sim\!\Gamma\).
We have to match the linear solution (42) in the region \(t{\in}[0;\tau]\) with the solutions in the remaining two regions, \([-\infty;0]\) and \([\tau;\infty]\). The analysis shows that, in order to satisfy the above selection rules, the solution in the region \(t{<}0\) should be of the instanton form (41) and in the other region, \(t{>}\tau\), the constant one, Eq. (40), with an even \(n\). The matching conditions (continuity of the solution) uniquely determine free parameters \(A_{\tau}\) and \(B_{\tau}\), as well as the sign of the instanton function, and the even integer \(n_{\tau}{=}2N_{\tau}\), where we added the subscript \(\tau\) to emphasize the \(\tau\)-dependence. In particular, we have
\[N_{\tau}=\lfloor\tau/\tau_{\rm c}+1/2\rfloor \tag{43}\]
where the floor function \(\lfloor x\rfloor\) returns the greatest integer less than or equal to \(x\). Note that the integer-valued function \(N_{\tau}\) is odd, \(N_{-\tau}=-N_{\tau}\).
After some algebra, we find the final result for the quantum trajectory at \(\tau>0\):
\[\Phi_{\tau}(t)=\begin{cases}\phi_{\tau}(t)\,\ t<0;\\ 4\pi N_{\tau}{+}4E_{\rm c}(t{-}\tau),0{<}t{<}\tau;\\ 4\pi N_{\tau}\,\ t>\tau.\end{cases} \tag{44}\]
The solution is shown in Fig. 2 for different \(\tau/\tau_{\rm c}\) ratios. The instanton tail in (44) reads
\[\phi_{\tau}(t)=4(-1)^{m_{\tau}}\arctan\left[e^{\pi\Gamma t/\tau_{\rm c}}\tan\frac{\arccos(\cos(2\pi\frac{\tau}{\tau_{\rm c}}))}{2}\right] \tag{45}\]
where the discrete-valued function \(m_{\tau}\), which determines the overall sign of (45), is given by \(m_{\tau}=1+\left(\lfloor\frac{\tau}{\tau_{\rm c}}+\frac{1}{2}\rfloor+\lfloor\frac{\tau}{\tau_{\rm c}}\rfloor\right)\).
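The closed-form trajectory (44)-(45) can be checked against a direct numerical integration of \(\mathcal{A}[\varphi_{\rm q}]=0\); a sketch of such a check with illustrative parameter values is given below (the backward integration starts from the exact final plateau \(\varphi_{\rm q}=4\pi N_{\tau}\)).

```python
import numpy as np
from scipy.integrate import solve_ivp

Ec, Gam, tau = 1.0, 0.05, 2.6                 # illustrative values; tau_c = pi/Ec
tau_c = np.pi / Ec
N_tau = int(np.floor(tau / tau_c + 0.5))      # Eq. (43); here N_tau = 1

def rhs(t, phi):                              # A[phi_q] = 0 rewritten as dphi_q/dt
    force = 0.5 if 0.0 <= t <= tau else 0.0
    return [8.0 * Ec * (0.25 * Gam * np.sin(0.5 * phi[0]) + force)]

sol = solve_ivp(rhs, [tau, -150.0], [4.0 * np.pi * N_tau],
                max_step=0.01, dense_output=True)
print("phi_q(0)  numeric:", sol.sol(0.0)[0],
      "  linear-region value:", 4.0 * np.pi * N_tau - 4.0 * Ec * tau)
print("phi_q(-150):", sol.sol(-150.0)[0], " (instanton tail relaxes to 0)")
```

The two values of \(\varphi_{\rm q}(0)\) agree up to corrections of order \(\Gamma\), and the tail indeed relaxes to zero, in accordance with the boundary condition \(\varphi_{\rm q}(-\infty)=0\).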
Finally, one finds the boson correlator for arbitrary \(\tau\):
\[\mathcal{D}(\tau)=2ie^{i2\pi Q_{\rm g}N_{\tau}}[\cos(E_{\rm c}\tau)]^{\frac{\xi|V|}{2E_{\rm c}}}\sin\left(\frac{\pi\tau}{\tau_{\rm c}}\right)e^{-\kappa_{\tau}}. \tag{46}\]
This is an oscillating function of \(\tau\) multiplied by a decaying envelope determined by \(\kappa_{\tau}\)=\(\frac{\Gamma}{4}\xi|V|(|\tau|-\frac{\Gamma}{2E_{\rm c}}\sin(2E_{\rm c}|\tau|))\).
Neglecting small decay \(\kappa_{\tau}\) in (46), the following spectrum of excitations is obtained,
\[\mathcal{D}_{\omega}=\sum_{n}\mathcal{W}_{\omega_{n}}\delta(\omega-\omega_{n}). \tag{47}\]
It is a ladder of levels, \(\omega_{n}=2E_{\rm c}(n-Q_{\rm g}-\frac{1}{2})\), corresponding to transitions between the states with energies \(E_{n}\) and \(E_{n-1}\), where \(E_{n}=E_{\rm c}(n-Q_{\rm g})^{2}\) is the energy of a state with \(n\) excess electrons in the island. (We note that the singularities are slightly smeared on the frequency scale \(\sim\Gamma E_{\rm c}\) once \(\kappa_{\tau}\) is taken into account.) The envelope spectral function \(\mathcal{W}_{\omega}\) is
\[\mathcal{W}_{\omega}=-8E_{\rm c}\int\limits_{0}^{\tau_{\rm c}}|\cos E_{\rm c}\tau|^{\frac{\xi|V|}{2E_{\rm c}}}\sin(E_{\rm c}\tau)\sin(\omega\tau)d\tau. \tag{48}\]
It is an odd function of \(\omega\), \(\mathcal{W}_{\omega}=-\mathcal{W}_{-\omega}\). Its negative (positive) values correspond to the rates of absorption (emission) of an electron by a lead at the energy \(\hbar\omega\).
In the high-voltage regime we estimate that the weights \(\mathcal{W}_{\omega_{n}}\) are significant up to \(n\sim\sqrt{\frac{V}{E_{\rm c}}}\). This estimate for \(n\) determines the number of charge states that participate in the transport.
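This estimate is easy to illustrate numerically; the sketch below evaluates the weights (48) at the ladder frequencies \(\omega_{n}\) for illustrative parameter values and shows their decay around \(n\sim\sqrt{V/E_{\rm c}}\).

```python
import numpy as np
from scipy.integrate import quad

Ec, Qg, V, xi = 1.0, 0.3, 40.0, 1.0        # illustrative; high-voltage regime V >> Ec
tau_c = np.pi / Ec

def W(omega):
    """Envelope spectral weight, Eq. (48)."""
    integrand = lambda t: (np.abs(np.cos(Ec * t)) ** (xi * abs(V) / (2.0 * Ec))
                           * np.sin(Ec * t) * np.sin(omega * t))
    val, _ = quad(integrand, 0.0, tau_c, limit=200)
    return -8.0 * Ec * val

for n in range(0, 13, 2):
    w_n = 2.0 * Ec * (n - Qg - 0.5)        # ladder frequency omega_n
    print(f"n={n:2d}   omega_n={w_n:7.2f}   W={W(w_n):10.4f}")
```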
### Asymptotic expressions
Let us come back to the expressions for the currents (30) and (33) and analyze some important cases. We focus on the asymptotic behavior at voltages \(V\gg E_{\rm c}\max\{h,h^{-1}\}\) and integrate over \(\epsilon\) and \(\omega\) analytically. In this regime the decay \(\kappa_{\tau}\) in (46) is negligible. The following identities for the Fourier transforms of the distribution functions, \(f_{\rm M(D)}(\tau)=\int\frac{d\epsilon}{2\pi}e^{-i\epsilon\tau}f_{\rm M(D),\epsilon}\), are used: \(f_{\rm M}(\tau)=-i\frac{\gamma_{\rm L}^{2}\cos[(\frac{V}{2}-\mu)\tau]+\gamma_{\rm R}^{2}\cos[(\frac{V}{2}+\mu)\tau]}{\pi(\gamma_{\rm L}^{2}+\gamma_{\rm R}^{2})\tau}\) and \(f_{\rm D}(\tau)=-ie^{i\mu\tau}\frac{\gamma_{\rm L}^{2}e^{-i\frac{V}{2}\tau}+\gamma_{\rm R}^{2}e^{i\frac{V}{2}\tau}}{\pi(\gamma_{\rm L}^{2}+\gamma_{\rm R}^{2})\tau}\). For the difference of the occupation numbers in the leads, \(\Delta n_{\omega}=n_{\rm L,\omega}-n_{\rm R,\omega}\), we have \(\Delta n(\tau)=\int\frac{d\omega}{2\pi}e^{-i\omega\tau}\Delta n_{\omega}=\frac{\sin\frac{V\tau}{2}}{\pi\tau}\). Then the currents read:
\[I_{\rm M(D)}=gV-\pi g\int e^{-i\mu\tau}\mathcal{D}(\tau)f_{\rm M(D)}(\tau)\Delta n(-\tau)d\tau. \tag{49}\]
The integrals in (49) can be split into a sum of integrals over the intervals \(\tau\in[\tau_{\rm c}(m-\frac{1}{2});\tau_{\rm c}(m+\frac{1}{2})]\), \(m\in\mathbb{Z}\). Their integrands each have a narrow peak in every interval. The peak at \(m=0\) is given by a smoothened singularity in \(\Delta n(\tau)\). In this case, \(\mathcal{D}(\tau)\approx 1\) at the relevant time scale of \(\tau\sim\frac{1}{V}\). The integral for \(m=0\) yields the offset current, \(I_{\rm offs}\). Note that it does not depend on \(Q_{\rm g}\) and is proportional to \(E_{\rm c}\). The integrals over the other \(m\neq 0\) peaks are responsible for the part of the current, \(I_{\rm osc}\), showing the gate charge oscillations. Their amplitude is much smaller than that of the offset current. This means that at high voltages the strong Coulomb blockade is weakened and the charge on the island is not well defined. For the calculation of \(I_{\rm osc}\) the boson correlator \(\mathcal{D}\) becomes important. In a \(\tilde{\tau}\)-vicinity of the \(m\)-th peak, where \(\tau=\tau_{\rm c}m+\tilde{\tau}\), it reads \(\mathcal{D}(\tau)=2i(-1)^{m}\sin(E_{\rm c}\tilde{\tau})e^{i2\pi Q_{\rm g}m}\exp\left(-\frac{\xi}{4}|V|E_{\rm c}\tilde{\tau}^{2}\right)\). Therefore, for both systems the current can be written as
\[I_{\rm M(D)}=gV-I_{\rm M(D),offs}-I_{\rm M(D),osc}. \tag{50}\]
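The \(\sinh\) factors and the exponential suppression that appear in the asymptotics below originate from near-peak Gaussian integrals of the form \(\int\sin(E_{\rm c}\tilde{\tau})e^{i\Omega\tilde{\tau}}e^{-a\tilde{\tau}^{2}}d\tilde{\tau}=i\sqrt{\pi/a}\,e^{-(\Omega^{2}+E_{\rm c}^{2})/4a}\sinh(\Omega E_{\rm c}/2a)\); the sketch below verifies this identity numerically for illustrative parameter values.

```python
import numpy as np
from scipy.integrate import quad

Ec, Omega, a = 1.0, 5.0, 2.0     # a = xi |V| Ec / 4 near a peak; values illustrative

re, _ = quad(lambda t: np.sin(Ec * t) * np.cos(Omega * t) * np.exp(-a * t * t), -12, 12)
im, _ = quad(lambda t: np.sin(Ec * t) * np.sin(Omega * t) * np.exp(-a * t * t), -12, 12)
numeric = re + 1j * im           # the real part vanishes by parity

exact = (1j * np.sqrt(np.pi / a) * np.exp(-(Omega**2 + Ec**2) / (4.0 * a))
         * np.sinh(Omega * Ec / (2.0 * a)))
print(numeric, exact)            # the two agree to quadrature accuracy
```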
### Offset current
The difference between the Majorana and Dirac fermions is most prominent for asymmetric systems. We obtain the following asymptotic results (\(V\gg E_{\rm c}\max\{\frac{\gamma_{\rm R}^{2}}{\gamma_{\rm L}^{2}},\frac{\gamma_{\rm L}^{2}}{\gamma_{\rm R}^{2}}\}\)) for the offset current. In the Majorana case we get
\[I_{\rm M,offs}=g\left(1+\frac{|\gamma_{\rm R}^{2}-\gamma_{\rm L}^{2}|}{2( \gamma_{\rm R}^{2}+\gamma_{\rm L}^{2})}\right)E_{\rm c}. \tag{51}\]
In comparison, in the Dirac case we find
\[I_{\rm D,offs}=gE_{\rm c} \tag{52}\]
for an arbitrary value of \(\gamma_{\rm R}^{2}/\gamma_{\rm L}^{2}\). Therefore, the deficit current in the Majorana case (cf. Eq. (51)) is up to 3/2 times larger than that in the Dirac case.
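The full asymmetry dependence of the ratio of the two offset currents, following (51)-(52), is summarized by the minimal sketch below.

```python
# Ratio I_offs(Majorana) / I_offs(Dirac) as a function of h = gammaR^2 / gammaL^2,
# following Eqs. (51)-(52); it grows from 1 at h = 1 to 3/2 for strong asymmetry.
for h in [1.0, 0.5, 0.1, 0.01, 0.0]:
    ratio = 1.0 + abs(1.0 - h) / (2.0 * (1.0 + h))
    print(f"h={h:5.2f}   I_M_offs / I_D_offs = {ratio:.3f}")
```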
### Gate charge oscillations
We obtain the following asymptotic result at large voltages for the Dirac case:
\[I_{\text{D,osc}} = g\,\text{sign}(V)\frac{\sinh\frac{2}{\xi_{\text{D}}}}{\sqrt{\pi\xi_{\text{D}}}}\sqrt{\frac{E_{\text{c}}^{3}}{|V|}}\,e^{-\frac{|V|}{E_{\text{c}}\xi_{\text{D}}}}\times \tag{53}\] \[\times\frac{F(\frac{V}{2E_{\text{c}}},Q_{\text{g}})+hF(\frac{-V}{2E_{\text{c}}},Q_{\text{g}})}{1+h}.\]
There is an exponential decay of the oscillation amplitude as a function of \(V\). The gate charge oscillation pattern is given by the function \(F(x,y)=(2\,\text{mod}_{1}(y-x+\frac{1}{2})-1)^{2}-\frac{1}{3}\). (The function \(\text{mod}_{1}(z)\) returns the fractional part of \(z\).) It is found after the integration over \(\tilde{\tau}\) in the Gaussian approximation and further summation over \(m\neq 0\) [30].
The function \(F\) has discontinuous derivatives. Namely, the values of \(V\) and \(Q_{\rm g}\) at which \(F(\frac{sV}{2E_{\rm c}},Q_{\rm g})\) reaches its maximal value determine the border lines of the so-called Coulomb diamonds in the differential conductance map. As seen from (53), there are two sets of border lines, \(Q_{\rm g}^{(1,2)}=\pm\frac{V}{2E_{\rm c}}+\frac{1}{2}+n\) (\(n\in\mathbb{Z}\)). In the asymmetric limit with \(h\ll 1\) the lines \(Q_{\rm g}^{(2)}\) are suppressed.
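A short sketch of the pattern function \(F\) (using the form quoted above) and of the resulting border lines is given below; the maxima of \(F\) along \(Q_{\rm g}\) reproduce \(Q_{\rm g}^{(1)}=\frac{V}{2E_{\rm c}}+\frac{1}{2}+n\).

```python
import numpy as np

def F(x, y):
    """Oscillation pattern F(x, y) = (2 mod_1(y - x + 1/2) - 1)^2 - 1/3."""
    u = np.mod(y - x + 0.5, 1.0)
    return (2.0 * u - 1.0) ** 2 - 1.0 / 3.0

x = 0.3                                   # plays the role of V / (2 Ec)
Qg = np.linspace(0.0, 1.0, 11)
print(np.round(F(x, Qg), 3))
print("max at Qg =", Qg[np.argmax(F(x, Qg))], " expected:", x - 0.5 + 1.0)
```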
In the Majorana case the general result is more cumbersome:
\[I_{\rm M,osc} = \frac{g\,{\rm sign}(V)}{2\sqrt{\pi\xi_{\rm M}}}\sqrt{\frac{E_{\rm c}^{3}}{|V|}}\Bigg\{(h-1)\sinh\!\Big(\frac{2\mu}{\xi_{\rm M}|V|}\Big)e^{-\frac{\mu^{2}}{E_{\rm c}\xi_{\rm M}|V|}}F\Big(\frac{\mu}{E_{\rm c}},Q_{\rm g}\Big)+ \tag{54}\] \[\sum_{j,s=\pm 1}\frac{h^{\frac{1+j}{2}}}{1+h}\sinh\!\Big[\frac{2\big(js+(j+1)\frac{\mu}{V}\big)}{\xi_{\rm M}}\Big]\,e^{-\frac{|V|}{E_{\rm c}\xi_{\rm M}}\big(js+(j+1)\frac{\mu}{V}\big)^{2}}F\Big(\frac{jsV+(j+1)\mu}{2E_{\rm c}},Q_{\rm g}\Big)\Bigg\}.\]
The terms with \(j=-1\) and \(s=\pm 1\) in the sum (54) provide the same border lines, \(Q_{\text{g}}^{(1,2)}=\frac{sV}{2E_{\text{c}}}+\frac{1}{2}+n\), as in the Dirac case. The other terms define three additional sets of lines: \(Q_{\text{g}}^{(3,4)}=\frac{sV+\nu_{\text{g}}}{2E_{\text{c}}}+\frac{1}{2}+n\) and \(Q_{\text{g}}^{(5)}=\frac{\mu}{E_{\text{c}}}+\frac{1}{2}+n\). Note that \(Q_{\text{g}}^{(3,4,5)}\) depend on \(\mu=\frac{(1-h)V}{2(1+h)}\) and hence make the patterns more complicated. We also find that in two particular cases, fully symmetric (\(h=1\)) and completely asymmetric (\(h=0\) or \(h=\infty\)) systems, the patterns for \(I_{\text{D}}\) and \(I_{\text{M}}\) coincide.
### Graphical presentation of the results
In Fig. 3 we plot the normalized differential conductance as a function of the gate charge and transport voltage for the Dirac [panels (a)-(d)] and Majorana [panels (e)-(h)] devices. The data are obtained by exact integration over time in (49). These plots demonstrate the vanishing of the border line \(Q_{\text{g}}^{(2)}\) at small \(h\) in strongly asymmetric Dirac devices, in accordance with the asymptotic result (53). We also observe the emergence of the three additional border lines \(Q_{\text{g}}^{(3,4,5)}\) in the Majorana device at \(h\neq 1\) [panel (f)], as predicted by (54).
In Fig. 3 we used our formalism down to zero voltage. Quantitatively, the differential conductance obtained in our formalism is not accurate at low voltages. We are confident, however, that the pattern is qualitatively correct and reflects the features of the strong Coulomb blockade behavior.
In asymmetric junctions, the exponentially small oscillatory contributions are not symmetric under a change of the voltage sign, \(V\rightarrow-V\), i.e., \(I_{\text{osc}}(V,Q_{\text{g}})\neq-I_{\text{osc}}(-V,Q_{\text{g}})\). The exception is the points \(Q_{\text{g}}=\frac{n}{2}\) (\(n\in\mathbb{Z}\)), where the poles of \(\mathcal{W}_{\omega}\) are symmetric with respect to \(\omega=0\). This asymmetry is more visible at low voltages, as shown in the insets of panels (c) and (g). It points to a possible diode-like behavior of the asymmetric devices at low \(V\).
In Fig. 4 we plot the non-equilibrium TDoS, \(\nu_{\text{M},\omega}\) and \(\nu_{\text{D},\omega}\), which demonstrates the structure of the Coulomb gap. Note that in the Majorana case the TDoS is always symmetric around the chemical potential \(\mu\) (dashed line) for any \(h\). This is dictated by the particle-hole symmetry of \(f_{\text{M,e}}\). In Fig. 5 we demonstrate the broadening of the Coulomb gap as the voltage increases. Shaded regions mark the energy domain whose states contribute to the electric current.
## IV Discussion
The instanton-like solution presented in Eq. (44) provides only the quantum component of the phase. The classical component, by contrast, is not uniquely defined, and we have to integrate over all its realizations. This is precisely the difference from the non-equilibrium instanton approach developed in Ref. [15], which is valid for \(g\gg 1\), i.e., for the weak Coulomb blockade, and in which the instanton trajectory fixes both the quantum and classical phase components. In our case \(g\ll 1\), so the Coulomb blockade is strong at low voltages and is lifted at voltages higher than the charging energy.
Our approach based on the AES action allows us to quantitatively reproduce some known results obtained within the charge representation. One such result is the offset current, a characteristic feature of charging effects at high voltages. In the Dirac case we found \(I_{\text{D,offs}}=gE_{\text{c}}\), which fully coincides with the offset current in a single tunnel junction with dimensionless conductance \(g\) [2]. Another example is the threshold voltage \(V=E_{\text{c}}\) for \(Q_{\text{g}}=0\) and \(\gamma_{\text{R}}=\gamma_{\text{L}}\), which is two times lower than that in the "orthodox" theory. This result can be easily obtained from Ref. [10] and is due to the non-equilibrium distribution function in the dot. The Coulomb blockade is lifted above this threshold, and we reproduce the threshold value \(V=E_{\text{c}}\) in our numerics (see Fig. 3 [panel (a)]).
One of the central results is the unconventional offset current found for the Majorana island, \(I_{\text{M,offs}}=q_{\text{B}}\,gE_{\text{c}}\), where the non-universal prefactor \(1\leq q_{\text{B}}\leq\frac{3}{2}\) (cf. Eq. (28)) depends on the asymmetry of the SET. This could serve as evidence of non-equilibrium chiral Majorana fermions in the island. In addition, the gate charge oscillations show distinctive features in the Majorana case, as shown in Fig. 3 [panel (f)]. Such measurements could provide an alternative to interferometry [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41] or time-resolved transport [42; 43; 44; 45; 46; 47] in Majorana devices.
## V Summary
In this work, we studied non-equilibrium transport in single-electron transistors in which the strong Coulomb blockade is suppressed by large voltages. Two different kinds of quantum dots were considered: islands with chiral Dirac or chiral Majorana circular modes. These could be the edge states of usual or anomalous quantum Hall insulators (Dirac) or of proximity-induced 2D topological superconductors (Majorana). The results of this work are twofold. First, we calculated the non-equilibrium tunneling density of states and current-voltage relations. We found an unusual behavior of the offset current in the Majorana case, as well as distinctive features in the residual gate charge oscillations of the transport current (Coulomb diamond). Second, on the methodological level, we developed an instanton-like approach in the Keldysh formalism in the limit of small conductances and high voltages.
###### Acknowledgements.
This research was financially supported by the DFG Grants No. MI 658/12-1, MI 658/13-1, SH 81/6-1, and by RFBR Grant No. 20-52-12034.
|
2303.13309 | New Results on Single User Massive MIMO | Achieving high bit rates is the main goal of wireless technologies like 5G
and beyond. This translates to obtaining high spectral efficiencies using large
number of antennas at the transmitter and receiver (single user massive
multiple input multiple output or SU-MMIMO). It is possible to have a large
number of antennas in the mobile handset at mm-wave frequencies in the range
$30 - 300$ GHz due to the small antenna size. In this work, we investigate the
bit-error-rate (BER) performance of SU-MMIMO in two scenarios (a) using
serially concatenated turbo code (SCTC) in uncorrelated channel and (b)
parallel concatenated turbo code (PCTC) in correlated channel. Computer
simulation results indicate that the BER is quite insensitive to
re-transmissions and wide variations in the number of transmit and receive
antennas. Moreover, we have obtained a BER of $10^{-5}$ at an average
signal-to-interference plus noise ratio (SINR) per bit of just 1.25 dB with 512
transmit and receive antennas ($512\times 512$ SU-MMIMO system) with a spectral
efficiency of 256 bits/transmission or 256 bits/sec/Hz in an uncorrelated
channel. Similar BER results have been obtained for SU-MMIMO using PCTC in
correlated channel. A semi-analytic approach to estimating the BER of a turbo
code has been derived. | Kasturi Vasudevan, Surendra Kota, Lov Kumar, Himanshu Bhusan Mishra | 2023-03-23T14:43:13Z | http://arxiv.org/abs/2303.13309v1 | # New Results on Single User Massive MIMO
###### Abstract
Achieving high bit rates is the main goal of wireless technologies like 5G and beyond. This translates to obtaining high spectral efficiencies using large number of antennas at the transmitter and receiver (single user massive multiple input multiple output or SU-MMIMO). It is possible to have a large number of antennas in the mobile handset at mm-wave frequencies in the range \(30-300\) GHz due to the small antenna size. In this work, we investigate the bit-error-rate (BER) performance of SU-MMIMO in two scenarios (a) using serially concatenated turbo code (SCTC) in uncorrelated channel and (b) parallel concatenated turbo code (PCTC) in correlated channel. Computer simulation results indicate that the BER is quite insensitive to re-transmissions and wide variations in the number of transmit and receive antennas. Moreover, we have obtained a BER of \(10^{-5}\) at an average signal-to-interference plus noise ratio (SINR) per bit of just 1.25 dB with 512 transmit and receive antennas (\(512\times 512\) SU-MMIMO system) with a spectral efficiency of 256 bits/transmission or 256 bits/sec/Hz in an uncorrelated channel. Similar BER results have been obtained for SU-MMIMO using PCTC in correlated channel. A semi-analytic approach to estimating the BER of a turbo code has been derived.
Single user massive multiple input multiple output (SU-MMIMO), Rayleigh fading, serially concatenated turbo code (SCTC), parallel concatenated turbo code (PCTC), spectral efficiency (SE), signal-to-interference plus noise ratio (SINR) per bit, spatial multiplexing, bit-error-rate (BER).
## I Introduction
As wireless technologies evolve beyond 5G [1, 2, 3], there is a growing need to attain peak data rates of about gigabits per second per user, which is required for high definition video, remote surgery, autonomous vehicles, gaming and so on, while at the same time consuming minimum transmit power. This can only be achieved by using multiple antennas at the transmitter and receiver [4, 5, 6, 7, 8], small constellations like quadrature phase shift keying (QPSK) and powerful error correcting codes like turbo or low density parity check (LDPC) codes. Having a large number of antennas in the mobile handset is feasible at mm-wave frequencies [9, 10, 11, 12] (\(30-300\) GHz) due to the small antenna size. The main concern about mm-wave communications has been the rather high attenuation in outdoor environments with rain and snow [13]. Therefore, at least in the initial stages, mm wave could be deployed indoors. The second issue relates to the poor penetration characteristics of mm waves through walls, doors, windows and other materials. This points towards the usage of mm wave [9] in a single room, say a big auditorium or an underground parking lot. Reconfigurable intelligent surfaces (RIS) [14, 15, 16, 17] could be used to boost the propagation of mm waves, both indoors and outdoors.
Most of the massive MIMO systems discussed in the literature are multi-user (MU) [18, 19, 20, 21, 22, 23, 24, 25, 26], that is, the base station has a large number of antennas and the mobile handset has only a single antenna (\(N_{t}=1\)). A large number of users are served simultaneously by the base station. A comparison between MU-MMIMO and SU-MMIMO is given in Table I[27],[28].
The base station in MU-MMIMO uses beamforming to improve the signal-to-noise ratio at the mobile handset. On the other hand, SU-MMIMO uses spatial multiplexing to improve the spectral efficiency in the downlink and uplink. The comparison between beamforming and spatial multiplexing is given in Table II [27], [28]. The total transmit power of SU-MMIMO using uncoded QPSK versus MU-MMIMO using \(M\)-ary QAM is shown in Table III. The minimum Euclidean distance between symbols of all constellations is taken to be 2. The peak-to-average power ratio (PAPR) for SU-MMIMO using QPSK is compared with that of MU-MMIMO using \(M\)-ary QAM in Table IV [27]. Of course, in the case of frequency-selective fading channels, OFDM needs to be used, which would result in a PAPR greater than 0 dB even for QPSK signalling. It is clear from Tables I - IV that technologies that use SU-MMIMO have a lot to gain.
Moreover, since all transmit antennas use the same carrier frequency, there is no increase in bandwidth.
SU-MMIMO with an equal number of transmit and receive antennas is studied in [29], [30]. The probability of erasure in MIMO-OFDM is presented in [31]. A practical SU-MMIMO receiver with estimated channel, carrier frequency offset and timing is described in [32], [33]. SU-MMIMO with an unequal number of transmit and receive antennas and precoding is discussed in [34], [35], and the case without precoding in [36], [37]. All earlier research on SU-MMIMO involved the use of a parallel concatenated turbo code (PCTC) and an uncorrelated channel. In this work, we investigate the performance of SU-MMIMO using (a) a serially concatenated turbo code (SCTC) in an uncorrelated channel and (b) a PCTC in a correlated channel. Throughout this article we assume that the channel is known perfectly at the receiver. Perfect carrier and timing synchronization is also assumed.
This work is organized as follows. Section II discusses SU-MMIMO with SCTC in uncorrelated channel, the procedure for bit-error-rate (BER) estimation and computer simulation results. Section III deals with SU-MMIMO using PCTC in correlated channel with and without precoding along with computer simulation results. Section IV presents the conclusions and scope for future work.
## II SU-MMIMO with SCTC
### _System Model_
Consider the block diagram in Figure 1 [36], [38]. The input bits \(a_{i}\), \(1\leq i\leq L_{d1}\), are passed through an outer rate-\(1/2\) recursive systematic convolutional (RSC) encoder to obtain the coded bit stream \(b_{i}\), \(1\leq i\leq L_{d}\), where
\[L_{d}=2L_{d1}. \tag{1}\]
Now \(b_{i}\) is input to an interleaver to generate \(c_{i}\), \(1\leq i\leq L_{d}\). Next, \(c_{i}\) is passed through an inner rate-\(1/2\) RSC encoder and mapped to symbols \(S_{i}\), \(1\leq i\leq L_{d}\), in a quadrature phase shift keyed (QPSK) constellation with symbol coordinates \(\pm 1\pm\) j, where j \(=\sqrt{-1}\). Throughout this article we assume that bit "0" maps to \(+1\) and bit "1" maps to \(-1\). The set of \(L_{d}\) QPSK symbols constitutes a "frame" and is transmitted using \(N_{t}\) antennas. We assume that
\[\frac{L_{d}}{N_{t}}=\text{an integer} \tag{2}\]
so that all symbols in the frame are transmitted using \(N_{t}\) antennas. The set of QPSK symbols transmitted simultaneously using \(N_{t}\) antennas constitute a "block". The generator matrix for both the inner and outer rate-\(1/2\) RSC encoder is given by
\[\mathbf{G}(D)=\left[\begin{array}{cc}1&\frac{1+D^{2}}{1+D+D^{2}}\end{array} \right]. \tag{3}\]
Hence, both encoders have \(S_{E}=4\) states in the trellis. Assuming uncorrelated Rayleigh flat fading, the received signal for the \(k^{th}\) re-transmission (\(0\leq k\leq N_{rt}-1\), \(k\) is an integer) is given by (2) of [36], which is repeated here for convenience
\[\mathbf{\tilde{R}}_{k}=\mathbf{\tilde{H}}_{k}\mathbf{S}+\mathbf{\tilde{W}}_{k} \tag{4}\]
where \(\mathbf{S}\in\mathbb{C}^{N_{t}\times 1}\) whose elements are drawn from the QPSK constellation, \(\mathbf{\tilde{H}}_{k}\in\mathbb{C}^{N_{r}\times N_{t}}\) whose elements are mutually independent and \(\mathscr{N}(0,\,2\sigma_{H}^{2})\), and \(\mathbf{\tilde{W}}_{k}\in\mathbb{C}^{N_{r}\times 1}\) is the additive white Gaussian noise (AWGN) vector whose elements are mutually independent and \(\mathscr{N}(0,\,2\sigma_{W}^{2})\). Note that \(\sigma_{H}^{2}\), \(\sigma_{W}^{2}\) denote the variance per dimension (real part or imaginary part) and \(N_{r}\) is the number of receive antennas. We assume that \(\mathbf{\tilde{H}}_{k}\) and \(\mathbf{\tilde{W}}_{k}\) are independent across blocks and re-transmissions, hence (4) in [29] is valid with \(N\) replaced by \(N_{t}\). Recall that (see also (16) of [36])
\[N_{\text{tot}}=N_{t}+N_{r}. \tag{5}\]
Following the procedure given in Section 4 of [36] we get (see (36) of [36])
\[\tilde{Y}_{i}=F_{i}S_{i}+\tilde{U}_{i}\qquad\text{for }1\leq i\leq N_{t}. \tag{6}\]
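As a concrete illustration of (4)-(6), the following minimal Python sketch (ours, with small illustrative dimensions; all variable names are local to the sketch) generates one block according to (4), applies matched filtering with the known channel, and averages over the re-transmissions:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Nrt = 4, 8, 2          # small illustrative sizes
sigma_H, sigma_W = 1.0, 0.1    # per-dimension standard deviations

# one QPSK block, symbols from {+/-1 +/- j}
S = rng.choice([-1.0, 1.0], Nt) + 1j * rng.choice([-1.0, 1.0], Nt)

Y = np.zeros(Nt, dtype=complex)
for k in range(Nrt):           # channel and noise are redrawn per re-transmission
    H = sigma_H * (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt)))
    W = sigma_W * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
    R = H @ S + W              # eq. (4)
    Y += H.conj().T @ R        # matched filtering by the channel Hermitian
Y /= Nrt                       # averaging over re-transmissions

# Y[i] ~ F_i S_i + U_i with E[F_i] = 2 Nr sigma_H^2, cf. eq. (6)
print(np.round(Y / (2 * Nr * sigma_H**2), 2), S)
```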
After concatenation over blocks, \(\tilde{Y}_{i}\) in (6) for \(1\leq i\leq L_{d}\) is sent to the turbo decoder (see also the sentence after (25) in [29]). For the sake of consistency with earlier work [38], we re-index \(i\) as \(0\leq i\leq L_{d}-1\) and use the same index \(i\) for \(a_{i}\), \(b_{i}\), \(c_{i}\) and \(Y_{i}\) without any ambiguity. In the next subsection, we discuss the turbo decoding (BCJR) algorithm [39], [40] for the inner code.
### _BCJR for the Inner Code_
Let \(\mathscr{D}_{n}\) denote the set of states that diverge from state \(n\) in the trellis [38], [40]. Similarly, let \(\mathscr{C}_{n}\) denote the set of states that converge to state \(n\). Let \(\alpha_{i,\,n}\) denote the forward sum-of-products (SOP) at time \(i\), \(0\leq i\leq L_{d}-2\), at state \(n\), \(0\leq n\leq S_{E}-1\). Then the forward SOP can be recursively computed as follows (see also (30) of [38]):
\[\alpha^{\prime}_{i+1,\,n}=\sum_{m\in\mathscr{C}_{n}}\alpha_{i,\,m}\gamma_{i,\, m,\,n}P(c_{i,\,m,\,n})\]
\[\alpha_{0,\,n} = 1\] \[\alpha_{i+1,\,n} = \alpha^{\prime}_{i+1,\,n}\Bigg{/}\Bigg{(}\sum_{n=0}^{S_{E}-1}\alpha ^{\prime}_{i+1,\,n}\Bigg{)} \tag{7}\]
where \(P(c_{i,m,n})\) denotes the _a priori_ probability of the systematic bit corresponding to the transition from encoder state \(m\) to \(n\), at time \(i\) (this is set to 0.5 at the beginning of the first iteration). The last equation in (7) is required to prevent numerical instabilities [40]. We have
\[\gamma_{i,\,m,\,n}=\exp\left(-\frac{\left|\tilde{Y}_{i}-S_{m,\,n}\right|^{2}}{2\sigma_{U}^{2}}\right) \tag{8}\]
where \(\tilde{Y}_{i}\) is given by (6), \(S_{m,\,n}\) is the QPSK symbol corresponding to the transition from encoder state \(m\) to \(n\) and \(\sigma_{U}^{2}\) is given by (38) of [36] which is repeated here for convenience:
\[E\left[\left|\tilde{U}_{i}\right|^{2}\right] = \frac{8\sigma_{H}^{4}N_{r}(N_{t}-1)+4\sigma_{W}^{2}\sigma_{H}^{2} N_{r}}{N_{rt}} \tag{9}\] \[\overset{\Delta}{=}\sigma_{U}^{2}.\]
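A minimal sketch of the normalized forward recursion (7) with the branch metric (8) is given below; the trellis tables `nxt` and `sym` for the 4-state RSC code of (3) are assumed to be precomputed, and the squared magnitude of the complex difference is used in the metric:

```python
import numpy as np

def forward_sop(Y, nxt, sym, sigma_U2, prior):
    """Normalized forward recursion, eqs. (7)-(8).
    Y[i]: matched-filter output (6); prior[i, b]: a priori P(c_i = b).
    nxt[m, b]: next state from state m on input bit b; sym[m, b]: branch symbol."""
    S_E, L = nxt.shape[0], len(Y)
    alpha = np.zeros((L + 1, S_E))
    alpha[0, :] = 1.0                                  # alpha_{0,n} = 1
    for i in range(L):
        a_new = np.zeros(S_E)
        for m in range(S_E):
            for b in (0, 1):
                gamma = np.exp(-np.abs(Y[i] - sym[m, b])**2 / (2 * sigma_U2))
                a_new[nxt[m, b]] += alpha[i, m] * gamma * prior[i, b]
        alpha[i + 1, :] = a_new / a_new.sum()          # normalization in (7)
    return alpha
```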
Figure 1: SU-MMIMO with serially concatenated turbo code.

Robust turbo decoding (see section 4.2 of [41]) can be employed to compute \(\gamma_{i,\,m,\,n}\) in (8). Similarly, let \(\beta_{i,\,m}\) denote the backward SOP at time \(i\), \(1\leq i\leq L_{d}-1\), at state \(m\), \(0\leq m\leq S_{E}-1\). Then the backward SOP can be recursively computed as (see also (33) of [38]):
\[\beta_{i,\,m}^{\prime} =\sum_{n\in\mathscr{D}_{m}}\beta_{i+1,\,n}\gamma_{i,\,m,\,n}P(c_{i, \,m,\,n})\] \[\beta_{L_{d},\,m} =1\] \[\beta_{i,\,m} =\beta_{i,\,m}^{\prime}\Bigg{/}\Bigg{(}\sum_{m=0}^{S_{E}-1}\beta_ {i,\,m}^{\prime}\Bigg{)} \tag{10}\]
Let \(\rho^{+}(n)\) denote the state that is reached from encoder state \(n\) when the input symbol is \(+1\). Similarly let \(\rho^{-}(n)\) denote the state that can be reached from encoder state \(n\) when the input symbol is \(-1\). Then for \(0\leq i\leq L_{d}-1\) we compute
\[C_{i+} =\sum_{n=0}^{S_{E}-1}\alpha_{i,\,n}\gamma_{i,\,n,\,\rho^{+}(n)} \beta_{i+1,\,\rho^{+}(n)}\] \[C_{i-} =\sum_{n=0}^{S_{E}-1}\alpha_{i,\,n}\gamma_{i,\,n,\,\rho^{-}(n)} \beta_{i+1,\,\rho^{-}(n)}. \tag{11}\]
Finally, the extrinsic information that is fed to the BCJR algorithm for the outer code is computed as, for \(0\leq i\leq L_{d}-1\), (see (36) of [38]):
\[E\left(c_{i}=+1\right) =C_{i+}/(C_{i+}+C_{i-})\] \[E\left(c_{i}=-1\right) =C_{i-}/(C_{i+}+C_{i-}). \tag{12}\]
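The backward recursion (10) and the extrinsic computation (11)-(12) can be sketched in the same style (again assuming precomputed trellis tables; `alpha` is the output of the forward recursion above; per (11), the a priori term \(P(c_{i,\,m,\,n})\) is excluded from the extrinsic):

```python
import numpy as np

def extrinsic_info(Y, nxt, sym, sigma_U2, prior, alpha):
    """Backward recursion (10) and extrinsic output (11)-(12)."""
    S_E, L = nxt.shape[0], len(Y)
    beta = np.zeros((L + 1, S_E))
    beta[L, :] = 1.0                                   # beta_{L_d, m} = 1
    for i in range(L - 1, -1, -1):
        b_new = np.zeros(S_E)
        for m in range(S_E):
            for b in (0, 1):
                gamma = np.exp(-np.abs(Y[i] - sym[m, b])**2 / (2 * sigma_U2))
                b_new[m] += beta[i + 1, nxt[m, b]] * gamma * prior[i, b]
        beta[i, :] = b_new / b_new.sum()               # normalization in (10)
    E = np.zeros((L, 2))                               # E[i, 0] <-> c_i = +1
    for i in range(L):
        for b in (0, 1):
            for m in range(S_E):
                gamma = np.exp(-np.abs(Y[i] - sym[m, b])**2 / (2 * sigma_U2))
                E[i, b] += alpha[i, m] * gamma * beta[i + 1, nxt[m, b]]
        E[i, :] /= E[i, :].sum()                       # eq. (12)
    return E
```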
Next, we describe the BCJR for the outer code.
### BCJR for the Outer Code
Let \(\alpha_{i,\,n}\) denote the forward SOP at time \(i,\,0\leq i\leq L_{d1}-2\), at state \(n,\,0\leq n\leq S_{E}-1\). Then the forward SOP is recursively computed as follows:
\[\alpha_{i+1,\,n}^{\prime} =\sum_{m\in\mathscr{C}_{n}}\alpha_{i,\,m}\gamma_{\text{sys},\,i,\,m,\,n}\gamma_{\text{par},\,i,\,m,\,n}P(a_{i,\,m,\,n})\] \[\alpha_{0,\,n} =1\] \[\alpha_{i+1,\,n} =\alpha_{i+1,\,n}^{\prime}\Bigg{/}\Bigg{(}\sum_{n=0}^{S_{E}-1}\alpha_{i+1,\,n}^{\prime}\Bigg{)} \tag{13}\]
where \(P(a_{i,\,m,\,n})\) denotes the _a priori_ probability of the systematic bit corresponding to the transition from state \(m\) to state \(n\), at time \(i\). In the absence of any other information, we assume \(P(a_{i,\,m,\,n})=0.5\)[42]. We also have for \(0\leq i\leq L_{d1}-1\) (similar to (38) of [38])
\[\gamma_{\text{sys},\,i,\,m,\,n} =\left\{\begin{array}{ll}E\left(c_{\pi(2i)}=+1\right)&\text{if }\mathscr{H}_{1}\\ E\left(c_{\pi(2i)}=-1\right)&\text{if }\mathscr{H}_{2}\end{array}\right.\] \[\gamma_{\text{par},\,i,\,m,\,n} =\left\{\begin{array}{ll}E\left(c_{\pi(2i+1)}=+1\right)&\text{if }\mathscr{H}_{3}\\ E\left(c_{\pi(2i+1)}=-1\right)&\text{if }\mathscr{H}_{4}\end{array}\right. \tag{14}\]
where \(\pi(\cdot)\) denotes the interleaver map and
\[\mathscr{H}_{1}:\text{systematic bit from state $m$ to $n$ is $+1$}\] \[\mathscr{H}_{2}:\text{systematic bit from state $m$ to $n$ is $-1$}\] \[\mathscr{H}_{3}:\text{parity bit from state $m$ to $n$ is $+1$}\] \[\mathscr{H}_{4}:\text{parity bit from state $m$ to $n$ is $-1$}. \tag{15}\]
Observe that in (14) and (15) it is assumed that after the parallel-to-serial conversion in Figure 1, \(b_{2i}\) corresponds to the systematic (data) bits and \(b_{2i+1}\) corresponds to the parity bits for \(0\leq i\leq L_{d1}-1\).
Similarly, let \(\beta_{i,\,m}\) denote the backward SOP at time \(i,\,1\leq i\leq L_{d1}-1\), at state \(m,\,0\leq m\leq S_{E}-1\). Then the backward SOP can be recursively computed as:
\[\beta_{i,\,m}^{\prime} =\sum_{n\in\mathscr{D}_{m}}\beta_{i+1,\,n}\gamma_{\text{sys},\,i, \,m,\,n}\gamma_{\text{par},\,i,\,m,\,n}P(a_{i,\,m,\,n})\] \[\beta_{L_{d1},\,m} =1\] \[\beta_{i,\,m} =\beta_{i,\,m}^{\prime}\Bigg{/}\Bigg{(}\sum_{m=0}^{S_{E}-1} \beta_{i,\,m}^{\prime}\Bigg{)}\,. \tag{16}\]
Next, for \(0\leq i\leq L_{d1}-1\) we compute
\[B_{2i+} =\sum_{n=0}^{S_{E}-1}\alpha_{i,\,n}\gamma_{\text{par},\,i,\,n,\, \rho^{+}(n)}\beta_{i+1,\,\rho^{+}(n)}\] \[B_{2i-} =\sum_{n=0}^{S_{E}-1}\alpha_{i,\,n}\gamma_{\text{par},\,i,\,n,\, \rho^{-}(n)}\beta_{i+1,\,\rho^{-}(n)}. \tag{17}\]
Let \(\mu^{+}(n)\) and \(\mu^{-}(n)\) denote the states that are reached from state \(n\) when the parity bit is \(+1\) and \(-1\), respectively. Similarly for \(0\leq i\leq L_{d1}-1\) compute
\[B_{2i+1+} =\sum_{n=0}^{S_{E}-1}\alpha_{i,\,n}\gamma_{\text{sys},\,i,\,n,\,\mu^ {+}(n)}\beta_{i+1,\,\mu^{+}(n)}\] \[B_{2i+1-} =\sum_{n=0}^{S_{E}-1}\alpha_{i,\,n}\gamma_{\text{sys},\,i,\,n,\, \mu^{-}(n)}\beta_{i+1,\,\mu^{-}(n)}. \tag{18}\]
The extrinsic information that is sent to the inner decoder for \(0\leq i\leq L_{d}-1\) is computed as
\[E\left(b_{i}=+1\right) =B_{i+}/(B_{i+}+B_{i-})\] \[E\left(b_{i}=-1\right) =B_{i-}/(B_{i+}+B_{i-}) \tag{19}\]
where \(B_{i+},\,B_{i-}\) are given by (17) or (18) depending on whether \(i\) is even or odd respectively. Note that \(P(c_{i,\,m,\,n})\) for \(0\leq i\leq L_{d}-1\) in (7) and (10) is equal to
\[P\left(c_{i,\,m,\,n}\right)=\left\{\begin{array}{ll}E\left(b_{ \pi^{-1}(i)}=+1\right)&\text{if }\mathscr{H}_{1}\\ E\left(b_{\pi^{-1}(i)}=-1\right)&\text{if }\mathscr{H}_{2}\end{array}\right. \tag{20}\]
where \(\pi^{-1}(\cdot)\) denotes the inverse interleaver map. Note that \(c_{i,\,m,\,n}\) are the systematic (data) bits for the inner encoder.
After the convergence of the BCJR algorithm in the last iteration, the final _a posteriori_ probabilities of \(a_{i}\) for \(0\leq i\leq L_{d1}-1\) are given by
\[P\left(a_{i}=+1\right) =E\left(b_{2i}=+1\right)E\left(c_{\pi(2i)}=+1\right)\] \[P\left(a_{i}=-1\right) =E\left(b_{2i}=-1\right)E\left(c_{\pi(2i)}=-1\right) \tag{21}\]
where \(E\left(c_{i}=\pm 1\right)\) and \(E\left(b_{i}=\pm 1\right)\) are given by (12) and (19) respectively. Finally note that for \(0\leq i\leq L_{d1}-1\)
\[a_{i}=b_{2i}=c_{\pi(2i)}. \tag{22}\]
In the next section we present the estimation of the bit-error-rate (BER) of the SCTC.
Figure 3: Normalized histogram over two frames (\(F=2\)) for \(N_{\rm tot}=1024\), \(N_{t}=512\), \(N_{rt}=2\) (a) \(L_{d1}=1024\), \(\text{SNR}_{\text{av},\,b}=1.25\) dB and (b) \(L_{d1}=50176\), \(\text{SNR}_{\text{av},\,b}=0.5\) dB.
### _Estimation of BER_
The estimation of BER of SCTC is based on the following propositions:
_Proposition 1: The extrinsic information as computed in (12) and (19) lies in the range \([0,\,1]\) (0 and 1 included). The extrinsic information in the range \((0,\,1)\), 0 and 1 excluded, is Gaussian distributed [43] for each frame._
This is illustrated in Figure 2 for different values of the frame length \(L_{d1}\), over many frames (\(F\)). We find that for large values of \(L_{d1}\), the histogram better approximates the Gaussian characteristic. It may be noted that the extrinsic information at the output of one decoder is equal to the _a priori_ probabilities for the other decoder.
_Proposition 2: After convergence of the BCJR algorithm in the final iteration, the extrinsic information at a decoder output has the same mean and variance as that of the a priori probability at its input._
_Proposition 3: The mean and variance of the Gaussian distribution may vary from frame to frame._
This is illustrated in Figure 3 over two frames, that is, \(F=2\).
Based on _Propositions 1 & 2_ and (22), after convergence of the BCJR algorithm, we can write for \(0\leq i\leq L_{d1}-1\)
\[E\left(b_{2i}=+1\right) =\frac{1}{\sigma\sqrt{2\pi}}\mathrm{e}^{-\left(r_{1,\,i}-A\right) ^{2}/(2\sigma^{2})}\] \[E\left(c_{\pi(2i)}=+1\right) =\frac{1}{\sigma\sqrt{2\pi}}\mathrm{e}^{-\left(r_{2,\,i}-A\right) ^{2}/(2\sigma^{2})} \tag{23}\]
where it is assumed that bit "0" maps to \(A\) and bit "1" maps to \(-A\) and
\[r_{1,\,i} =\pm A+w_{1,\,i}\] \[r_{2,\,i} =\pm A+w_{2,\,i} \tag{24}\]
where \(w_{1,\,i}\), \(w_{2,\,i}\) denote real-valued samples of zero-mean additive white Gaussian noise (AWGN) with variance \(\sigma^{2}\). Similarly we have
\[E\left(b_{2i}=-1\right) =\frac{1}{\sigma\sqrt{2\pi}}\mathrm{e}^{-\left(r_{1,\,i}+A\right) ^{2}/(2\sigma^{2})}\] \[E\left(c_{\pi(2i)}=-1\right) =\frac{1}{\sigma\sqrt{2\pi}}\mathrm{e}^{-\left(r_{2,\,i}+A\right) ^{2}/(2\sigma^{2})}. \tag{25}\]
Clearly
\[\ln\left(\frac{E\left(b_{2i}=+1\right)}{E\left(b_{2i}=-1\right)} \right) =\frac{2A}{\sigma^{2}}r_{1,\,i}\] \[\ln\left(\frac{E\left(c_{\pi(2i)}=+1\right)}{E\left(c_{\pi(2i)}= -1\right)}\right) =\frac{2A}{\sigma^{2}}r_{2,\,i}. \tag{26}\]
From (21) and (26) we have for \(0\leq i\leq L_{d1}-1\)
\[\ln\left(\frac{P\left(a_{i}=+1\right)}{P\left(a_{i}=-1\right)}\right) =\frac{2A}{\sigma^{2}}\left(r_{1,\,i}+r_{2,\,i}\right)\] \[\triangleq\frac{2A}{\sigma^{2}}r_{3,\,i}. \tag{27}\]
Consider the average
\[\mathscr{Y} =\frac{2A}{\sigma^{2}L_{d2}}\sum_{i=0}^{L_{d2}-1}a_{i}r_{3,\,i}\] \[=\frac{4A^{2}}{\sigma^{2}}+\mathscr{Z} \tag{28}\]
where
\[\mathscr{Z} =\frac{2A}{\sigma^{2}L_{d2}}\sum_{i=0}^{L_{d2}-1}a_{i}\left(w_{1, \,i}+w_{2,\,i}\right)\] \[L_{d2} \leq L_{d1}. \tag{29}\]
Note that the average in (28) is done over less than \(L_{d1}\) terms to avoid situations like
\[P\left(a_{i}=\pm 1\right)=1\text{ or }0. \tag{30}\]
In fact, only those time instants \(i\) have been considered in the summation of (28) for which
\[P\left(a_{i}=\pm 1\right)>\mathrm{e}^{-500}. \tag{31}\]
Now
\[E\left[\mathscr{Z}\right] =0\] \[E\left[\mathscr{Z}^{2}\right] =\frac{4A^{2}}{\sigma^{4}L_{d2}^{2}}\sum_{i=0}^{L_{d2}-1}2\sigma^{2}\] \[=\frac{8A^{2}}{\sigma^{2}L_{d2}} \tag{32}\]
where we have used the fact that \(w_{1,\,i}\), \(w_{2,\,i}\) are independent. Now, we know that the probability of error for the BPSK signal in (27), that is
\[r_{3,\,i}=r_{1,\,i}+r_{2,\,i}=\pm 2A+w_{1,\,i}+w_{2,\,i} \tag{33}\]
is equal to [40]
\[P(e)=\frac{1}{2}\mathrm{erfc}\left(\sqrt{\frac{A^{2}}{\sigma^{2}}}\right). \tag{34}\]
Therefore from (28), (32) and (34) we have
\[P_{f}(e)\approx\frac{1}{2}\mathrm{erfc}\left(\sqrt{\frac{|\mathscr{Y}|}{4}}\right) \tag{35}\]
where \(P_{f}(e)\) denotes the probability of bit error for frame "\(f\)" and
\[E\left[\mathscr{Z}^{2}\right]\to 0\qquad\text{for }L_{d2}\gg 1. \tag{36}\]
Observe that it is necessary to take the absolute value of \(\mathscr{Y}\) in (35) since there is a possibility that it can be negative. The average probability of bit error over \(F\) frames is given by
\[P(e)=\frac{1}{F}\sum_{f=0}^{F-1}P_{f}(e). \tag{37}\]
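A minimal sketch of this semi-analytic estimator (assuming the decoder supplies the per-bit log-ratios \(\ln\left(P(a_{i}=+1)/P(a_{i}=-1)\right)\) and that the transmitted bits are known, as in a simulation; `scipy` is assumed available for \(\mathrm{erfc}\)):

```python
import numpy as np
from scipy.special import erfc

def frame_ber(llr, a):
    """Semi-analytic per-frame BER, eqs. (27)-(37).
    llr[i] = ln(P(a_i = +1)/P(a_i = -1)); a[i] in {+1, -1} are the known bits."""
    keep = np.abs(llr) < 500.0         # discard saturated instants, cf. (30)-(31)
    Y = np.mean(a[keep] * llr[keep])   # eq. (28); estimates 4 A^2 / sigma^2
    return 0.5 * erfc(np.sqrt(np.abs(Y) / 4.0))   # eq. (35)

# eq. (37): average over F frames
# P_e = np.mean([frame_ber(llr_f, a_f) for llr_f, a_f in frames])
```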
In the next section we present computer simulation results for SU-MMIMO using SCTC in uncorrelated channel.
Figure 4: Simulation results for \(N_{\rm tot}=1024\).
Figure 5: Simulation results for \(N_{\rm tot}=32\).
### _Simulation Results_
The simulation parameters are given in Table V. We can make the following observations from Figures 4-6 [36]:
* The theoretical prediction of BER closely matches with simulations.
* For \(N_{\rm tot}=32,\,1024\), the BER is quite insensitive to wide variations in the total number of antennas \(N_{\rm tot}\), transmit antennas \(N_{t}\) and retransmissions \(N_{rt}\).
* For \(N_{\rm tot}=2\), the BER improves significantly with increasing retransmissions.
In Figure 4(c) we observe that there is more than 1 dB improvement in SINR compared to Figures 4(a, b), 5 and 6. However, large values of \(L_{d1}\) may introduce more latency, which is contrary to the requirements of 5G and beyond. In the next section we present SU-MMIMO using PCTC in a correlated channel.
## III SU-MMIMO using PCTC in Correlated Channel
### _System Model_
The block diagram of the system is identical to Figure 2 in [36] and the received signal is given by (4). Note that in (4), the channel autocorrelation matrix is given by
\[\mathbf{R}_{\mathbf{\tilde{H}}\mathbf{\tilde{H}}} =\frac{1}{2}E\left[\mathbf{\tilde{H}}_{k}^{H}\mathbf{\tilde{H}}_ {k}\right]\] \[=N_{r}\mathbf{I}_{N_{t}} \tag{38}\]
where the superscript "\(H\)" denotes Hermitian and \(\mathbf{I}_{N_{t}}\) denotes the \(N_{t}\times N_{t}\) identity matrix. In this section, we investigate the situation where \(\mathbf{R}_{\mathbf{\tilde{H}}\mathbf{\tilde{H}}}\) is not an identity matrix, but is a valid autocorrelation matrix [40]. As mentioned in [36], the elements of \(\mathbf{\tilde{H}}_{k}\) - given by \(\tilde{H}_{k,\,i,\,j}\) for the \(k^{th}\) re-transmission, \(i^{th}\) row, \(j^{th}\) column of \(\mathbf{\tilde{H}}_{k}\) - are zero-mean, complex Gaussian random variables with variance per dimension equal to \(\sigma_{H}^{2}\). The in-phase and quadrature components of \(\tilde{H}_{k,\,i,\,j}\) - denoted by \(H_{k,\,i,\,j,\,I}\) and \(H_{k,\,i,\,j,\,Q}\) respectively - are statistically independent. Moreover, we assume that the rows of \(\mathbf{\tilde{H}}_{k}\) are statistically independent. Following the procedure in [36] for the case without precoding, we now find the expression for the average SINR per bit before and after averaging over retransmissions (\(k\)). All symbols and notations have the usual meaning, as given in [36].
### _SINR Analysis_
The \(i^{th}\) element of \(\mathbf{\tilde{H}}_{k}^{H}\mathbf{\tilde{R}}_{k}\) is given by (25) of [36] which is repeated here for convenience
\[\tilde{Y}_{k,\,i}=\tilde{F}_{k,\,i,\,i}S_{i}+\tilde{I}_{k,\,i}+ \tilde{V}_{k,\,i}\quad\text{for }1\leq i\leq N_{t} \tag{39}\]
where
\[\tilde{V}_{k,\,i} =\sum_{j=1}^{N_{r}}\tilde{H}_{k,\,j,\,i}^{*}\tilde{W}_{k,\,j}\] \[\tilde{I}_{k,\,i} =\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\tilde{F}_{k,\,i,\,j}S_{j}\] \[\tilde{F}_{k,\,i,\,j} =\sum_{l=1}^{N_{r}}\tilde{H}_{k,\,l,\,i}^{*}\tilde{H}_{k,\,l,\,j}. \tag{40}\]
We have
\[E\left[\tilde{F}_{k,\,i,\,i}^{2}\right] =E\left[\sum_{l=1}^{N_{r}}\left|\tilde{H}_{k,\,l,\,i}\right|^{2} \sum_{m=1}^{N_{r}}\left|\tilde{H}_{k,\,m,\,i}\right|^{2}\right]\] \[=E\left[\sum_{l=1}^{N_{r}}\left(H_{k,\,l,\,i,\,I}^{2}+H_{k,\,l,\,i, \,Q}^{2}\right)\right.\] \[\left.\sum_{m=1}^{N_{r}}\left(H_{k,\,m,\,i,\,I}^{2}+H_{k,\,m,\,i, \,Q}^{2}\right)\right]\] \[=4\sigma_{H}^{4}N_{r}(N_{r}+1) \tag{41}\]
which is identical to (27) in [36] and we have used the following properties
1. The in-phase and quadrature components of \(\tilde{H}_{k,\,i,\,j}\) are independent.
2. The rows of \(\mathbf{\tilde{H}}_{k}\) are independent.
3. For zero-mean, real-valued Gaussian random variable \(X\) with variance equal to \(\sigma_{X}^{2}\), \(E\left[X^{4}\right]=3\sigma_{X}^{4}\).
The interference power is
\[E\left[\left|\tilde{I}_{k,\,i}\right|^{2}\right] =E\left[\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\tilde{F}_{k,\,i,\,j}S_{j}\sum_{\begin{subarray}{c}l=1\\ l\neq i\end{subarray}}^{N_{t}}\tilde{F}_{k,\,i,\,l}^{*}S_{l}^{*}\right]\] \[=\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\sum_{\begin{subarray}{c}l=1\\ l\neq i\end{subarray}}^{N_{t}}E\left[\tilde{F}_{k,\,i,\,j}\tilde{F}_{k,\,i,\,l}^{*}\right]E\left[S_{j}S_{l}^{*}\right]\] \[=P_{\text{av}}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}E\left[\left|\tilde{F}_{k,\,i,\,j}\right|^{2}\right]. \tag{42}\]
where we have used (9) in [36]. Similarly the noise power is
\[E\left[\left|\tilde{V}_{k,\,i}\right|^{2}\right] =E\left[\sum_{j=1}^{N_{r}}\tilde{H}_{k,\,j,\,i}^{*}\tilde{W}_{k,\,j}\sum_{m=1}^{N_{r}}\tilde{H}_{k,\,m,\,i}\tilde{W}_{k,\,m}^{*}\right]\] \[=\sum_{j=1}^{N_{r}}\sum_{m=1}^{N_{r}}E\left[\tilde{H}_{k,\,j,\,i}^{*}\tilde{H}_{k,\,m,\,i}\right]E\left[\tilde{W}_{k,\,m}^{*}\tilde{W}_{k,\,j}\right]\] \[=\sum_{j=1}^{N_{r}}\sum_{m=1}^{N_{r}}2\sigma_{H}^{2}\delta_{K}(j-m)\,2\sigma_{W}^{2}\delta_{K}(j-m)\] \[=4N_{r}\sigma_{H}^{2}\sigma_{W}^{2} \tag{43}\]
which is identical to (29) in [36] and we have used the following properties:
1. Rows of \(\tilde{\mathbf{H}}_{k}\) are independent.
2. Sifting property of the Kronecker delta function.
3. Noise and channel coefficients are independent.
Now in (42)
\[E\left[\left|\tilde{F}_{k,\,i,\,j}\right|^{2}\right] =E\left[\sum_{l=1}^{N_{r}}\tilde{H}_{k,\,l,\,i}^{*}\tilde{H}_{k, \,l,\,j}\sum_{m=1}^{N_{r}}\tilde{H}_{k,\,m,\,i}\tilde{H}_{k,\,m,\,j}^{*}\right]\] \[=\sum_{l=1}^{N_{r}}E\Bigg{[}\tilde{H}_{k,\,l,\,i}^{*}\tilde{H}_{k,\,l,\,j}\Bigg{(}\tilde{H}_{k,\,l,\,i}\tilde{H}_{k,\,l,\,j}^{*}\] \[\qquad+\sum_{\begin{subarray}{c}m=1\\ m\neq l\end{subarray}}^{N_{r}}\tilde{H}_{k,\,m,\,i}\tilde{H}_{k,\,m,\,j}^{*} \Bigg{)}\Bigg{]}\] \[=\sum_{l=1}^{N_{r}}E\Bigg{[}\left|\tilde{H}_{k,\,l,\,i}\right|^{ 2}\left|\tilde{H}_{k,\,l,\,j}\right|^{2}\] \[\qquad+\Bigg{(}\sum_{\begin{subarray}{c}m=1\\ m\neq l\end{subarray}}^{N_{r}}\tilde{H}_{k,\,l,\,i}^{*}\tilde{H}_{k,\,l,\,j} \tilde{H}_{k,\,m,\,i}\tilde{H}_{k,\,m,\,j}^{*}\Bigg{)}\Bigg{]}. \tag{44}\]
Now the first summation in (44) is equal to
\[E_{1} =E\left[\left|\tilde{H}_{k,\,l,\,i}\right|^{2}\left|\tilde{H}_{k, \,l,\,j}\right|^{2}\right]\] \[=E\left[\left(H_{k,\,l,\,i}^{2}+H_{k,\,l,\,i,\,Q}^{2}\right) \left(H_{k,\,l,\,j,\,I}^{2}+H_{k,\,l,\,j,\,Q}^{2}\right)\right]\] \[=4\sigma_{H}^{4}+4R_{\tilde{H}\tilde{H},\,j-i}^{2} \tag{45}\]
where we have used the property that for real-valued, zero-mean Gaussian random variables \(X_{i}\), \(1\leq i\leq 4\)[44], [45]
\[E\left[X_{1}X_{2}X_{3}X_{4}\right] =C_{12}C_{34}+C_{13}C_{24}+C_{14}C_{23} \tag{46}\]
where
\[C_{ij}=E\left[X_{i}X_{j}\right]\qquad\text{for $1\leq i$, $j\leq 4$} \tag{47}\]
and
\[R_{\tilde{H}\tilde{H},\,j-i} =E\left[H_{k,\,l,\,i,\,I}H_{k,\,l,\,j,\,I}\right]\] \[=E\left[H_{k,\,l,\,i,\,Q}H_{k,\,l,\,j,\,Q}\right]\] \[=\frac{1}{2}E\left[\tilde{H}_{k,\,l,\,i}^{*}\tilde{H}_{k,\,l,\,j}\right]\] \[=R_{\tilde{H}\tilde{H},\,i-j} \tag{48}\]
is the real-valued autocorrelation of \(\tilde{H}_{k,\,m,\,n}\) and we have made the assumption that the in-phase and quadrature components of \(\tilde{H}_{k,\,m,\,n}\) are independent. The second summation in (44) can be written as
\[E_{2} =\sum_{\begin{subarray}{c}m=1\\ m\neq l\end{subarray}}^{N_{r}}E\left[\tilde{H}_{k,\,l,\,i}^{*}\tilde{H}_{k, \,l,\,j}\tilde{H}_{k,\,m,\,i}\tilde{H}_{k,\,m,\,j}^{*}\right]\] \[=\sum_{\begin{subarray}{c}m=1\\ m\neq l\end{subarray}}^{N_{r}}E\left[\tilde{H}_{k,\,l,\,i}^{*}\tilde{H}_{k,\,l,\,j }\right]E\left[\tilde{H}_{k,\,m,\,i}\tilde{H}_{k,\,m,\,j}^{*}\right]\] \[=\sum_{\begin{subarray}{c}m=1\\ m\neq l\end{subarray}}^{N_{r}}4R_{\tilde{H}\tilde{H},\,j-i}^{2}\] \[=4(N_{r}-1)R_{\tilde{H}\tilde{H},\,j-i}^{2} \tag{49}\]
where we have used the property that the rows of \(\tilde{\mathbf{H}}_{k}\) are independent. Therefore (44) becomes
\[E\left[\left|\tilde{F}_{k,\,i,\,j}\right|^{2}\right] =N_{r}(E_{1}+E_{2})\] \[=4N_{r}\left[\sigma_{H}^{4}+R_{\tilde{H}\tilde{H},\,j-i}^{2}+(N_{ r}-1)\,R_{\tilde{H}\tilde{H},\,j-i}^{2}\right]\] \[=4N_{r}\left[\sigma_{H}^{4}+N_{r}R_{\tilde{H}\tilde{H},\,j-i}^{2} \right]. \tag{50}\]
The total power of interference plus noise is
\[E\left[\left|\tilde{I}_{k,\,i}+\tilde{V}_{k,\,i}\right|^{2}\right] =E\left[\left|\tilde{I}_{k,\,i}\right|^{2}\right]+E\left[\left| \tilde{V}_{k,\,i}\right|^{2}\right]\] \[=4P_{\text{av}}N_{r}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\left[\sigma_{H}^{4}+N_{r}R_{\tilde{H}\tilde{H},\,j-i}^{2}\right]\] \[\qquad+4N_{r}\sigma_{H}^{2}\sigma_{W}^{2} \tag{51}\]
where we have made the assumption that noise and symbols are independent. The average SINR per bit for the \(i^{th}\) transmit antenna is similar to (31) of [36] which is repeated here for convenience
\[\text{SINR}_{\text{av},\,b,\,i}=\frac{E\left[\left|\tilde{F}_{k,\,i,\,i}S_{i} \right|^{2}\right]\times 2N_{rt}}{E\left[\left|\tilde{I}_{k,\,i}+\tilde{V}_{k,\,i} \right|^{2}\right]}\qquad\text{for $1\leq i\leq N_{t}$} \tag{52}\]
into which (41) and (51) have to be substituted. The upper bound on the average SINR per bit for the \(i^{th}\) transmit antenna is obtained by setting \(\sigma_{W}^{2}=0\) in (51), (52) and is given by, for \(1\leq i\leq N_{t}\)
\[\text{SINR}_{\text{av},\,b,\,\text{UB},\,i}=\frac{\sigma_{H}^{4} \left(1+N_{r}\right)\times 2N_{rt}}{\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\left[\sigma_{H}^{4}+N_{r}R_{\tilde{H}\tilde{H},\,j-i}^{2} \right]}. \tag{53}\]
Observe that in contrast to (31) and (32) in [36], the average SINR per bit and its upper bound depend on the transmit antenna. Let us now compute the average SINR per bit after averaging over retransmissions. The received signal after averaging over retransmissions is given by (6) with (see also (20) of [36])
\[F_{i} =\frac{1}{N_{rt}}\sum_{k=0}^{N_{rt}-1}\tilde{F}_{k,\,i,\,i}\] \[\tilde{U}_{i} =\frac{1}{N_{rt}}\sum_{k=0}^{N_{rt}-1}\left(\tilde{I}_{k,\,i}+ \tilde{V}_{k,\,i}\right)\] \[=\frac{1}{N_{rt}}\sum_{k=0}^{N_{rt}-1}\tilde{U}^{\prime}_{k,\,i} \qquad\text{(say)} \tag{54}\]
where \(\tilde{F}_{k,\,i,\,i}\), \(\tilde{I}_{k,\,i}\) and \(\tilde{V}_{k,\,i}\) are given in (39). The power of the signal component of (6) is
\[E\left[\left|S_{i}\right|^{2}F_{i}^{2}\right] =P_{\text{av}}E\left[F_{i}^{2}\right]\] \[=\frac{P_{\text{av}}}{N_{rt}^{2}}E\left[\sum_{k=0}^{N_{rt}-1} \tilde{F}_{k,\,i,\,i}\sum_{l=0}^{N_{rt}-1}\tilde{F}_{l,\,i,\,i}\right]\] \[=\frac{P_{\text{av}}}{N_{rt}^{2}}\sum_{k=0}^{N_{rt}-1}\Biggl{[} \sum_{\begin{subarray}{c}l=0\\ l\neq k\end{subarray}}^{N_{rt}-1}E[\tilde{F}_{k,\,i,\,i}]E[\tilde{F}_{l,\,i,\, i}]\] \[\qquad\quad+E\left[\left|\tilde{F}_{k,\,i,\,i}\right|^{2}\right] \Biggr{]} \tag{55}\]
where we have used the fact that the channel is independent across retransmissions, therefore
\[E[\tilde{F}_{k,\,i,\,i}\tilde{F}_{l,\,i,\,i}]=E[\tilde{F}_{k,\,i,\,i}]E[\tilde{F}_{l,\,i,\,i}]\qquad\text{for }k\neq l. \tag{56}\]
Now
\[E[\tilde{F}_{k,\,i,\,i}] =E\left[\sum_{l=1}^{N_{r}}|\tilde{H}_{k,\,l,\,i}|^{2}\right]\] \[=2N_{r}\sigma_{H}^{2}. \tag{57}\]
Substituting (41) and (57) in (55) we get
\[E\left[\left|S_{i}\right|^{2}F_{i}^{2}\right]=\frac{4N_{r}P_{\text{av}}\sigma _{H}^{4}}{N_{rt}}\left(1+N_{r}N_{rt}\right). \tag{58}\]
The power of the interference component in (6) and (54) is
\[E\left[\left|\tilde{U}_{i}\right|^{2}\right] =\frac{1}{N_{rt}^{2}}E\left[\sum_{k=0}^{N_{rt}-1}\left(\tilde{I}_ {k,\,i}+\tilde{V}_{k,\,i}\right)\sum_{l=0}^{N_{rt}-1}\left(\tilde{I}_{l,\,i}^ {*}+\tilde{V}_{l,\,i}^{*}\right)\right]\] \[=\frac{1}{N_{rt}^{2}}\sum_{k=0}^{N_{rt}-1}\sum_{l=0}^{N_{rt}-1}E \left[\tilde{I}_{k,\,i}\tilde{I}_{l,\,i}^{*}\right]+E\left[\tilde{V}_{k,\,i} \tilde{V}_{l,\,i}^{*}\right] \tag{59}\]
where we have used the following properties from (40)
\[E\left[\tilde{I}_{k,\,i}\right] =E\left[\tilde{V}_{k,\,i}\right]\] \[=0\] \[E\left[\tilde{I}_{k,\,i}\tilde{V}_{l,i}^{*}\right] =E\left[\tilde{V}_{k,\,i}\tilde{I}_{l,\,i}^{*}\right]\] \[=0\qquad\text{for all }k,l \tag{60}\]
since \(S_{j}\) and \(\tilde{W}_{k,\,j}\) are mutually independent with zero-mean. Now
\[E\left[\tilde{I}_{k,\,i}\tilde{I}_{l,\,i}^{*}\right] =E\left[\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\tilde{F}_{k,\,i,\,j}S_{j}\sum_{\begin{subarray}{ c}n=1\\ n\neq i\end{subarray}}^{N_{t}}\tilde{F}_{l,\,i,\,n}^{*}S_{n}^{*}\right]\] \[=\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\sum_{\begin{subarray}{c}n=1\\ n\neq i\end{subarray}}^{N_{t}}E\left[\tilde{F}_{k,\,i,\,j}\tilde{F}_{l,\,i,\, n}^{*}\right]E\left[S_{j}S_{n}^{*}\right]\] \[=\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\sum_{\begin{subarray}{c}n=1\\ n\neq i\end{subarray}}^{N_{t}}E\left[\tilde{F}_{k,\,i,\,j}\tilde{F}_{l,\,i,\, n}^{*}\right]P_{\text{av}}\delta_{K}(j-n)\] \[=P_{\text{av}}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}E\left[\tilde{F}_{k,\,i,\,j}\tilde{F}_{l,\,i, \,j}^{*}\right] \tag{61}\]
where we have used the property that the symbols are uncorrelated and \(\delta_{K}(\cdot)\) is the Kronecker delta function [40]. When \(k=l\), (61) is given by (42) and (50). When \(k\neq l\), (61) is given by
\[E\left[\tilde{I}_{k,\,i}\tilde{I}_{l,\,i}^{*}\right] =P_{\text{av}}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}E\left[\tilde{F}_{k,\,i,\,j}\right]E\left[\tilde{F}_{l,\,i,\,j}^{*}\right]\] \[=P_{\text{av}}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}4N_{r}^{2}R_{\tilde{H}\tilde{H},\,j-i}^{2} \tag{62}\]
where we have used (40) and (48). Similarly, we have
\[E\left[\tilde{V}_{k,\,i}\tilde{V}_{l,\,i}^{*}\right] =4N_{r}\sigma_{H}^{2}\sigma_{W}^{2}\delta_{K}(k-l) \tag{63}\]
where we have used (43). Substituting (42), (50), (62) and (63) in (59) we get
\[E\left[\left|\tilde{U}_{i}\right|^{2}\right] =\frac{1}{N_{rt}^{2}}\left[4P_{\text{av}}N_{r}N_{rt}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\left(\sigma_{H}^{4}+N_{r}R_{\tilde{H}\tilde{H},\,j-i}^{2}\right)+4P_{\text{av}}N_{r}^{2}N_{rt}(N_{rt}-1)\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}R_{\tilde{H}\tilde{H},\,j-i}^{2}\right]+\frac{4N_{r}}{N_{rt}}\sigma_{H}^{2}\sigma_{W}^{2}\] \[=\frac{1}{N_{rt}}\left[4P_{\text{av}}N_{r}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\left(\sigma_{H}^{4}+N_{r}R_{\tilde{H}\tilde{H},\,j-i}^{2}\right)+4P_{\text{av}}N_{r}^{2}(N_{rt}-1)\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}R_{\tilde{H}\tilde{H},\,j-i}^{2}\right]+\frac{4N_{r}}{N_{rt}}\sigma_{H}^{2}\sigma_{W}^{2}. \tag{64}\]
The average SINR per bit for the \(i^{th}\) transmit antenna, after averaging over retransmissions (also referred to as "combining" [36]) is given by
\[\mathrm{SINR}_{\mathrm{av,\,}b,\,C,\,i}=\frac{2P_{\mathrm{av}}E\left[F_{i}^{2} \right]}{E\left[\left|\tilde{U}_{i}\right|^{2}\right]} \tag{65}\]
into which (58) and (64) have to be substituted. The upper bound on the average SINR per bit after "combining" for the \(i^{th}\) transmit antenna is given by
\[\mathrm{SINR}_{\mathrm{av,\,}b,\,C,\,\mathrm{UB,}}=\mathrm{SINR}_{\mathrm{av, \,}b,\,C,\,i}|_{\sigma_{W}^{2}=0}\,. \tag{66}\]
The plots of the average SINR per bit for the \(i^{th}\) transmit antenna before and after "combining" are shown in Figures 7 and 8 respectively for \(N_{\mathrm{tot}}=1024\) and \(N_{rt}=2\). The channel correlation is given by
\[R_{\tilde{H}\tilde{H},\,j-i}=0.9^{|j-i|}\sigma_{H}^{2} \tag{67}\]
in (48), which is obtained by passing samples of white Gaussian noise through a unit-energy, first-order infinite impulse response (IIR) lowpass filter with \(a=-0.9\) (see (30) of [46]).
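A minimal sketch of this channel generation (one row of \(\tilde{\mathbf{H}}_{k}\); the stationary first-order recursion below is one standard way to realize the autocorrelation (67), and all names are local to the sketch):

```python
import numpy as np

def correlated_row(Nt, sigma_H=1.0, rho=0.9, rng=np.random.default_rng()):
    """One row of H_k with per-dimension autocorrelation rho^{|j-i|} sigma_H^2,
    realized by first-order IIR filtering of white Gaussian noise, cf. (67)."""
    def ar1(n):
        w = sigma_H * rng.standard_normal(n)
        x = np.empty(n)
        x[0] = w[0]                                    # stationary start
        for k in range(1, n):
            x[k] = rho * x[k - 1] + np.sqrt(1.0 - rho**2) * w[k]
        return x
    return ar1(Nt) + 1j * ar1(Nt)   # independent in-phase and quadrature parts
```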
We observe in Figures 7 and 8 that
1. The upper bound on the average SINR per bit decreases rapidly with an increasing number of transmit antennas \(N_{t}\) and falls below \(0\) dB for \(N_{t}>5\) (see Figures 7(b) and 8(b); a numerical evaluation of (53) is sketched after this list). Since the spectral efficiency of the system is \(N_{t}/(2N_{rt})\) bits/sec/Hz (see (33) of [36]), the system would be of no practical use, as the BER would be close to \(0.5\) for \(N_{t}>5\).
2. The upper bound on the average SINR per bit after "combining" is _less_ than that before "combining". Therefore retransmissions are ineffective.
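As referenced in observation 1, a minimal numerical sketch of the upper bound (53) under the correlation model (67) (everything is expressed in units of \(\sigma_{H}^{4}\); function and variable names are local to the sketch):

```python
import numpy as np

def sinr_ub_db(Nt, Ntot, Nrt, rho=0.9):
    """Upper bound (53) with R_{HH, j-i} = rho^{|j-i|} sigma_H^2 from (67);
    result in dB, one entry per transmit antenna i."""
    Nr = Ntot - Nt
    ub = []
    for i in range(1, Nt + 1):
        denom = sum(1.0 + Nr * rho**(2 * abs(j - i))
                    for j in range(1, Nt + 1) if j != i)
        ub.append((1 + Nr) * 2 * Nrt / denom)
    return 10 * np.log10(np.array(ub))

print(sinr_ub_db(6, 1024, 2))   # central antennas already below ~1 dB
print(sinr_ub_db(8, 1024, 2))   # central antennas fall below 0 dB
```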
In view of the above observations, it becomes necessary to design a better receiver using precoding. This is presented in the next section.
### _Precoding_
Similar to (4), consider the modified received signal given by
\[\tilde{\mathbf{R}}_{k}=\tilde{\mathbf{H}}_{k}\tilde{\mathbf{B}}\mathbf{S}+ \tilde{\mathbf{W}}_{k} \tag{68}\]
where \((\cdot)^{T}\) denotes transpose. In the definition of the precoder in (69), \(\tilde{\mathbf{A}}\) is an \(N_{t}\times N_{t}\) lower triangular matrix with diagonal elements equal to unity, \(\tilde{a}_{i,\,j}\) denotes the \(j^{th}\) coefficient of the optimum \(i^{th}\)-order forward prediction filter [40], and \(\tilde{\mathbf{B}}\) is the precoding matrix. Let
\[\tilde{\mathbf{Y}}_{k} =\tilde{\mathbf{B}}^{H}\tilde{\mathbf{H}}_{k}^{H}\tilde{\mathbf{R}}_{k} =\tilde{\mathbf{B}}^{H}\tilde{\mathbf{H}}_{k}^{H}\tilde{\mathbf{H}}_{k}\tilde{\mathbf{B}}\mathbf{S}+\tilde{\mathbf{B}}^{H}\tilde{\mathbf{H}}_{k}^{H}\tilde{\mathbf{W}}_{k}. \tag{70}\]
Define
\[\tilde{\mathbf{Z}}_{k} =\tilde{\mathbf{H}}_{k}\tilde{\mathbf{B}}\] \[=\left[\begin{array}{ccc}\tilde{Z}_{k,\,1,\,1}&\cdots&\tilde{Z }_{k,\,1,\,N_{t}}\\ \vdots&\cdots&\vdots\\ \tilde{Z}_{k,\,N_{r},\,1}&\cdots&\tilde{Z}_{k,\,N_{r},\,N_{t}}\end{array} \right]. \tag{71}\]
Now [40]
\[\frac{1}{2}E\left[\tilde{\mathbf{Z}}_{k}^{H}\tilde{\mathbf{Z}}_{k}\right] =N_{r}\left[\begin{array}{cccc}\sigma_{Z,\,1}^{2}&0&\cdots&0\\ 0&\sigma_{Z,\,2}^{2}&\cdots&0\\ \vdots&\cdots&\cdots&\vdots\\ 0&\cdots&0&\sigma_{Z,\,N_{t}}^{2}\end{array}\right]\] \[\triangleq\tilde{\mathbf{R}}_{\tilde{\mathbf{Z}}\tilde{\mathbf{Z}}} \tag{72}\]
is an \(N_{t}\times N_{t}\) diagonal matrix and \(\sigma_{Z,\,i}^{2}\) denotes the variance per dimension of the optimum \((i-1)^{th}\)-order forward prediction filter. Note that [40]
\[\sigma_{Z,\,1}^{2} =\sigma_{H}^{2}\] \[\sigma_{Z,\,i}^{2} \geq\sigma_{Z,\,j}^{2}\qquad\text{for $i<j$}. \tag{73}\]
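A minimal numerical sketch of such a decorrelating precoder, assuming only that \(\tilde{\mathbf{B}}\) must diagonalize the row autocorrelation as in (72); here this is realized via the Cholesky factorization, whose unit-triangular factor carries the forward prediction-filter coefficients [40] (all names are local to the sketch):

```python
import numpy as np

Nt, rho, sigma_H2 = 6, 0.9, 1.0
idx = np.arange(Nt)
R = sigma_H2 * rho**np.abs(idx[:, None] - idx[None, :])   # row autocorrelation

L = np.linalg.cholesky(R)                   # R = L L^H, L lower triangular
B = np.linalg.inv(L).conj().T @ np.diag(np.diag(L))       # unit-diagonal precoder
D = np.diag(L)**2                           # prediction-error variances

print(np.round(B.conj().T @ R @ B, 10))     # diagonal with entries D, cf. (72)
print(D)  # D[0] = sigma_H^2, D[i>=1] = (1 - rho^2) sigma_H^2 = 0.19, cf. (73)
```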
Let
\[\tilde{\mathbf{V}}_{k} =\tilde{\mathbf{Z}}_{k}^{H}\tilde{\mathbf{W}}_{k}\] \[=\left[\begin{array}{cccc}\tilde{V}_{k,\,1}&\cdots&\tilde{V}_{ k,\,N_{t}}\end{array}\right]^{T} \tag{74}\]
which is an \(N_{t}\times 1\) vector. Now
\[E\left[\tilde{V}_{k,\,i}\tilde{V}_{k,\,m}^{*}\right] =E\left[\sum_{j=1}^{N_{r}}\tilde{Z}_{k,\,j,\,i}^{*}\tilde{W}_{k, \,j}\sum_{l=1}^{N_{r}}\tilde{Z}_{k,\,l,\,m}\tilde{W}_{k,\,l}^{*}\right]\] \[=\sum_{j=1}^{N_{r}}\sum_{l=1}^{N_{r}}E\left[\tilde{Z}_{k,\,l,\,m }\tilde{Z}_{k,\,j,\,i}^{*}\right]E\left[\tilde{W}_{k,\,j}\tilde{W}_{k,\,l}^{*}\right]\] \[=\sum_{j=1}^{N_{r}}\sum_{l=1}^{N_{r}}2\sigma_{Z,\,i}^{2}\delta_{ K}(i-m)\delta_{K}(j-l)\] \[\qquad\times 2\sigma_{W}^{2}\delta_{K}(j-l)\] \[=4N_{r}\sigma_{Z,\,i}^{2}\sigma_{W}^{2}\delta_{K}(i-m) \tag{75}\]
where we have used (72). Let
\[\tilde{\mathbf{F}}_{k}=\tilde{\mathbf{Z}}_{k}^{H}\tilde{\mathbf{Z}}_{k} \tag{76}\]
which is an \(N_{t}\times N_{t}\) matrix. Substituting (76) in (70) we get
\[\tilde{\mathbf{Y}}_{k}=\tilde{\mathbf{F}}_{k}\mathbf{S}+\tilde{\mathbf{V}}_{k}. \tag{77}\]
Similar to (39), the \(i^{th}\) element of \(\tilde{\mathbf{Y}}_{k}\) in (77) is given by
\[\tilde{Y}_{k,\,i}=\tilde{F}_{k,\,i,\,i}S_{i}+\tilde{I}_{k,\,i}+\tilde{V}_{k,\,i}\quad\text{for }1\leq i\leq N_{t} \tag{78}\]
where
\[\tilde{V}_{k,\,i} =\sum_{j=1}^{N_{r}}\tilde{Z}_{k,\,j,\,i}^{*}\tilde{W}_{k,\,j}\] \[\tilde{I}_{k,\,i} =\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\tilde{F}_{k,\,i,\,j}S_{j}\] \[\tilde{F}_{k,\,i,\,j} =\sum_{l=1}^{N_{r}}\tilde{Z}_{k,\,l,\,i}^{*}\tilde{Z}_{k,\,l,\,j}. \tag{79}\]
Figure 7: Plot of \(\text{SINR}_{\text{av},\,b,\,\text{UB},\,i}\) for \(N_{\text{tot}}=1024\), \(N_{rt}=2\). (a) Back view. (b) Side view. (c) Front view.
Note that from (72) and (76) we have
\[E\left[\tilde{F}_{k,\,i,\,i}\right]=2N_{r}\sigma_{Z,\,i}^{2}. \tag{80}\]
Now
\[E\left[\tilde{F}_{k,\,i,\,i}^{2}\right] =E\left[\sum_{l=1}^{N_{r}}\left|\tilde{Z}_{k,\,l,\,i}\right|^{2}\sum_{m=1}^{N_{r}}\left|\tilde{Z}_{k,\,m,\,i}\right|^{2}\right]\] \[=\sum_{l=1}^{N_{r}}E\left[\left|\tilde{Z}_{k,\,l,\,i}\right|^{4}\right]+\sum_{l=1}^{N_{r}}\sum_{\begin{subarray}{c}m=1\\ m\neq l\end{subarray}}^{N_{r}}E\left[\left|\tilde{Z}_{k,\,l,\,i}\right|^{2}\right]E\left[\left|\tilde{Z}_{k,\,m,\,i}\right|^{2}\right]\] \[=4N_{r}(N_{r}+1)\sigma_{Z,\,i}^{4}. \tag{81}\]
Similarly
\[E\left[\left|\tilde{I}_{k,\,i}\right|^{2}\right] =E\left[\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\tilde{F}_{k,\,i,\,j}S_{j}\sum_{\begin{subarray}{c}l=1\\ l\neq i\end{subarray}}^{N_{t}}\tilde{F}_{k,\,i,\,l}^{*}S_{l}^{*}\right]\] \[=P_{\text{av}}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}E\left[\left|\tilde{F}_{k,\,i,\,j}\right|^{2}\right]. \tag{82}\]
Now
\[E\left[\left|\tilde{F}_{k,\,i,\,j}\right|^{2}\right] =E\left[\sum_{l=1}^{N_{r}}\tilde{Z}_{k,\,l,\,i}^{*}\tilde{Z}_{k,\, l,\,j}\sum_{m=1}^{N_{r}}\tilde{Z}_{k,\,m,\,i}\tilde{Z}_{k,\,m,\,j}^{*}\right]\] \[=\sum_{l=1}^{N_{r}}\sum_{m=1}^{N_{r}}4\sigma_{Z,\,i}^{2}\sigma_{Z,\,j}^{2}\delta_{K}(l-m)\] \[=4N_{r}\sigma_{Z,\,i}^{2}\sigma_{Z,\,j}^{2} \tag{83}\]
where we have used (72). Substituting (83) in (82) we get
\[E\left[\left|\tilde{I}_{k,\,i}\right|^{2}\right]=4P_{\text{av}}N_{r}\sigma_{Z,\,i}^{2}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\sigma_{Z,\,j}^{2}. \tag{84}\]
Note that
\[E\left[\left|\tilde{I}_{k,\,i}+\tilde{V}_{k,\,i}\right|^{2}\right]=E\left[ \left|\tilde{I}_{k,\,i}\right|^{2}\right]+E\left[\left|\tilde{V}_{k,\,i} \right|^{2}\right]. \tag{85}\]
The average SINR per bit for the \(i^{th}\) transmit antenna is given by (52) and is equal to
\[\text{SINR}_{\text{av},\,b,\,i} =\frac{E\left[\left|\tilde{F}_{k,\,i,\,i}S_{i}\right|^{2}\right]\times 2N_{rt}}{E\left[\left|\tilde{I}_{k,\,i}+\tilde{V}_{k,\,i}\right|^{2}\right]}\] \[=\frac{P_{\text{av}}(N_{r}+1)\sigma_{Z,\,i}^{2}\times 2N_{rt}}{P_{\text{av}}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\sigma_{Z,\,j}^{2}+\sigma_{W}^{2}} \tag{86}\]
where we have used (75), (81) and (84). The upper bound on the average SINR per bit for the \(i^{th}\) transmit antenna is obtained by setting \(\sigma_{W}^{2}=0\) in (86) and is equal to
\[\text{SINR}_{\text{av},\,b,\,\text{UB},\,i}=\frac{(N_{r}+1)\sigma_{Z,\,i}^{2}\times 2N_{rt}}{\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\sigma_{Z,\,j}^{2}} \tag{87}\]
which is illustrated in Figure 9 for \(N_{\text{tot}}=1024\) and \(N_{rt}=2\). The value of the upper bound on the average SINR per bit for \(N_{t}=i=50\) is \(18.6\) dB. The channel correlation is given by (67). Note that a first-order prediction filter completely decorrelates the channel with [40]
\[\tilde{a}_{i,\,1} =-0.9\qquad\text{for}\,\,1\leq i\leq N_{t}-1\] \[\tilde{a}_{i,\,j} =0\qquad\text{for}\,\,2\leq i\leq N_{t}-1,\,2\leq j\leq i. \tag{88}\]
We also have [40]
\[\sigma_{Z,\,i}^{2} =\sigma_{Z,\,2}^{2}=\left(1-|-0.9|^{2}\right)\sigma_{Z,\,1}^{2}=0.19\,\sigma_{Z,\,1}^{2}\qquad\text{for }i\geq 2. \tag{89}\]
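With (73) and (89), the upper bound (87) can be checked numerically; a minimal sketch reproducing the 18.6 dB value quoted above for \(N_{t}=i=50\), \(N_{\rm tot}=1024\), \(N_{rt}=2\) (variable names are local to the sketch):

```python
import numpy as np

Ntot, Nrt, Nt, i = 1024, 2, 50, 50
Nr = Ntot - Nt
sz = np.full(Nt, 0.19)        # sigma_{Z,j}^2 in units of sigma_H^2, eq. (89)
sz[0] = 1.0                   # sigma_{Z,1}^2 = sigma_H^2, eq. (73)
ub = (Nr + 1) * sz[i - 1] * 2 * Nrt / (sz.sum() - sz[i - 1])   # eq. (87)
print(10 * np.log10(ub))      # about 18.6 dB
```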
Therefore we see in Figure 9 that the first transmit antenna \(i=1\) has a high \(\text{SINR}_{\text{av},\,b,\,\text{UB},\,i}\) due to low interference power from remaining transmit antennas, whereas for \(i\neq 1\) the \(\text{SINR}_{\text{av},\,b,\,\text{UB},\,i}\) is low due to high interference power from the first transmit antenna (\(i=1\)). The received signal after "combining" is given by (6) and (54). Note that from (54) and (79)
\[E\left[F_{i}^{2}\right] =\frac{1}{N_{rt}^{2}}E\left[\sum_{k=0}^{N_{rt}-1}\tilde{F}_{k,\,i,\,i}\sum_{l=0}^{N_{rt}-1}\tilde{F}_{l,\,i,\,i}\right]\] \[=\frac{1}{N_{rt}^{2}}\sum_{k=0}^{N_{rt}-1}\left[E\left[\tilde{F}_{k,\,i,\,i}^{2}\right]+\sum_{\begin{subarray}{c}l=0\\ l\neq k\end{subarray}}^{N_{rt}-1}E\left[\tilde{F}_{k,\,i,\,i}\right]E\left[\tilde{F}_{l,\,i,\,i}\right]\right]\] \[=\frac{4N_{r}\sigma_{Z,\,i}^{4}}{N_{rt}^{2}}\sum_{k=0}^{N_{rt}-1}\left[(N_{r}+1)+(N_{rt}-1)N_{r}\right]\] \[=\frac{4N_{r}\sigma_{Z,\,i}^{4}}{N_{rt}}(1+N_{r}N_{rt}) \tag{90}\]
where we have used (56), (80) and (81). Similarly from (54), (75), (84) and (85) we have
\[E\left[\left|\tilde{U}_{i}\right|^{2}\right] =\frac{1}{N_{rt}^{2}}E\left[\sum_{k=0}^{N_{rt}-1}\tilde{U}_{k,\,i}^{\prime}\sum_{l=0}^{N_{rt}-1}\left(\tilde{U}_{l,\,i}^{\prime}\right)^{*}\right]\] \[=\frac{1}{N_{rt}^{2}}\sum_{k=0}^{N_{rt}-1}\sum_{l=0}^{N_{rt}-1}E\left[\tilde{U}_{k,\,i}^{\prime}\left(\tilde{U}_{l,\,i}^{\prime}\right)^{*}\right]\] \[=\frac{1}{N_{rt}^{2}}\sum_{k=0}^{N_{rt}-1}\sum_{l=0}^{N_{rt}-1}E\left[\left|\tilde{U}_{k,\,i}^{\prime}\right|^{2}\right]\delta_{K}(k-l)\] \[=\frac{1}{N_{rt}}E\left[\left|\tilde{U}_{k,\,i}^{\prime}\right|^{2}\right]\] \[=\frac{1}{N_{rt}}\left[E\left[\left|\tilde{I}_{k,\,i}\right|^{2}\right]+E\left[\left|\tilde{V}_{k,\,i}\right|^{2}\right]\right]\] \[=\frac{4N_{r}\sigma_{Z,\,i}^{2}}{N_{rt}}\left[P_{\text{av}}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\sigma_{Z,\,j}^{2}+\sigma_{W}^{2}\right]. \tag{91}\]
Substituting (90) and (91) in (65) we have, after simplification, for \(1\leq i\leq N_{t}\)
\[\text{SINR}_{\text{av},\,b,\,C,\,i}=\frac{2P_{\text{av}}E\left[F_{i}^{2}\right]}{E\left[\left|\tilde{U}_{i}\right|^{2}\right]}=\frac{(N_{r}N_{rt}+1)\sigma_{Z,\,i}^{2}\times 2P_{\text{av}}}{P_{\text{av}}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\sigma_{Z,\,j}^{2}+\sigma_{W}^{2}}. \tag{92}\]
Figure 10: Plot of \(\text{SINR}_{\text{av},\,b,\,C,\,\text{UB},\,i}\) for \(N_{\text{tot}}=1024\), \(N_{rt}=2\) after precoding. (a) Back view. (b) Side view. (c) Front view.
The upper bound on the average SINR per bit for the \(i^{th}\) transmit antenna is obtained by substituting (92) in (66) and is equal to
\[\text{SINR}_{\text{av},\,b,\,C,\,\text{UB},\,i} =\frac{(N_{r}N_{rt}+1)\sigma_{Z,\,i}^{2}\times 2}{\sum_{ \begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{t}}\sigma_{Z,\,j}^{2}}\] \[\approx\text{SINR}_{\text{av},\,b,\,\text{UB},\,i} \tag{93}\]
for \(1\leq i\leq N_{t}\), \(N_{r}\gg 1\). This is illustrated in Figure 10 for \(N_{\rm tot}=1024\) and \(N_{rt}=2\). We again observe that the first transmit antenna (\(i=1\)) has a high upper bound on the average SINR per bit, after "combining", compared to the remaining transmit antennas. The value of the upper bound on the average SINR per bit after "combining" for \(N_{t}=i=50\), \(N_{\rm tot}=1024\) is 18.6 dB. After concatenation, \(\tilde{Y}_{i}\) for \(0\leq i\leq L_{d}-1\), in (6) and (54) is given to the turbo decoder [29], [40]. Let (see (26) of [29]):
\[\tilde{\mathbf{Y}}_{1} =\left[\begin{array}{ccc}\tilde{Y}_{0}&\dots&\tilde{Y}_{L_{d1}-1}\end{array}\right]\] \[\tilde{\mathbf{Y}}_{2} =\left[\begin{array}{ccc}\tilde{Y}_{L_{d1}}&\dots&\tilde{Y}_{L_{d}-1}\end{array}\right]. \tag{94}\]
Then [29], [40]
\[\gamma_{1,\,i,\,m,\,n} =\exp\left[-\frac{\left|\tilde{Y}_{i}-F_{i}S_{m,\,n}\right|^{2}} {2\sigma_{U,\,i}^{2}}\right]\] \[\gamma_{2,\,i,\,m,\,n} =\exp\left[-\frac{\left|\tilde{Y}_{i1}-F_{i1}S_{m,\,n}\right|^{2} }{2\sigma_{U,\,i}^{2}}\right] \tag{95}\]
where
\[i1=i+L_{d1}\qquad\text{for }0\leq i\leq L_{d1}-1. \tag{96}\]
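A minimal sketch of the branch metrics in (95), assuming for simplicity a common noise variance \(\sigma_{U}^{2}\) across indices (`sym` stands for the trellis symbol \(S_{m,\,n}\); names are local to the sketch):

```python
import numpy as np

def branch_metrics(Y, F, sym, sigma_U2):
    """Branch metrics (95) for the two constituent decoders, cf. (94)-(96).
    Y, F: length-L_d arrays from (6) and (54); sym: trellis symbol S_{m,n}."""
    L_d1 = len(Y) // 2
    g1 = np.exp(-np.abs(Y[:L_d1] - F[:L_d1] * sym)**2 / (2 * sigma_U2))
    g2 = np.exp(-np.abs(Y[L_d1:] - F[L_d1:] * sym)**2 / (2 * sigma_U2))
    return g1, g2
```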
The rest of the turbo decoding algorithm is similar to that discussed in [29], [40] and will not be repeated here. In the next subsection we present the computer simulation results for correlated channel with precoding and PCTC.
### _Simulation Results_
The channel correlation is given by (67). The BER results for \(N_{\rm tot}=1024\) with precoding are depicted in Figure 11. Incidentally, the value of the upper bound on the average SINR per bit before and after "combining" for \(N_{t}=i=512\), \(N_{\rm tot}=1024\) is 6 dB. The BER results for \(N_{\rm tot}=32\) with precoding are depicted in Figure 12. Note that since the average SINR per bit depends on the transmit antenna, the _minimum_ average SINR per bit is indicated along the \(x\)-axis of Figures 11 and 12. We also observe from Figures 11(a, b) and 12 that there is a large difference between theory and simulations. This is probably because the average SINR per bit is not identical for all transmit antennas. In particular, we observe from Figures 9 and 10 that the first transmit antenna has a large average SINR per bit compared to the remaining antennas. However, in Figures 11(c, d) there is a close match between theory and simulations. This could be attributed to having a large number of blocks in a frame, as given by (2), resulting in better statistical properties. Even though the number of blocks is also large in Figure 12, the number of transmit antennas is small, resulting in inferior statistical properties. In order to improve the accuracy of the BER estimate for \(N_{\rm tot}=32\), we propose to transmit "dummy data" from the first transmit antenna and "actual data" from the remaining antennas. The BER results shown in Figure 14 indicate a good match between theory and practice. However, a comparison of Figures 11 and 13 demonstrates that "dummy data" is ineffective for a large number of transmit antennas.
## IV Conclusions & Future Work
This article presents the advantages of single-user massive multiple input multiple output (SU-MMIMO) over multi-user (MU) MMIMO systems. The bit-error-rate (BER) performance of SU-MMIMO using serially concatenated turbo codes (SCTC) over uncorrelated channel is presented. A semi-analytic approach to estimating the BER of a turbo code is derived. A detailed signal-to-interference-plus-noise ratio analysis for SU-MMIMO over correlated channel is presented. The BER performance of SU-MMIMO with parallel concatenated turbo code (PCTC) over correlated channel is studied. Future work could involve estimating the MMIMO channel, since the present work assumes perfect knowledge of the channel.
|
2310.13231 | Multi-level Contrastive Learning for Script-based Character
Understanding | In this work, we tackle the scenario of understanding characters in scripts,
which aims to learn the characters' personalities and identities from their
utterances. We begin by analyzing several challenges in this scenario, and then
propose a multi-level contrastive learning framework to capture characters'
global information in a fine-grained manner. To validate the proposed
framework, we conduct extensive experiments on three character understanding
sub-tasks by comparing with strong pre-trained language models, including
SpanBERT, Longformer, BigBird and ChatGPT-3.5. Experimental results demonstrate
that our method improves the performances by a considerable margin. Through
further in-depth analysis, we show the effectiveness of our method in
addressing the challenges and provide more hints on the scenario of character
understanding. We will open-source our work on github at
https://github.com/David-Li0406/Script-based-Character-Understanding. | Dawei Li, Hengyuan Zhang, Yanran Li, Shiping Yang | 2023-10-20T02:40:52Z | http://arxiv.org/abs/2310.13231v1 | # Multi-level Contrastive Learning for Script-based Character Understanding
###### Abstract
In this work, we tackle the scenario of understanding characters in scripts, which aims to learn the characters' personalities and identities from their utterances. We begin by analyzing several challenges in this scenario, and then propose a multi-level contrastive learning framework to capture characters' global information in a fine-grained manner. To validate the proposed framework, we conduct extensive experiments on three character understanding sub-tasks by comparing with strong pre-trained language models, including SpanBERT, Longformer, BigBird and ChatGPT-3.5. Experimental results demonstrate that our method improves the performances by a considerable margin. Through further in-depth analysis, we show the effectiveness of our method in addressing the challenges and provide more hints on the scenario of character understanding. We will open-source our work in this URL.
## 1 Introduction
As one essential element in stories, character comprehension is a popular research topic in literary, psychological and educational research [12, 13, 14]. To fully understand characters, individuals must empathize with characters based on personal experiences [1], construct profiles according to characters' identities, and make inferences about characters' future actions [17, 1].
According to the data modality and format, character comprehension can be categorized into several classes [15]. In this work, we focus on character understanding in scripts [1, 16]. Scripts are written text for plays, movies, or broadcasts [11]. Typically, scripts are structured with several text fields, including scene description, conversation, transition and summary [1].
Although pre-trained language models (PLMs) have demonstrated their effectiveness in language and vision research fields [2, 13], script-based character understanding remains a hard task, as shown in our experiments. Here we highlight two challenges. The first one is **text type**. As scripts mainly consist of conversations between different characters, at the core of script-based character understanding is conversation understanding. In particular, scripts often involve multi-party conversations where multiple characters talk and interact with each other in a single scene. Considering other common issues in conversation understanding, it is non-trivial for PLMs to comprehend characters based on fine-grained conversation information [12, 13, 14, 15]. The other challenge of applying PLMs to script-based character understanding is **text length**. Table 1 shows a comparison between a script from TVSHOWGUESS [15] and a short story from ROCStories [16]. Typically, scripts are very long, with even billions of words [1], and in turn character information is distributed globally throughout the entire script [1, 17]. However, PLMs are ineffective in capturing such global information due to the sensitivity of context modeling [13, 12]
\begin{table}
\begin{tabular}{l l l} \hline Character & Sheldon & Jennifer \\ \hline Story Title & TBBT & The Test \\ \hline Dataset & TVSHOWGUESS & ROCStories \\ \hline Text Length & 528832 & 41 \\ \hline Character’s Utterance & Sheldon: “... we take on Koothrappali and his dog. Really give ourselves a challenge.” & Jennifer has a big exam tomorrow.... Jennifer felt bitter-sweet about it. \\ \hline \end{tabular}
\end{table}
Table 1: Comparison between a script from TVSHOWGUESS [15] and a narrative from ROCStories [16].
and the limitation of input length (Dai et al., 2019; Beltagy et al., 2020).
To address the aforementioned challenges, we propose a multi-level contrastive learning framework that captures both fine-grained and global information using two purpose-built contrastive losses. For fine-grained character information, we build a **summary-conversation contrastive loss** by comparing character representations from different sources. Specifically, we leverage two text fields in scripts, i.e., summary and conversation, and then extract character representations from the corresponding field. The representations of the same character are then treated as positive pairs, while those of different characters are negative pairs. To model the global information, we also propose a novel **cross-sample contrastive loss** inspired by Bai et al. (2021) and Inoue et al. (2022). By aligning the same character's representation in different samples, the model overcomes the input length limitation and learns the global information of each character. To validate the effectiveness of our framework, we benchmark the performances of several PLMs, including SpanBERT, Longformer, BigBird, and ChatGPT-3.5, on three widely-adopted character understanding tasks.
In general, our contributions are as follows:
* We identify two critical challenges for character understanding in scripts and propose a multi-level contrastive learning framework to address them.
* Through extensive experiments, we demonstrate the effectiveness of our method across multiple datasets and downstream tasks.
* With further analysis, we provide some insights into script-based character understanding. All code will be open-sourced for future research.
## 2 Related Work
### Character Understanding
Character understanding has long been the subject of considerable interest and scrutiny. Some early works propose to extract keywords as characters' features from movies (Bamman et al., 2013) and novels (Flekova and Gurevych, 2015). Other works attempt to learn the relationship between characters in both supervised (Massey et al., 2015; Kim and Klinger, 2019) and unsupervised ways (Krishnan and Eisenstein, 2014; Iyyer et al., 2016).
Recently, more challenging tasks in character understanding have emerged. Chen and Choi (2016) benchmark the character linking and coreference resolution tasks on TV show scripts. Brahman et al. (2021) collect a dataset of storybooks and their summaries, and define the character description generation and character identification tasks. Sang et al. (2022) extend the character guessing task to a multi-character scenario on TV show scripts. Additionally, some works attempt to combine traditional self-supervised learning methods (Mikolov et al., 2013) with language models (Liu et al., 2019) to learn contextual character embeddings and apply them in downstream tasks (Azab et al., 2019; Inoue et al., 2022).
In this work, we focus on character understanding tasks in scripts. While some works benchmark summary-based tasks in narratives (Chen et al., 2021; Brahman et al., 2021), we are the first to leverage script summaries as auxiliary data and learn fine-grained and global character representations in a novel way.
### Contrastive Learning
In recent years, contrastive learning has been widely used in various NLP tasks (Zhang et al., 2022), including sentence representation (Gao et al., 2021; Kim et al., 2021), machine translation (Pan et al., 2021; Vamvas and Sennrich, 2021), and text generation (Lee et al., 2020; Shu et al., 2021; Zhang et al., 2022). Work in the multimodal research field adopts contrastive learning for vision-language model training, constructing positive pairs with images and their corresponding captions (Li et al., 2020; Radford et al., 2021; Yang et al., 2022). In our work, we also regard characters in summaries and conversations as two different views of the same target and align them for a better representation.
Moreover, some works aim to construct positive pairs in global manners. Both Qin et al. (2020) and Hogan et al. (2022) conduct document-level contrastive learning in the relation extraction task to align the representation of the same entity and relation. Pan et al. (2021) propose an aligned augmentation method that generates more positive sentence pairs in different languages to improve translation performances in non-English directions. Similarly, Qin et al. (2022) acquire multilingual views of the same utterance from bi-lingual dictionaries. Following this line of research, we propose the cross-sample contrastive learning in addition to the in-sample contrastive loss to learn character
representations globally.
## 3 Preliminaries
Generally, character understanding tasks require the model to predict a character's information given a segment of text. For script-based character understanding, the provided texts often consist of conversations within scripts. In this work, we also leverage script summaries as an additional source. We provide detailed examples in Appendix A.
In practice, the model first generates character embeddings \(e\) in the representation learning step. Subsequently, a feed-forward network FFN is often adopted as the classifier with the cross-entropy loss:
\[p=Softmax(\mathrm{FFN}(e)) \tag{1}\]
\[L_{Sup}=-\frac{1}{N}\sum_{i=1}^{N}y_{i}\log(p_{i}) \tag{2}\]
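For concreteness, a minimal PyTorch sketch of the classifier of Eqs. (1)-(2) is given below; the hidden width and the number of character classes are placeholder hyper-parameters rather than tuned settings.

```python
import torch
import torch.nn as nn

class CharacterClassifier(nn.Module):
    """Feed-forward classifier head of Eqs. (1)-(2); a sketch, not the released code."""

    def __init__(self, embed_dim: int, hidden_dim: int, num_characters: int):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_characters),
        )
        # CrossEntropyLoss fuses the softmax of Eq. (1) with Eq. (2)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, e: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        logits = self.ffn(e)             # (N, num_characters)
        return self.loss_fn(logits, y)   # averaged cross-entropy L_Sup
```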
## 4 Method
Our work presents a multi-level contrastive learning framework for character representation learning. Firstly, we follow a general encoding process to obtain character representations from conversations and summaries. Then, we describe two novel contrastive losses to capture fine-grained and global information at both in-sample and cross-sample levels. Finally, we propose a two-stage training paradigm that applies different losses in different learning stages. Figure 1 illustrates an overview pipeline of our method.
### Character Representation in Conversation
To obtain character representations from the conversation field in the scripts, we first concatenate each utterance (Joshi et al., 2020; Beltagy et al., 2020) and utilize a pre-trained language model \(\mathrm{PLM}\)\({}^{1}\) to produce the encoding of the whole text \(\mathbf{H}\):
Footnote 1: Without loss of generality, we adopt several PLMs in our experiments.
\[\mathbf{H}=\mathrm{PLM}(u_{1};u_{2};,...;u_{T}) \tag{3}\]
Then, the character embeddings \(e_{1},e_{2},...e_{n}\) are extracted from the contextual encoding \(\mathbf{H}\). After that, we follow previous works (Bai et al., 2021; Sang et al., 2022) and use an attention-based layer to share character-level information across the embeddings\({}^{2}\):
Footnote 2: We provide further details in Appendix B
\[e_{1},...e_{n}=\mathrm{Extract}(\mathbf{H}) \tag{4}\]
\[e_{1},...e_{n}=\mathrm{Attention}(e_{1},...e_{n}) \tag{5}\]
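To illustrate Eqs. (3)-(4), a minimal sketch of the encoding and extraction steps is shown below, assuming a Hugging Face-style encoder interface; the `mention_spans` argument (token-level boundaries of each mention) is an assumed preprocessing output, and the trained attention layer of Eq. (5) is applied to the returned embeddings afterwards.

```python
import torch

def encode_and_extract(plm, tokenizer, utterances, mention_spans):
    """Sketch of Eqs. (3)-(4): encode the concatenated utterances u_1;...;u_T
    with a PLM and pull out one embedding per character mention.
    `mention_spans` -- (start, end) token indices of each mention -- is an
    assumed preprocessing output, not the paper's exact interface."""
    text = tokenizer.sep_token.join(utterances)        # concatenation of Eq. (3)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    H = plm(**inputs).last_hidden_state[0]             # Eq. (3): (T, d)
    # Eq. (4): represent each mention by its span-boundary tokens, mirroring
    # Eq. (7); the attention step of Eq. (5) is applied to this output next.
    return torch.stack([H[s] + H[e] for s, e in mention_spans])
```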
However, the conversations in the scripts are complex, and thus character embeddings based solely on the conversations are often insufficient for fine-grained character understanding.
Figure 1: The overview pipeline of our method. Each color represents a character entity or embedding. The conversation and summary encoding parts correspond to Section 4.1 and 4.2 respectively. The multi-level contrastive learning part corresponds to Section 4.3. The inference with character embedding part corresponds to Section 4.4.
### Character Representation in Summary
To supply more information, we leverage scripts' summaries as auxiliary data and apply contrastive learning to capture the character intricacies.
Similar to conversation encoding, given a summary \(S\) containing a group of character mentions \(\{cm_{1}^{s},cm_{2}^{s},...,cm_{n}^{s}\}\), we also encode the whole summary and extract the character representations:
\[\mathbf{H}_{s}=\mathrm{PLM}(S) \tag{6}\] \[e_{i}^{s}=t_{start_{i}}+t_{end_{i}},\quad 1\leq i\leq n \tag{7}\]
where \(t_{start_{i}}\) and \(t_{end_{i}}\) are the first and last tokens of the \(i_{th}\) character mention \(cm_{i}^{s}\) in the summary.
After that, we follow [1] and use a mention-level self-attention (MLSA) layer\({}^{3}\) to gather information for each character embedding:
Footnote 3: It is a transformer encoder layer with \(B\) repeated blocks. Please refer to Bai et al. (2021) for more details.
\[e_{1}^{s},...,e_{n}^{s}=\mathrm{MLSA}(e_{1}^{s},...,e_{n}^{s}) \tag{8}\]
and the last layer's output \(e_{i}^{s}\) is treated as the character's representation from the summary.
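A minimal sketch of such an MLSA layer, built from standard transformer encoder blocks, is given below; the number of blocks \(B\) and the head count are placeholders, and the exact layer follows Bai et al. (2021).

```python
import torch.nn as nn

class MLSA(nn.Module):
    """Mention-level self-attention of Eq. (8): B stacked transformer encoder
    blocks over the sequence of mention embeddings (an illustrative sketch;
    layer sizes are placeholder hyper-parameters)."""

    def __init__(self, dim: int, num_blocks: int = 2, num_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_blocks)

    def forward(self, mentions):       # (batch, n_mentions, dim)
        return self.encoder(mentions)  # Eq. (8) output e_i^s
```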
### Multi-level Contrastive Learning
To enhance the character representations learned from the conversation and the summary, we develop a novel multi-level contrastive learning to capture both fine-grained and global information.
#### 4.3.1 Summary-conversation Contrastive Learning
At the local in-sample level, we develop a summary-conversation contrastive loss to align representations of the same character. This gives the model an additional perspective on character representation and encourages it to find a general space where different representations of the same character are closer. Concretely, the loss function for the summary-conversation contrastive learning is:
\[L_{Sum}=-\sum_{i=1}^{P}\log\frac{\exp\left(\mathrm{sim}(e_{c_{i}},e_{c_{i}}^{s})/\tau\right)}{\sum_{j=1}^{P}\exp\left(\mathrm{sim}(e_{c_{i}},e_{c_{j}}^{s})/\tau\right)} \tag{9}\]
where \(c_{i}\) denotes the \(i_{th}\) character, and \(P\) here is the number of characters that appear in both scripts and summaries. Also, \(\tau\) is a temperature hyper-parameter, and \(\mathrm{sim}(\cdot,\cdot)\) stands for the similarity function\({}^{4}\). Note that in samples where the conversation and summary contain multiple representations of character \(c_{i}\), we randomly select one as \(e_{c_{i}}\) and \(e_{c_{i}}^{s}\), respectively.
Footnote 4: Here we use Cosine similarity.
By applying the summary-conversation contrastive loss, we are able to learn fine-grained character representations from both summary and conversation texts.
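In implementation terms, Eq. (9) is an InfoNCE-style objective; a minimal PyTorch sketch is shown below, assuming cosine similarity and averaging over characters instead of summing (a constant rescaling of Eq. (9)).

```python
import torch
import torch.nn.functional as F

def summary_conversation_loss(e_conv, e_sum, tau: float = 0.1):
    """InfoNCE form of Eq. (9): row i of `e_conv` and `e_sum` hold the
    conversation- and summary-side embeddings of the same character c_i
    (P rows in total); tau is a placeholder temperature."""
    e_conv = F.normalize(e_conv, dim=-1)   # cosine similarity via dot product
    e_sum = F.normalize(e_sum, dim=-1)
    sim = e_conv @ e_sum.T / tau           # (P, P) similarity matrix
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)    # positives lie on the diagonal
```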
#### 4.3.2 Cross-sample Contrastive Learning
In addition to fine-grained information, global-level information is also crucial for character representation learning [1, 10]. To this end, we also propose a cross-sample contrastive loss to align the same character's representation across different samples within a batch:
\[L_{Cross}=-\sum_{i=1}^{K}\log\frac{\exp\left(\mathrm{sim}(e_{c_{i}}^{1},e_{c_{i}}^{2})/\tau\right)}{\sum_{j=1}^{K}\exp\left(\mathrm{sim}(e_{c_{i}}^{1},e_{c_{j}}^{2})/\tau\right)} \tag{10}\]
\[SI(e_{c_{i}}^{1})\neq SI(e_{c_{i}}^{2}) \tag{11}\]
where \(SI(e)\) denotes the sample index of the character representation \(e\)\({}^{5}\). When there are multiple representations of a given character in a batch, we randomly select two of them. For cross-sample learning, we impose a constraint that restricts \(e_{c_{i}}^{1}\) and \(e_{c_{i}}^{2}\) to originate from different samples. \(K\) is the number of characters appearing in at least two different samples within a batch. In this way, the cross-sample contrastive loss forces the model to utilize global information in a batch and thus obtain a comprehensive understanding of the characters.
Footnote 5: \(e\) generally represents any character embedding.
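A minimal sketch of the positive-pair construction behind Eqs. (10)-(11) is given below; the batch layout (parallel lists of embeddings, character ids, and sample indices) is an assumed interface, and the resulting pairs can be scored with the same InfoNCE routine used for Eq. (9).

```python
import random
from collections import defaultdict
import torch

def cross_sample_pairs(embeddings, char_ids, sample_ids):
    """Build the positive pairs of Eqs. (10)-(11): for every character that
    appears in at least two different samples of the batch, pick one embedding
    from each of two distinct samples.  Assumes at least one such character
    exists in the batch; the data layout is illustrative."""
    by_char = defaultdict(lambda: defaultdict(list))
    for emb, cid, sid in zip(embeddings, char_ids, sample_ids):
        by_char[cid][sid].append(emb)
    anchors, positives = [], []
    for cid, by_sample in by_char.items():
        if len(by_sample) < 2:          # character confined to one sample
            continue
        s1, s2 = random.sample(list(by_sample), 2)  # enforces SI(e^1) != SI(e^2)
        anchors.append(random.choice(by_sample[s1]))
        positives.append(random.choice(by_sample[s2]))
    return torch.stack(anchors), torch.stack(positives)
```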
### Two-stage Training
To fully train the model, we further propose a two-stage training paradigm to apply different losses in different learning stages.
Concretely, in the first stage, we combine the two contrastive losses with the supervised loss, and post-train the pre-trained language model. The supervised loss serves as guidance to facilitate the contrastive learning and stabilizes training at the very beginning. The total loss of the first stage is:
\[L_{Total}=\lambda*L_{Sup}+\alpha*L_{Sum}+\beta*L_{Cross} \tag{12}\]
where \(\lambda,\alpha,\beta\) are hyper-parameters of task ratios, and we will analyze their effects in Section 6.3. After the first stage, only the supervised loss is
kept to train the model in the second stage. This makes the model concentrate on the downstream supervision signals.
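A minimal sketch of this schedule is shown below; the assumption that the model returns the three loss terms of Eq. (12) per batch, as well as the epoch counts, are placeholders.

```python
def train_two_stage(model, loader, optimizer, stage1_epochs, stage2_epochs,
                    lam=1.0, alpha=1.0, beta=1.0):
    """Sketch of the two-stage schedule around Eq. (12).  `model(batch)` is
    assumed to return the triple (L_Sup, L_Sum, L_Cross); epoch counts and
    the task ratios lambda/alpha/beta are placeholders (cf. Section 6.3)."""
    for epoch in range(stage1_epochs + stage2_epochs):
        for batch in loader:
            l_sup, l_sum, l_cross = model(batch)
            if epoch < stage1_epochs:        # stage 1: full Eq. (12)
                loss = lam * l_sup + alpha * l_sum + beta * l_cross
            else:                            # stage 2: supervised loss only
                loss = l_sup
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```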
## 5 Experiments Setup
### Tasks and Datasets
We evaluate the proposed method on three character understanding tasks, i.e., coreference resolution Chen and Choi (2016), character linking Chen and Choi (2016), and character guessing Sang et al. (2022).
**Coreference Resolution** Given a conversation in scripts that contains multiple utterances and \(n\) character mention entities \(c_{1},c_{2},...,c_{n}\), the objective of the coreference resolution task is to assemble all mention entities that refer to the same character into a cluster.
**Character Linking** The input of the character linking task is the same as coreference resolution. Unlike coreference resolution, the goal of character linking is to accurately classify each mention entity to the character in a pre-defined character set \(Z=\{z_{1},z_{2},...,z_{m}\}\).
**Character Guessing** Distinct from previous tasks, the character guessing task focuses on identifying the speaker for each utterance in scripts. In this task, each utterance within a scene is segmented and fed into the model. The speaker's name preceding each utterance is masked and replaced with a special token. The same speaker within a scene is represented by the same special token. The objective of the character guessing task is to predict the identity of the speaker for each special token.
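As a toy illustration of this input format, the snippet below masks the speakers of a scene with per-scene special tokens; the "[P0]", "[P1]" naming is assumed for illustration, and the dataset ships with its own preprocessing.

```python
def mask_speakers(scene_lines):
    """Replace each distinct speaker in a scene with a per-scene special
    token, matching the character guessing setup (illustrative sketch;
    assumes 'Speaker: utterance' formatting)."""
    token_of = {}
    masked = []
    for line in scene_lines:
        speaker, utterance = line.split(":", 1)
        speaker = speaker.strip()
        if speaker not in token_of:                 # same speaker, same token
            token_of[speaker] = f"[P{len(token_of)}]"
        masked.append(f"{token_of[speaker]}:{utterance}")
    return masked, token_of

# e.g. mask_speakers(["Sheldon: I'm not crazy.", "Leonard: Okay.",
#                     "Sheldon: My mother had me tested."])
```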
**Datasets** We choose two TV show datasets to conduct experiments. For coreference resolution and character linking, we use the latest released version of the Character Identification dataset6. For character guessing, we adopt the TVSHOWGUESS dataset7 to conduct experiments. We follow all the training, development, and testing separation provided by the original datasets. The dataset statistics are given in Table 13 in Appendix.
Footnote 6: [https://github.com/emorynlp/character-identification](https://github.com/emorynlp/character-identification)
Footnote 7: [https://github.com/YisiSang/TVSHOWGUESS](https://github.com/YisiSang/TVSHOWGUESS)
### Baseline Models
Following previous works, we adopt several state-of-the-art (SOTA) models in character understanding as baselines and apply the proposed framework to them. For coreference resolution and character linking, we choose **SpanBERT** (Joshi et al., 2020), a transformer-architecture pre-trained model with a contiguous random span mask strategy in the pre-training stage. We also adopt **C\({}^{2}\)**, which combines coreference resolution and character linking together and achieves the SOTA performance on both tasks. For character guessing, we use **BigBird** (Zaheer et al., 2020) and **Longformer** (Beltagy et al., 2020), as they are specialized for long-form document input. We follow Sang et al. (2022) and add a character-specific attentive pooling layer upon the model encoders, denoting the results as **BigBird-P** and **Longformer-P**. Notably, we also design zero-shot and one-shot instruction prompts and evaluate **ChatGPT-3.5** (gpt-3.5-turbo) via its official API\({}^{8}\) as another strong large language model baseline.
Footnote 8: [https://platform.openai.com/docs/api-reference/completions/create](https://platform.openai.com/docs/api-reference/completions/create)
### Evaluation Metrics
For coreference resolution, we follow previous works (Zhou and Choi, 2018; Bai et al., 2021) and use B3, CEAF\(\phi\)4, and BLANC as our evaluation metrics. These three metrics were first proposed by the CoNLL'12 shared task (Pradhan et al., 2012) to measure the clustering performance of the coreference resolution task. For character linking and character guessing, we use Macro and Micro F1 to evaluate the models' classification performances.
### Implementation Details
We employ both the base and large sizes of each model, and implement our proposed method on them. For the summary-conversation contrastive loss, we use the summary corpus collected by Chen et al. (2021). We follow the hyper-parameter settings in the original papers to reproduce each baseline's result. We repeat each experiment 3 times and report the average scores. For ChatGPT prompts and other implementation details, please refer to Appendix C and Appendix D. We will open-source all code in this work.
## 6 Results and Analysis
### Main Results
Table 2 and Table 3 present the automatic evaluation results on the three tasks. Surprisingly, even with specialized instructions and one-shot demonstrations, ChatGPT-3.5 performs the worst among all the baselines on each task. This implies that character understanding is still hard and complex for large language models. Among the three tasks, models perform worse on character guessing than on coreference resolution and character linking. In particular, ChatGPT achieves an extremely low score of 44.05 Macro-F1 in character guessing. Since character guessing requires a deeper understanding of each character and more varied narrative comprehension skills [22], this suggests that **the current pre-trained models, especially LLMs, have room for improvement in tasks that require global and in-depth learning for a specific individual**.
Despite the discrepancies in model architecture and size, the proposed method brings significant improvements to each baseline model on almost every metric, except for B3 and CEAF\(\phi\)4 in the C\({}^{2}\)-large model. These results indicate the effectiveness and compatibility of our method.
### Ablation Studies
We also conduct an ablation study to examine the contributions of the two novel contrastive losses, i.e., the cross-sample loss and the summary-conversation loss. To implement this, we select SpanBERT-base and SpanBERT-large as backbone models and implement model variants by removing one of the two contrastive losses in the training phases.
Table 4 presents the results of our ablation study on the coreference resolution and character linking tasks. Compared with the vanilla SpanBERT-base and SpanBERT-large, adding one or two contrastive losses yields better performance. Additionally, we observe that when applied separately, models with the summary-conversation loss work better than models with the cross-sample loss only. More importantly, the models trained with both contrastive losses together outperform the models with only one loss, indicating the necessity of our multi-level contrastive framework as well as its effectiveness in addressing the two challenges, i.e., text type and text length.
We also conduct an ablation study on the two-stage learning strategy. Table 5 shows the experimental results on C2-base using character linking and coreference resolution. While one-stage multi-task training can also improve the baseline model's performance, we find that it leads to a sub-optimal result compared with our two-stage learning strategy. This observation leads us to the conclusion that supervision-only fine-tuning is also very important in our method, consistently enhancing baseline models' performance. This aligns with the findings of prior research, which advocate for task-specific fine-tuning following multi-task
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c|c} \hline \hline \multirow{2}{*}{MODEL} & \multicolumn{8}{c|}{Coreference Resolution} & \multicolumn{2}{c}{Character Linking} \\ \cline{2-13} & \multicolumn{3}{c|}{B3} & \multicolumn{3}{c|}{CEAF\(\phi\)4} & \multicolumn{3}{c|}{BLANC} & \multicolumn{2}{c|}{MICRO} & \multicolumn{1}{c}{MACRO} \\ \cline{2-13} & PREC. & REC. & F1 & PREC. & REC. & F1 & PREC. & REC. & F1 & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline ChatGPT-Zero-Shot & 63.43 & 59.51 & 61.41 & 68.39 & 64.37 & 66.32 & 80.39 & 77.74 & 78.97 & 74.7 & 64.3 \\ ChatGPT-One-Shot & 66.43 & 62.54 & 64.43 & 68.47 & 64.44 & 66.40 & 82.19 & 79.40 & 80.70 & 76.2 & 63.6 \\ \hline SpanBERT-base & 77.40 & 82.67* & 79.94 & 74.69 & 67.93 & 71.15* & 84.80* & 89.96 & 87.20 & 85.0* & 78.4 \\ SpanBERT-base (Ours) & 79.95* & 84.71 & 82.26 & 76.67 & 70.38 & 73.39* & 87.44 & 91.26 & 89.26 & 86.3 & 78.9* \\ SpanBERT-large & 81.92 & 85.56 & 83.69* & 77.85 & 74.74 & 76.25* & 88.61* & 91.91 & 90.20 & 87.2* & 82.8* \\ SpanBERT-large (Ours) & 83.55* & **87.38*** & 85.42* & **79.83** & 76.29 & 78.02* & 89.18* & 93.00 & 91.00 & **88.2*** & **83.7*** \\ \hline C\({}^{2}\)-base & 80.75 & 84.77* & 82.71* & 76.97 & 71.78 & 74.28 & 82.22* & 91.52 & 89.80* & 85.6 & 80.4* \\ C\({}^{2}\)-base (Ours) & 83.35 & 85.12* & 84.23 & 76.88* & 74.97 & 75.91 & 90.48 & 91.85* & 91.15 & 86.4 & 81.1 \\ C\({}^{2}\)-large & 84.98 & 86.92* & 85.94 & 79.63 & 78.16* & 78.89 & 90.87* & 93.05* & 91.93 & 87.6* & 82.5* \\ C\({}^{2}\)-large (Ours) & **86.42** & 86.44* & **86.24*** & 78.82 & **80.42*** & **79.61** & **91.77*** & **93.13** & **92.45*** & 88.0* & 83.2* \\ \hline \hline \end{tabular}
\end{table}
Table 2: Automatic evaluation results on coreference resolution and character linking. The best results are in bold. We follow previous works to present the results of coreference resolution in a 2-digital decimal and the results of character linking in a 1-digital decimal. * denotes that \(p\leq 0.01\) in the statistical significance test.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & MICRO & MACRO \\ \hline ChatGPT-Zero-Shot & 48.58 & 42.17 \\ ChatGPT-One-Shot & 51.57 & 44.05 \\ \hline BigBird-P-base & 71.01 & 70.32* \\ BigBird-P-base (Ours) & 72.61 & 73.00 \\ BigBird-P-large & 75.43* & 75.24 \\ BigBird-P-large (Ours) & 77.68* & 76.41 \\ \hline Longformer-P-base & 71.80 & 73.75 \\ Longformer-P-base (Ours) & 73.65* & 74.22 \\ Longformer-P-large & 77.58 & 75.92* \\ Longformer-P-large (Ours) & **78.92*** & **76.52*** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Automatic evaluation results on character guessing. The best results are in bold.
post-training (Guan et al., 2020; Han et al., 2021).
### Analysis on Hyper-Parameters
The task ratio setting is also an important component of our method. In this section, we investigate their impacts by testing various task ratios in the first training stage. We employ the SpanBERT-large model and perform experiments on the coreference resolution and character linking tasks.
The results of the hyper-parameter analysis are presented in Table 6. As defined in Equation 12, \(\lambda\), \(\alpha\), and \(\beta\) represent the ratios of the task-specific supervised loss, summary-conversation loss, and cross-sample loss, respectively. Accordingly, the first block (Row 1) presents the vanilla SpanBERT-large performance w/o our framework, and the second block (Rows 2 and 3) shows the model variants with only the supervised loss or only the contrastive losses. Comparing the first and second blocks, we can see that there is no obvious improvement when keeping only the supervised loss, i.e., \(\lambda=1.0,\alpha=0.0,\beta=0.0\) in the first stage. Moreover, when \(\lambda\) is set to \(0.0\), the model trained without the supervised loss also exhibits inferior performance, e.g., there is a notable decrease in Macro F1 (from 82.8 to 78.6). This finding supports our hypothesis that **the task-specific supervision signal plays a crucial role in guiding the two contrastive losses**. When examining the last block (Rows 4-6), we observe that the models w/ our framework under different task ratios consistently surpass the others (except for a single MACRO metric). This further demonstrates the robustness of our method with respect to the task ratio hyper-parameters.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline \(\lambda\) & \(\alpha\) & \(\beta\) & B3 & CEAF\(\phi\)4 & BLANC & MICRO & MACRO \\ \hline \(-\) & \(-\) & \(-\) & 83.69 & 76.25 & 90.20 & 87.2 & 82.8 \\ \hline
1.0 & 0.0 & 0.0 & 83.98 & 76.12 & 90.75 & 87.8 & 82.2 \\
0.0 & 1.0 & 1.0 & 85.04 & 77.72 & 90.77 & 86.4 & 78.6 \\ \hline
1.0 & 1.0 & 1.0 & 85.42 & 78.02 & 91.00 & 88.2 & 83.7 \\
0.5 & 1.0 & 1.0 & 85.14 & 78.15 & 91.02 & 88.4 & 83.2 \\
1.0 & 0.5 & 0.5 & 85.23 & 79.00 & 90.96 & 88.1 & 82.0 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Hyper-parameter analysis results on coreference resolution and character linking. For coreference resolution, we report the F1 scores of the B3, CEAF\(\phi\)4 and BLANC metrics.
\begin{table}
\begin{tabular}{l|c c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{MODEL} & \multicolumn{8}{c|}{Coreference Resolution} & \multicolumn{2}{c}{Character Linking} \\ \cline{2-10} & \multicolumn{2}{c|}{B3} & \multicolumn{2}{c|}{CEAF\(\phi\)4} & \multicolumn{2}{c|}{BLANC} & \multicolumn{2}{c|}{MICRO} & \multicolumn{1}{c}{MACRO} \\ \hline SpanBERT-base & 77.40 & 82.67 & 79.94 & 74.69 & 67.93 & 71.15 & 84.80 & 89.96 & 87.20 & 85.0 & 78.4 \\ SpanBERT-base (Ours) & **79.95** & **84.71** & **82.26** & **76.67** & 70.38 & **73.39** & 87.44 & **91.26** & **89.26** & **86.3** & 78.9 \\ w/o cross-sample loss & 79.48 & 83.06 & 81.23 & 74.72 & **71.07** & 72.85 & **87.68** & 90.59 & 89.08 & 86.1 & **80.0** \\ w/o summary-conversation loss & 79.00 & 83.36 & 81.11 & 74.68 & 70.33 & 72.44 & 85.45 & 90.61 & 87.85 & 85.6 & 78.8 \\ \hline SpanBERT-large & 81.92 & 85.56 & 83.69 & 77.85 & 74.74 & 76.25 & 88.61 & 91.91 & 90.20 & 87.2 & 82.8 \\ SpanBERT-large (Ours) & 83.55 & **87.38** & **85.42** & **79.83** & 76.29 & **78.02** & 89.18 & **93.00** & **91.00** & **88.2** & **83.7** \\ w/o cross-sample loss & **83.85** & 86.68 & 85.24 & 79.44 & 76.37 & 77.88 & **90.68** & 92.47 & 90.65 & 87.4 & 83.6 \\ w/o summary-conversation loss & 85.29 & 83.96 & 84.62 & 74.65 & **79.20** & 76.69 & 91.45 & 91.15 & 90.80 & 87.9 & 82.8 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study results on two contrastive losses. The experiment is conducted using character resolution and character linking.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline & \multicolumn{4}{c|}{Coreference Resolution} & \multicolumn{2}{c}{Character Linking} \\ \hline & B3 & CEAF\(\phi\)4 & BLANC & MICRO & MACRO \\ \hline C2-base & 82.71 & 74.28 & 89.80 & 85.6 & 80.4 \\ \hline C2-base-OS & 83.58 & 74.28 & 90.63 & 86.1 & 80.8 \\ \hline C2-base-TS & **84.23** & **75.91** & **91.15** & **86.4** & **81.1** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study results on two-stage learning strategy. -OS and -TS represents the one-stage training and two-stage training respectively. For one-stage training, we remove the second supervised loss-only stage and adopt the multi-task training only.
Figure 2: Evidence type analysis result.
### Resource Availability Analysis
The proposed summary-conversation contrastive learning relies on well-organized script datasets that include a summary of each scene. This prerequisite could potentially limit the applicability of our approach to datasets in other languages or domains. To address this constraint, we conduct an experiment in which we replace the manually collected summary dataset with an automatically generated one, produced by ChatGPT. As depicted in Table 7, our results indicate that when using the auto-generated corpus in summary-conversation contrastive learning, a significant improvement is still observed compared to the vanilla baseline. This discovery further validates the adaptability of our method, irrespective of whether golden or generated summaries are used.
### Breakdown to Evidence Type
To better understand when and how our method works on each sample, we conduct an evidence type analysis on the character guessing task based on the fine-grained annotations provided by Sang et al. (2022). To remedy the scarcity issue in the original annotations, we merge the fine-grained annotation categories into two broader categories: _Global & In-depth Evidence_ and _Local & Textual Evidence_. More details on evidence type merging are described in Appendix E.
The results of the evidence type analysis are presented in Figure 2. Note that our framework works better when Local & Textual evidence is required for character guessing than when Global & In-depth evidence is required. This finding aligns with our intuition that Global & In-depth evidence is more challenging for the model to comprehend. It is also worth noting that our framework yields larger increases for samples requiring Global & In-depth evidence (2.4% and 2.7% for the base and large size models, respectively) than for those requiring Local & Textual evidence (1.1% and 1.6% for the base and large models, respectively). Based on these results, we safely conclude that **our framework is effective in facilitating character information modeling, especially for global information**.
### Visualization
The core of our method is to learn fine-grained and global character representations. To this end, we also visualize the learned character embeddings in the character guessing task. Specifically, we use character embeddings from the test set of "FRIENDS" (a subset of the TVSHOWGUESS dataset) and randomly choose 6 embeddings for each character from different samples.
Figure 3 shows the visualization results using T-SNE (Van der Maaten and Hinton, 2008). We compare the character embeddings generated by Longformer-P-Large w/ and w/o our framework. One thing to note is that without our framework, some character embeddings of Ross overlap with those of Rachel. This is because, in the TV show "FRIENDS", Ross and Rachel are partners who appear and engage in many scenes together. With our framework, this overlapping phenomenon is greatly mitigated. Overall, our framework encourages the embeddings belonging to the same character to exhibit a more compact clustering pattern. This finding provides a new perspective for understanding the effectiveness of our proposed method in character comprehension tasks.
### Case Study
We also choose a challenging sample from "The Big Bang Theory" subset of TVSHOWGUESS in the character guessing task, and analyze the predictions from Longformer-P-Large w/o and w/ our method, as well as that from ChatGPT.
As shown in Table 8, all the predictions from ChatGPT are wrong, indicating that ChatGPT lacks a fine-grained understanding of each character.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline & \multicolumn{3}{c|}{Coreference Resolution} & \multicolumn{3}{c}{Character Linking} \\ \hline & B3 & CEAF\(\phi\)4 & BLANC & MICRO & MACRO \\ \hline C2-base & 82.71 & 74.28 & 89.80 & 85.6 & 80.4 \\ \hline C2-base-LLM & 84.14 & **76.06** & 90.89 & 86.1 & 80.9 \\ \hline C2-base-G & **84.23** & 75.91 & **91.15** & **86.4** & **81.1** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Experiment results with automatically generated summarization. -LLM and -G denote the model trained on summaries generated by ChatGPT and those trained using the dataset provided by (Chen et al., 2021).
Figure 3: Character embedding visualization result.
Besides, the only difference between the vanilla model w/ and w/o our framework is whether the speaker P1 is predicted correctly or not. In this case, predicting P1 is particularly challenging, as few utterances are spoken by this character. Hence, the models must guess P1's identity using other details in the scene. By understanding the relationships between P1 and the other characters, our method is able to correctly predict that P1 is Sheldon's partner, Amy. This demonstrates that **our method benefits fine-grained understanding of character relationships in script-based character understanding**, e.g., character guessing tasks.
## 7 Discussion about LLMs on Character Understanding
In this section, we discuss in more depth the unsatisfying performance of LLMs with in-context learning (ICL) on character understanding tasks. One possible reason is that the script-based character understanding we focus on requires the model to learn character information globally. For example, in character guessing, anonymous speakers sometimes need to be identified with global evidence, such as linguistic style and the character's relationships with others. These subtle cues are usually not included in the current sample and thus require the model to learn them globally from other samples (Sang et al., 2022). However, because ICL does not allow fine-tuning, LLMs can only utilize local information from the current sample and limited demonstrations to make inferences. We believe this is the reason that LLMs do not perform well in our script-based character understanding scenario. Additionally, we notice that ICL also falls short in some other tasks that involve learning a domain-specific entity or individual across multiple samples, like knowledge graph completion (Yao et al., 2023).
In our work, we notice that as the number of demonstrations increases, the performance of LLMs shows a corresponding improvement. It appears that augmenting the number of demonstrations in the prompt could be a potential strategy for enhancing the capabilities of LLMs in these global learning tasks. Nonetheless, it is essential to note that incorporating an excessive number of relevant samples as demonstrations faces practical challenges, primarily due to constraints related to input length and efficiency considerations. In the future, more effort is needed to explore optimal ways of harnessing the ICL of LLMs in such global learning scenarios.
## 8 Conclusions
In this work, we focus on addressing two key challenges, text length and text type, in script-based character understanding. To overcome these challenges, we propose a novel multi-level contrastive framework that exploits in-sample and cross-sample features. The experimental results on three tasks show that our method is effective and compatible with several SOTA models. We also conduct an in-depth analysis to examine our method in detail and provide several insights into character understanding tasks.
In the future, we plan to apply contrastive learning to other long-form document understanding tasks, such as long document matching Jiang et al. (2019) and fiction understanding Yu et al. (2023).
\begin{table}
\begin{tabular}{l} P0 : Hey, sorry about that \\ P1 : No, we’re sorry. We never should have been \\ comparing relationships in the first place. \\ P2 : Why? We won. You know, I say, next, we take \\ on Koothrappali and his dog. Really give ourselves \\ a challenge. \\ P3 : I just want to say one more thing about this. \\ Just because Penny and I are very different people \\ does not mean that we’re a bad couple. \\ P2 : The answer is one simple test away. \\ Hmm? You know, it’s like when I thought there was \\ a possum in my closet. Did I sit around wondering? \\ No, I sent Leonard in with a pointy stick and a bag. \\ P3 : I killed his Chewbacca slippers. \\ P0 : Let’s just take the test. \\ P3 : No, no, no, I don’t want to. \\ P0 : Oh, well, ’cause you know we’re gonna do bad. \\ P3 : Because it doesn’t matter. I don’t care if we’re \\ a ten or a two. \\ P2 : Or a one. A one is possible. \\ P3 : Marriage is scary. You’re scared, I’m scared. But \\ it doesn’t make me not want to do it. It, it just \\ makes me want to hold your hand and do it with you. \\ P0 : Leonard. \\ P1 : It makes me so happy if you said things like that. \\ P2 : We got an eight-point-two. Trust me, you’re happy. \\ ChatGPT: _P0: Leonard, P1: Sheldon, P2: Penny, P3:Howard_ \\ Vanilla: _P0: Penny, P1: Howard, P2: Sheldon, P3:Leonard_ \\ Ours: _P0: Penny, P1: Amy, P2: Sheldon, P3:Leonard_ \\ Golden: _P0: Penny, P1: Amy, P2: Sheldon, P3:Leonard_ \\ \end{tabular}
\end{table}
Table 8: An example chosen from “The Big Bang Theory” in the character guessing task. We analyze the predictions made by ChatGPT (one-shot), Longformer-P-Large (vanilla and with our framework).
## 9 Limitations
Our framework depends on pre-trained language models (PLMs) to encode conversations and summaries, and requires gradient information to tune the PLMs' parameters. This makes it challenging to apply our approach to language models of gigantic size. In this work, we demonstrate the generalization of our method in the experimental section at the base and large sizes, as well as the incapability of ChatGPT-3.5 on character understanding tasks. Nevertheless, it remains unclear how well our framework would fit 3B+ encoder-decoder PLMs or decoder-only LLMs. As our experiments suggest, there is still room for improvement in character understanding tasks.
|
2308.06310 | Remote Measurement of Heliostat Reflectivity with the Backward Gazing
Procedure | Concentrated solar power is a promising technique enabling renewable energy
production with large scale solar power plants in the near future. Estimating
quantitatively the reflectivity of a solar concentrator is a major issue, since
it has a significant impact on the flux distribution formed on the solar
receiver. Moreover, it is desirable that the mirrors can be measured during
operation in order to evaluate environmental factors such as day night thermal
cycles or soiling and ageing effects at the reflective surfaces. For that
purpose, we used a backward gazing method that was originally developed to
measure mirror shape and misalignment errors. The method operates in quasi
real-time without disturbing the heat production process. It was successfully
tested at a solar tower power plant in France. Its basic principle consists in
acquiring four simultaneous images of a Sun-tracking heliostat, captured from
different observation points located near the thermal receiver. The images are
then processed with a minimization algorithm allowing the determination of
mirror slopes errors. In this communication, it is shown that the algorithm
also allows one to get quantitative reflectivity maps at the surface of the
heliostat. The measurement is fully remote and is used to evaluate surface
reflectivity that depends on optical coatings quality and soiling. Preliminary
results obtained with a Themis heliostat are presented. They show that
reflectivity measurements can be carried out within repeatability about 10
percent Peak-to-Valley (PTV) and 1 percent RMS. Ways to improving these numbers
are discussed in the paper | Francois Henault | 2023-08-11T17:54:43Z | http://arxiv.org/abs/2308.06310v1 | # Remote Measurement of Heliostat Reflectivity with the Backward Gazing Procedure
###### Abstract
Concentrated solar power is a promising technique enabling renewable energy production with large scale solar power plants in the near future. Estimating quantitatively the reflectivity of a solar concentrator is a major issue, since it has a significant impact on the flux distribution formed on the solar receiver. Moreover, it is desirable that the mirrors can be measured during operation in order to evaluate environmental factors such as day/night thermal cycles or soiling and ageing effects at the reflective surfaces. For that purpose, we used a backward gazing method that was originally developed to measure mirror shape and misalignment errors. The method operates in quasi real-time without disturbing the heat production process. It was successfully tested at the Themis solar tower power plant in Targasonne, France. Its basic principle consists in acquiring four simultaneous images of a Sun-tracking heliostat, captured from different observation points located near the thermal receiver. The images are then processed with a minimization algorithm allowing the determination of mirror slopes errors. In this communication, it is shown that the algorithm also allows one to get quantitative reflectivity maps at the surface of the heliostat. The measurement is fully remote and is used to evaluate surface reflectivity that depends on optical coatings quality and soiling. Preliminary results obtained with a Themis heliostat are presented. They show that reflectivity measurements can be carried out within a repeatability of about \(\pm\)5% Peak-to-Valley (PTV) and 1% RMS. Ways of improving these numbers are discussed in the paper.
Keywords: Solar concentrator; Heliostat; Reflectivity measurement; Shape measurement
## Introduction
Concentrated solar power (CSP) is a promising technique enabling renewable energy production with large scale solar power plants in the near future. In CSP tower plants, the reflectivity of the heliostats plays a major role on the achieved performance and system efficiency, implying that the mirrors must be cleaned regularly. Thus it is highly desirable to perform measurements of the heliostats reflectivity in situ and in quasi real-time.
Heliostat reflectivity losses are known to originate from dust deposition in desert environments, optical coating degradation due to day/night thermal cycles and humidity, and more generally from any damage to the optical surfaces. Ways of measuring them have been extensively reviewed in Ref. [1]. They can be schematically divided into two families:
* Using portable reflectometers such as described in Refs. [2-4]. These measurements are generally restricted to mirror samples in the laboratory. Extending them to in situ mirror measurements is feasible, but would require excessive measurement time for heliostat fields comprising hundreds or thousands of mirrors, multiplied by the number of measurement points on each mirror.
* Performing remote measurements as described in Refs. [5-6] that make use of different images of the flux density formed at the solar receiver. They allow estimating the global
reflectivity loss of the heliostats, but provide no information about their locations and amplitudes.
Here is described a local, backward gazing method originally developed to measure the mirror shape and misalignment errors of the heliostats in a reasonable period of time. It allows quasi real-time measurements without disturbing the heat production process. Its basic principle consists in acquiring four simultaneous images of a Sun-tracking heliostat, captured from different observation points located near the solar receiver. The images are then processed with a minimization algorithm allowing the determination of mirror slopes errors. In this communication, it is shown that the algorithm also allows one to get a quantitative reflectivity map at the surface of the heliostat. The measurement is fully remote and allows evaluating soiling effects due to dust accumulation and moisture, as well as surface defects and cracks inside the optical coatings.
The paper is organized as follows: section 2 first describes the Themis experiment and the reflectivity reconstruction algorithm. The measurement methodology and the obtained numerical results are given in section 3, and then discussed in section 4. A brief conclusion is drawn in section 5.
## Method
The backward gazing method was developed in the 1980's to measure the canting and shape errors of the reflective facets of solar concentrators, such as those equipping the 1 MW solar furnace in Odeillo, France, and the focusing heliostats of the solar tower power plant Themis in Targasonne, France [7]. Later, the appearance of modern CCD cameras allowed using more than one single point of observation, therefore achieving quantitative measurements and considerably reinforcing the interest of the method. Numerical simulations were undertaken in order to evaluate its performance, and demonstrated that a measurement accuracy of the mirror slopes and misalignment errors better than 0.1 mrad is feasible [8-12]. A series of experiments were then conducted at the Themis solar power plant, and confirmed the high potential of the method for measuring the slopes errors of the heliostats [13]. All data acquired during these experiments are reusable for quantitatively estimating the reflectivity maps of the heliostats.
### Coordinate systems and scientific notations
The Themis experiment is illustrated in Figure 1. It makes use of the following coordinate systems (Figure 1-a):
* The XYZ reference frame is attached to an individual heliostat with X its optical axis and YZ are its lateral dimensions along which its geometry is defined (see Figure 1-d). Points at the surface of the heliostat are denoted P(y, z) with y and z their Cartesian coordinates.
* The X'Y'Z' reference frame is attached to the solar receiver, or to the target plane. The X'-axis is directed from the centre of the heliostat to the centre of the target plane. The Y' and Z' axes are assumed to be perpendicular to the X'-axis. The four cameras are installed at points M\({}_{i}\) (1 \(\leq\) i \(\leq\) 4) of Cartesian coordinates (y'i, z'i).
In addition, three vectors are defined:
* **S** is a unitary vector directed to the Sun centre,
* **N** is a unitary vector perpendicular to the heliostat surface, parallel to the X-axis,
* **R** is the unitary target vector parallel to the X'-axis.
The vectors **S**, **R** and **N** obey the Snell-Descartes reflection law, which writes in vectorial form:
\[{\bf S+R}=2\big{(}{\bf S}\cdot{\bf N}\big{)}{\bf N}. \tag{1}\]
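In particular, for a Sun-tracking heliostat the normal **N** is the normalized bisector of **S** and **R**; a minimal numerical check of (1) is sketched below, assuming unit input vectors.

```python
import numpy as np

def normal_from_sun_and_target(S, R):
    """Invert the reflection law (1): the Sun-tracking normal N is the
    normalized bisector of the unit vectors S (towards the Sun centre)
    and R (towards the target)."""
    N = S + R
    return N / np.linalg.norm(N)

# quick check that S + R = 2 (S . N) N holds for the returned normal
S = np.array([0.0, 0.6, 0.8])   # arbitrary unit vector towards the Sun
R = np.array([1.0, 0.0, 0.0])   # unit vector towards the target
N = normal_from_sun_and_target(S, R)
assert np.allclose(S + R, 2 * np.dot(S, N) * N)
```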
The different input and output maps at the heliostat surface employed here are summarized in Table 1. It may be noted that theoretical and approximated relations between the angles \(\varepsilon_{i}\)(P), \(a\)(P) and \(h\)(P) were established in Refs. [8-12]. They are not utilized here because the minimization algorithm described in the next subsection allows removing any approximation.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Input and output maps** & **Symbol** & **Unit** \\ \hline Image acquired with the \(i^{th}\) camera (\(1\leq i\leq 4\)) & H\({}_{i}\)(P) & – \\ Simulated image for the \(i^{th}\) camera (\(1\leq i\leq 4\)) & B\({}_{i}\)(P) & – \\ Deviation angle with respect to Sun centre for the \(i^{th}\) camera (\(1\leq i\leq 4\)) & \(\varepsilon_{i}\)(P) & mrad \\ Heliostat reflectivity map & R(P) & – \\ Heliostat slope errors map in azimuth along Y-axis & \(a\)(P) & mrad \\ Heliostat slope errors map in height along Z-axis & \(h\)(P) & mrad \\ \hline \end{tabular}
\end{table}
Table 1: Input and output maps at the surface of the heliostat.
Figure 1: Principle of the four cameras backward gazing method and its implementation at the Themis solar power plant.
## The Themis experiment
The Themis experiment was extensively described in Refs. [11] and [13]. Its main features are illustrated in Figure 1 and summarized below.
* The measured heliostat is made of nine focusing modules, eight of them being strictly identical. A \(9^{\text{th}}\) "complementary" module is located just above the rotating elevation mechanism (see Figures 1-c and 1-d). The modules are tilted one with respect to the other in order to mimic an ideal parabolic profile. The overall dimensions of the heliostat are 8.75 x 7.34 m along the Y and Z axes respectively. The heliostat is located at a distance \(d\) = 131 m from the target plane and is set in Sun-tracking mode.
* Four small cameras equipped with CMOS monochrome sensors and telephoto lenses are used to capture images of the Sun reflected through the heliostat with a maximal resolution of 1280x1024 pixels. They are located behind a thermal shield pierced with four 25-mm diameter pinholes in the Y'Z' target plane, enabling the observation of the heliostat field. They are protected from the concentrated solar radiation by a set of neutral densities. The distance between the cameras is set to 200 mm (see Figures 1-e and 1-f). The common acquisition time of all images is set to 2 milliseconds, which is negligible with respect to the Sun-tracking refreshing rate of the heliostat drive.
* A fifth CMOS camera is located at the top of the solar tower (see Figure 1-b) and mounted on a Sun-tracking mechanism. It is used for radiometric calibration of the images acquired with the four previous cameras.
* Data from the five cameras are acquired simultaneously and transferred to a laptop computer via Ethernet cables and a switch.
The image data processing software is then executed offline, as described in the next subsection. It may be noted that a similar experimental setup was described in Ref. [14] in order to estimate the angular deviations of Sun-tracking heliostats. However, the acquired data were not used for reflectivity measurements.
## Reflectivity reconstruction algorithm
Then, starting from the four pre-processed camera images H\({}_{i}\)(P), the reflectivity reconstruction algorithm consists of the following steps (see Figure 2):
1. Select a point P at the surface of the heliostat.
2. Read the brightness value H\({}_{i}\)(P) at point P from the image of the heliostat recorded with the \(i^{\text{th}}\) camera (\(1\leq i\leq 4\)).
3. Perform reverse ray-tracing starting from point M\({}_{i}\), then reflecting the ray at point P and finally directing it to the solar disk.
4. Compute the angular deviation \(\varepsilon_{i}\) (P) of the reverse reflected ray with respect to the Sun centre.
5. Estimate the image brightness B\({}_{i}\)(P) at point P, either from an analytical model or from the direct Sun image recorded with the calibration camera. In the first option we use Jose's formula [15]: \[\text{B}_{i}\left(\text{P}\right)=0.39+0.61\sqrt{1-\frac{\sin^{2}\varepsilon_{i}\left(\text{P}\right)}{\sin^{2}\varepsilon_{0}}} \tag{2}\] with \(\varepsilon_{0}\) the angular radius of the Sun taken equal to 16 arcmin.
6. Repeat steps 1 to 5 with another camera \(i^{\prime}\neq i\).
7. Compute a cost function defined as: \[\text{CF}=\sqrt{\sum_{i=1}^{4}\left(\text{H}_{i}\left(\text{P}\right)-\text{R}\left(\text{P}\right)\times\text{B}_{i}\left(\text{P}\right)\right)^{2}}. \tag{3}\]
8. Find the minimum value of the cost function CF when varying the reflectivity factor R(P) and the deviation angles \(\boldsymbol{a}\)(P) and \(h\)(P) with a Powell descent algorithm (a sketch of steps 5-8 is given after this list).
9. Repeat steps 1 to 8 for all points P at the heliostat surface.
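A minimal sketch of steps 5-8 is given below, using SciPy's Powell minimizer. The reverse ray-tracing geometry of steps 3-4 is site-specific and is represented here by a placeholder deviation_model; this is an illustration of the procedure, not the software used at Themis.

```python
import numpy as np
from scipy.optimize import minimize

EPS0 = np.radians(16.0 / 60.0)        # angular radius of the Sun (16 arcmin)

def jose_brightness(eps):
    """Sun brightness profile of Eq. (2); zero outside the solar disk."""
    s = np.sin(eps) ** 2 / np.sin(EPS0) ** 2
    return np.where(s < 1.0,
                    0.39 + 0.61 * np.sqrt(np.clip(1.0 - s, 0.0, None)),
                    0.0)

def fit_point(H, deviation_model):
    """Steps 5-8 for one surface point P.  `H` holds the four measured
    brightnesses H_i(P); `deviation_model(a, h)` stands in for the reverse
    ray-tracing of steps 3-4, returning the four deviations eps_i(P)
    (in radians) for trial slope errors (a, h)."""
    def cost(x):                      # x = (R(P), a(P), h(P)); Eq. (3)
        refl, a, h = x
        B = jose_brightness(deviation_model(a, h))
        return np.sqrt(np.sum((H - refl * B) ** 2))
    res = minimize(cost, x0=np.array([0.9, 0.0, 0.0]), method="Powell")
    return res.x                      # fitted reflectivity and slope errors
```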
## Methodology and measurement results
Mapping the full reflectivity distribution R(P) of a heliostat and estimating the global measurement accuracy face a serious difficulty, namely the absence of reference measurements to compare with: in that case, only portable reflectometers could be used [2-4], at the price of excessive measuring time and significantly reduced spatial sampling. Thus we opted for the following methodology:
* A first set of measurements was carried out on the 21st of December 2017 at 14h11 GMT, which corresponds to solar angles \(\alpha_{\mathrm{S}}\) and \(h_{\mathrm{S}}\) equal to -34.3 deg. in azimuth and +19.9 deg. in elevation.
* A second set of measurements was acquired 30 minutes later with solar angles \(\alpha_{\mathrm{S}}\) = -36.2 deg. and \(h_{\mathrm{S}}\) = +15.2 deg.
Figure 2: Flow-chart of the reflectivity and slopes reconstruction algorithm.
It is assumed that reflectivity changes due to the slight variations of the incidence angles on the heliostat (< 3 deg.) are negligible, and that other reflectivity degradations do not occur in this short lapse of time.
Then the difference of the reflectivity measurements between cases 1 and 2 provides a fair estimator of the repeatability error.
It is finally assumed that the absolute measurement accuracy and the repeatability are of the same order of magnitude.
The acquired heliostat images for both cases n\({}^{\circ}\)1 and 2 are reproduced in Figure 3. The measured repeatability is given in Table 2 and illustrated with the false-colour views in Figure 4. Table 2 shows the spatial coverage on each heliostat module, which is proportional to the number of valid pixels where the cost function does not exceed a certain threshold. The average, PTV and RMS reflectivity errors are indicated in the rightmost columns for each heliostat module. It must be noted that module n\({}^{\circ}\)1 was excluded from these statistics because of a too low spatial coverage. The estimated repeatability is then found to be lower than or equal to \(\pm\)5% PTV and 1% in the RMS sense. These results are further discussed in the next section.
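Under our reading of the conventions in Table 2 (PTV quoted as a \(\pm\) half-spread of the difference map, RMS as its standard deviation; these definitions are an assumption, since the exact estimators are not spelled out), the per-module statistics can be computed as sketched below.

```python
import numpy as np

def repeatability_stats(R1, R2, valid):
    """Per-module statistics behind Table 2 (assumed conventions): the
    difference of the two reflectivity maps is taken over the valid pixels;
    PTV is half the max-min spread (quoted as +/-), RMS is the standard
    deviation of the difference.  `valid` is a boolean mask of pixels whose
    cost function stays below the threshold."""
    d = (R1 - R2)[valid]
    return {"coverage_%": 100.0 * valid.mean(),
            "mean_%": 100.0 * d.mean(),
            "ptv_%": 100.0 * 0.5 * (d.max() - d.min()),
            "rms_%": 100.0 * d.std()}
```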
Figure 3: Acquired images during the first and second sets of measurements. Direct Sun images acquired with the fifth camera are displayed at the centre.
\begin{table}
\begin{tabular}{|c|c|c c c|} \hline Module & Spatial & \multicolumn{3}{c|}{**Reflectivity measurement error**} \\ \cline{3-5} number & coverage (\%) & Mean (\%) & PTV (\%) & RMS (\%) \\ \hline
1 & 0.3 & 1.0 & \(\pm\) 1.8 & 1.0 \\
2 & 5.0 & 0.3 & \(\pm\) 4.8 & 0.9 \\
3 & 24.3 & 1.7 & \(\pm\) 5.6 & 1.2 \\
4 & 36.2 & 2.5 & \(\pm\) 4.3 & 1.4 \\
5 & 38.9 & -0.7 & \(\pm\) 5.5 & 1.4 \\
6 & 8.6 & -0.6 & \(\pm\) 4.8 & 1.3 \\
7 & 31.0 & 0.6 & \(\pm\) 5.0 & 1.6 \\
8 & 22.7 & 0.8 & \(\pm\) 5.1 & 0.9 \\
9 & 63.9 & 0.2 & \(\pm\) 5.0 & 0.8 \\ \hline Average & & 0.6 & \(\pm\) 4.7 & 1.2 \\ \hline \end{tabular}
\end{table}
Table 2: Estimation of reflectivity measurement errors for each heliostat module.
Figure 4: (a) Initial measurement. (b) Second measurement performed 30 minutes later. (c) Averaged reflectivity map. (d) Estimation of the repeatability error between both measurements. The maximal values of the images are normalized to unity. Red circles indicate probable cracks locations.
## Discussion
The most important issue probably consists in extending the spatial coverage of the method in order to reconstruct the entire surface of the heliostat. Actually, the limited spatial coverage seen in Figure 4 results from the combination of large slope errors \(\alpha\)(P) and \(h\)(P) with a limited number of cameras \(N_{Cam}=4\). It was demonstrated in Ref. [16] that using more cameras, arranged either in a square or in a line geometry and combined with sun-tracking operation, allows overcoming this difficulty, and would at the same time improve the measurement accuracy by a factor \(1/\sqrt{N_{Cam}}\).
Secondly, careful examination of the reflectivity maps in Figure 4 allows identifying the areas where the measurement accuracy is degraded. It is found that they coincide with those areas where the deviation angles \(\alpha\)(P) and \(h\)(P) are poorly determined [13]. The degraded areas are generally located near the contours of the reflectivity maps where steep transitions between the lighted and unlighted zones are observed.
Moreover, the method also allows locating the corrupted areas precisely. In particular, cracks in the optical coatings can be evidenced. Some of them are marked with red circles in Figure 4. This will be very helpful in deciding if and when cleaning or replacing some heliostat modules with spare ones is required.
Lastly, an extensive analysis of the main experimental error sources was presented in Ref. [13]. Ways of mitigation were proposed as follows:
* For each camera, increasing the number \(N_{i}\) of images acquired during the refreshing time of the tracking heliostat, which is currently limited to \(N_{i}=5\). A reduction of the noise by a factor \(1/\sqrt{N_{i}}\) will result.
* Implementing real-time visualization of the images of the observed heliostat, assisted by remote-controlled zoom and focusing devices on the cameras.
* Improving the registration algorithm of the rectified images of the heliostat down to sub-pixel level.
Provided that such improvements are implemented, we believe that a measurement accuracy of \(\approx 2\%\) PTV and \(0.5\%\) RMS is achievable on the full heliostat surface.
## Conclusion
Estimating quantitatively the reflectivity of solar concentrators will be a key issue for increasing the energy produced by large-scale solar power plants in the near future. This paper described the principle of a backward-gazing method originally intended to measure the shape and canting errors of focusing heliostats. The method proves to be very efficient for regular control of heliostat reflectivity with high spatial resolution. It enables precise assessment of the effects of various environmental factors, such as day/night thermal cycles, soiling and ageing of the reflective surfaces due to dust accumulation and moisture, or cracks in the optical surfaces. It is fully remote and can be operated in quasi real-time when the heliostats are in Sun-tracking mode, without disturbing the electricity production process. A minimization algorithm then allows determining quantitative reflectivity maps at the surface of the heliostats. Preliminary results obtained with a focusing heliostat of the Themis solar tower power plant showed that the current measurement errors are about 5% PTV and 1% RMS. Ways of improving these numbers were discussed; they may allow attaining an accuracy of \(\approx 2\%\) PTV and \(0.5\%\) RMS. The method may finally be used for routine reflectivity measurements, helping in deciding if and when some heliostat modules have to be cleaned or replaced.
## Author contributions
F. Henault is the first author. He is an optical engineer with a PhD in Optics and Photonics and has acquired extensive knowledge about the opto-mechanical design of focusing heliostats.
|
2305.03294 | Quantum battery based on dipole-dipole interaction and external driving
field | The Dicke model is a fundamental model in quantum optics, which describes the
interaction between quantum cavity field and a large ensemble of two-level
atoms. In this work, we propose an efficient charging quantum battery achieved
by considering an extension Dicke model with dipole-dipole interaction and an
external driving field. We focus on the influence of the atomic interaction and
the driving field on the performance of the quantum battery during the charging
process and find that the maximum stored energy exhibits a critical phenomenon.
The maximum stored energy and maximum charging power are investigated by
varying the number of atoms. When the coupling between atoms and cavity is not
very strong, compared to the Dicke quantum battery, such quantum battery can
achieve more stable and faster charging. In addition, the maximum charging
power approximately satisfies a superlinear scaling relation $P_{\rm
max}\varpropto\beta N^{\alpha}$, where the quantum advantage $\alpha=1.6$ can
be reached via optimizing the parameters. | Wuji Zhang, Shuyue Wang, Chunfeng Wu, Gangcheng Wang | 2023-05-05T05:49:39Z | http://arxiv.org/abs/2305.03294v1 | # Quantum battery based on dipole-dipole interaction and external driving field
###### Abstract
The Dicke model is a fundamental model in quantum optics, which describes the interaction between quantum cavity field and a large ensemble of two-level atoms. In this work, we propose an efficient charging quantum battery achieved by considering an extension Dicke model with dipole-dipole interaction and an external driving field. We focus on the influence of the atomic interaction and the driving field on the performance of the quantum battery during the charging process and find that the maximum stored energy exhibits a critical phenomenon. The maximum stored energy and maximum charging power are investigated by varying the number of atoms. When the coupling between atoms and cavity is not very strong, compared to the Dicke quantum battery, such quantum battery can achieve more stable and faster charging. In addition, the maximum charging power approximately satisfies a superlinear scaling relation \(P_{\rm max}\propto\beta N^{\alpha}\), where the quantum advantage \(\alpha=1.6\) can be reached via optimizing the parameters.
## I Introduction
With the advancement of quantum technology, there is growing interest in schemes that utilize quantum effects to enable superior performance of future technological devices [1; 2; 3; 4; 5; 6]. Recently, quantum technology has shown great success in several practical fields, such as quantum computing, quantum cryptography, and thermodynamic nanoscale devices, which are expected to help solve data-analysis problems in communication, optimize sensitive parameters to improve network security, and provide more accurate temperature measurements [7; 8; 9; 10; 11; 12]. Overall, the development of quantum technology promises to offer more miniaturized and more precise devices. Devices with the potential for quantum information processing have also been developed, but strategies for storing and releasing energy in these devices remain a major problem to be addressed [13; 14; 15].
The question of whether quantum effects can improve energy storage to meet the current needs of quantum devices has been explored. The so-called quantum battery (QB), a small quantum system for energy storage and extraction, was introduced in a seminal paper by Alicki and Fannes [16]. This provides an efficient way to address the energy constraints of quantum devices [17; 18; 19]. A central goal of this research field is to optimize the performance of QBs, such as their energy storage and charging rate [20; 21; 22; 23]. Generally speaking, two distinct charging schemes have been proposed, namely collective charging and parallel charging [24; 25]. In the collective charging scheme, all cells are charged by the same charger; in parallel charging, each cell is charged independently through its own charger. Several results have demonstrated that collective charging outperforms parallel charging and that charging acceleration can be achieved [26; 27; 28; 29; 30]. The advantage of collective charging over parallel charging is called the quantum charging advantage [31; 32].
In pursuit of the quantum advantage and possible experimental realization, QBs have been proposed in various models, such as two-level systems [33; 34], three-level systems [35; 36; 37], the two-photon model [38], the superconducting circuit model [39; 40; 41], the Lipkin-Meshkov-Glick model [42], the Sachdev-Ye-Kitaev model [43; 27; 44], the Heisenberg spin-chain model [45; 46], the quantum cavity model [47; 48], the collision model [49; 50; 51], many-body localized models [52; 53; 54; 55], dissipation models [56; 57] and so on [58; 59]. One of the most well-known examples is the Dicke model [60; 42; 61], which describes the interaction of an ensemble of two-level atoms with the single-photon mode of a cavity. Despite its relative simplicity, the Dicke model still exhibits several interesting phenomena as the coupling strength increases to ultrastrong coupling (USC) or even deep-strong coupling (DSC) [62; 63; 64]. Recently, QBs based on the Dicke model have been introduced, assuming a conventional coupling between the atoms and the photon radiation of a cavity. It has been reported that a \(\sqrt{N}\) acceleration can be achieved for the Dicke QB under the collective charging scheme [65; 66].
In the development of QBs, the spin-chain model placed in an optical cavity has also received much attention [67; 68]. The cavity can affect the radiation of the atoms, inducing atom-atom interactions and collective properties. It is worth noting that various protocols can be employed experimentally to create an array of traps in the cavity that maintains the atoms at a fixed distance [69; 70]. Atoms placed at small separations are coupled to each other by the dipole-dipole interaction [71; 72; 73]. Previous studies have rarely considered long-distance interactions between atoms, yet such interactions can produce many interesting phenomena that cannot be ignored in realistic atomic-chain models [74; 75]. These interactions depend on the distance between the atoms. Through this interaction, we can regulate the energy levels and control whether atomic transitions occur [72; 76; 77; 78; 79]. A natural question arises: can the charging process be accelerated compared to the Dicke QB when long-distance interactions between atoms are taken into account?
On the other hand, in a recent work [47], a generalized Dicke QB has been proposed based on the global entanglement interactions and the driving field, which can achieve a much faster charging process than Dicke QB. To take another step forward, we consider the direct dipole-dipole interaction |
2308.05619 | Updating Clinical Risk Stratification Models Using Rank-Based
Compatibility: Approaches for Evaluating and Optimizing Clinician-Model Team
Performance | As data shift or new data become available, updating clinical machine
learning models may be necessary to maintain or improve performance over time.
However, updating a model can introduce compatibility issues when the behavior
of the updated model does not align with user expectations, resulting in poor
user-model team performance. Existing compatibility measures depend on model
decision thresholds, limiting their applicability in settings where models are
used to generate rankings based on estimated risk. To address this limitation,
we propose a novel rank-based compatibility measure, $C^R$, and a new loss
function that aims to optimize discriminative performance while encouraging
good compatibility. Applied to a case study in mortality risk stratification
leveraging data from MIMIC, our approach yields more compatible models while
maintaining discriminative performance compared to existing model selection
techniques, with an increase in $C^R$ of $0.019$ ($95\%$ confidence interval:
$0.005$, $0.035$). This work provides new tools to analyze and update risk
stratification models used in clinical care. | Erkin Ötleş, Brian T. Denton, Jenna Wiens | 2023-08-10T15:08:13Z | http://arxiv.org/abs/2308.05619v1 | Updating Clinical Risk Stratification Models Using Rank-Based Compatibility: Approaches for Evaluating and Optimizing Clinician-Model Team Performance
###### Abstract
As data shift or new data become available, updating clinical machine learning models may be necessary to maintain or improve performance over time. However, updating a model can introduce compatibility issues when the behavior of the updated model does not align with user expectations, resulting in poor user-model team performance. Existing compatibility measures depend on model decision thresholds, limiting their applicability in settings where models are used to generate rankings based on estimated risk. To address this limitation, we propose a novel rank-based compatibility measure, \(\mathcal{C}^{\mathrm{R}}\), and a new loss function that aims to optimize discriminative performance while encouraging good compatibility. Applied to a case study in mortality risk stratification leveraging data from MIMIC, our approach yields more compatible models while maintaining discriminative performance compared to existing model selection techniques, with an increase in \(\mathcal{C}^{\mathrm{R}}\) of 0.019 (95% confidence interval: 0.005, 0.035). This work provides new tools to analyze and update risk stratification models used in clinical care.
## 1 Introduction
As machine learning (ML) models become increasingly integrated into clinical workflows, understanding the impact of model updates on these workflows and users is crucial. Models may be retrained and updated as new data become available to maintain or improve performance over time (Finlayson et al., 2021; Jenkins et al., 2021; Davis et al., 2022). For example, Memorial Sloan Kettering Cancer Center's prostate cancer outcome prediction models are updated annually (Vickers et al., 2017). While primarily intended to improve model performance, model updating can also affect users' expectations, _i.e._, how users believe a model will perform given specific examples or patients. When models behave in
unexpected ways (_e.g._, make mistakes in situations where they were previously accurate), user-model team performance can suffer (Bansal et al., 2019; Guo and Yang, 2020). Thus, selecting updated models based solely on discriminative performance may be insufficient. Model developers may need to consider the potential disruption to existing workflows and alignment with user expectations in addition to discriminative performance (Bansal et al., 2019; Zahedi and Kambhampati, 2021). This creates a need for practical tools to estimate how updated models might influence user expectations without directly querying users (Bansal et al., 2021). Fundamentally, we would like a way to answer this question: _to what extent does an updated model retain the correct behavior of an original model?_
To this end, _compatibility measures_ assess how much an updated model may disrupt a user's mental model compared to the original model and an evaluation dataset. While researchers have proposed compatibility measures for supervised classification, like the backwards trust compatibility measure, these existing measures depend on a decision threshold (Bansal et al., 2019). However, selecting a single fixed threshold may not be appropriate in many settings. In the context of patient risk stratification tools, decision thresholds can depend on system constraints or user preferences (Wynants et al., 2019; Gorski et al., 2017). Similar to how the receiver operating characteristic curve evaluates discriminative performance across all decision thresholds, there is a need for compatibility measures that are independent of a threshold.
Given this gap, we propose a novel rank-based compatibility measure that estimates the probability that an updated model will correctly rank a pair of discordantly labeled patients (a _patient-pair_), given that the original model was correct. This new measure offers
Figure 1: Backwards trust compatibility (\(\mathcal{C}^{\mathrm{BT}}\), Bansal et al. (2019)) vs. Rank-based compatibility (\(\mathcal{C}^{\mathrm{R}}\), proposed). A model is used to stratify patients at risk for an outcome (black) from those that are not (white). Both the original and updated models have decision thresholds independently set to maximize accuracy on the validation set (shown). On this set, the original model has an accuracy of \(\frac{9}{11}\) and an AUROC of \(\frac{26}{30}\). The updated model switches the order of patients highlighted in magenta, resulting in higher accuracy \(\frac{10}{11}\) and AUROC \(\frac{28}{30}\). Out of the 9 patients correctly labeled by the original model, the updated model labels 8 correctly; this fraction, \(\frac{8}{9}\), is \(\mathcal{C}^{\mathrm{BT}}\). This measure depends on the model decision thresholds. Our compatibility measure, \(\mathcal{C}^{\mathrm{R}}\), evaluates the ordering of patient-pairs. Of the 26 patient-pairs correctly ordered by the original model, the updated model correctly ordered 25 (it makes an error on patient-pair E-F), yielding a \(\mathcal{C}^{\mathrm{R}}\) of \(\frac{25}{26}\).
a broader evaluation framework for model updates used in risk stratification and ranking, and applies in settings where model outputs are used for clinical resource allocation decisions. By considering the concordance between model rankings, we can proactively detect potentially harmful updates and avoid negative impacts on user-model team performance.
**Figure 1** provides an overview of our proposed approach, illustrating its relationship to existing performance and compatibility measures. **Figure 2** illustrates the limitations of backwards trust compatibility compared to rank-based compatibility. In this work, we also demonstrate how our new measure relates to model discriminative performance and develop a loss function that can be used to directly optimize for compatibility.
#### Generalizable Insights about Machine Learning in the Context of Healthcare
Healthcare has witnessed an explosion of ML models in recent years, and it is a domain in which the task of ranking patients based on risk arises frequently. At the same time, models must be updated to retain clinical utility. For example, the Epic sepsis model, a patient deterioration model used by tens of thousands of clinicians in the United States, was recently updated in light of reports of poor performance (Gerhart and Thayer, 2021; Wong et al., 2021; Ross, 2022). We focus on a similar case study in which we stratify patients according to their risk of in-hospital mortality. While it may seem that discriminative performance must suffer to maintain compatibility, we show that developers can generate compatible updated models without negatively affecting discriminative performance by using our proposed loss function during training. Compared to updating approaches that ignore compatibility, or use existing compatibility measure, this work facilitates model updates that are more consistent with clinicians' expectations and thus may be more readily accepted and adopted in practice.
Our main contributions are as follows:
* To the best of our knowledge, we introduce the first rank-based compatibility measure based on the concordance of risk estimate pairs.
Figure 2: \(\mathcal{C}^{\text{BT}}\) is sensitive to the choice of both model decision thresholds. Both models have perfect rank-based discrimination (_i.e._, AUROC = 1). Depending on the updated decision threshold, \(\mathcal{C}^{\text{BT}}\) may be \(\frac{1}{2}\) (red), \(\frac{3}{4}\) (yellow), or 1 (green). Regardless of the model decision threshold, \(\mathcal{C}^{\text{R}}\) is 1 for this example.
* We characterize the extent to which the new compatibility measure may vary over potential model updates.
* We introduce a loss function that incorporates ranking _incompatibility loss_, which can be used to train model updates with improved rank-based compatibility characteristics.
* Using MIMIC-III, we present empirical results that demonstrate how the proposed loss function leads to improved rank-based compatibility without a significant decrease in AUROC compared to standard model selection approaches.
## 2 Problem Setup & Background
In the context of learning risk stratification models, a patient \(i\) is represented by the tuple \((\mathbf{x}_{i},y_{i})\), where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) represents the feature vector and \(y_{i}\in\{0,1\}\) represents the binary label (_e.g._, outcome). Risk stratification model, \(f(\cdot)\), outputs risk estimates, \(\hat{p}_{i}\in[0,1]\) that estimate \(\Pr(y_{i}=1|\mathbf{x}_{i})\). These risk estimates can be converted to predicted labels, \(\hat{y}_{i}=\mathbb{1}(\hat{p}_{i}>\tau)\), where \(\tau\) is some decision threshold.
We seek to assess the impact on user expectations when updating an original model, \(f^{o}(\cdot)\), to an updated model, \(f^{u}(\cdot)\). Note that the original and updated models are specific instantiations of the risk stratification models introduced above. They produce risk estimates denoted as \(\hat{p}_{i}^{o}\) and \(\hat{p}_{i}^{u}\), respectively. We refer to the combination of an original and updated model as a _model-pair_. Decision thresholds for the original and the updated models are \(\tau^{o}\) and \(\tau^{u}\), respectively.
The original and candidate updated risk stratification models are evaluated on a held-out set of patients, denoted as \(I\). This set can be partitioned into two mutually exclusive subsets based on patient labels: 0-labeled patients, \(I^{0}\), and 1-labeled patients, \(I^{1}\). The size of these subsets of patients are denoted as \(n^{0}\) and \(n^{1}\), respectively, and their sum, \(n\), is the cardinality of \(I\). We formalize the notion of a _patient-pair_, a pair of patients \(i\) and \(j\) that do not share the same label (_i.e._, \(i\in I^{0}\) and \(j\in I^{1}\)). The total number of patient-pairs, \(m\), is the product \(n^{0}n^{1}\). We denote the number of patient-pairs correctly ranked by the original and updated models as \(m^{o+}\) and \(m^{u+}\), respectively. Both \(m^{o+}\) and \(m^{u+}\) are integers taking on values between 0 and \(m\) inclusively. Given an original model, we aim to select an updated model that achieves good discriminative performance and compatibility.
### Discriminative Performance
Discriminative performance measures a model's ability to separate patients with different labels (Harrell Jr et al., 1996). The area under the receiver operating characteristic curve (AUROC) is widely used to evaluate the discriminative performance of risk stratification models since it evaluates performance across all decision thresholds \(\tau\). The AUROC corresponds to the probability of correctly ranking two patients with differing labels based on the risk estimates produced by the model. It may be estimated by counting the number of patient-pairs ranked correctly by a model, \(m^{o+}\), and then normalizing by the total number of patient-pairs \(m\)(Hanley and Mcneil, 1982):
\[\text{AUROC}(f^{o})=\frac{\sum\limits_{i\in I^{0}}\sum\limits_{j\in I^{1}}\mathbbm{1 }(\hat{p}_{i}^{o}<\hat{p}_{j}^{o})}{m}=\frac{m^{o+}}{m} \tag{1}\]
The AUROC ranges between 0 and 1; a value of 0.5 corresponds to an ordering that is no better than random. The AUROC is the binary case of the concordance index (c-index), and both are related to the Wilcoxon-Mann-Whitney U statistic (Harrell, 1982; Kendall, 1938; Harrell Jr et al., 1996).
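As a concrete illustration of Equation 1, the following sketch counts correctly ranked patient-pairs with a broadcasted comparison. It is our own minimal NumPy rendering, not code from the paper, and ties are counted as incorrectly ranked, matching the strict-inequality indicator.

```python
import numpy as np

def pairwise_auroc(y, p):
    """AUROC of Eq. (1): fraction of discordantly labeled patient-pairs
    (i in I^0, j in I^1) that the risk estimates p rank correctly."""
    p0, p1 = p[y == 0], p[y == 1]
    return (p0[:, None] < p1[None, :]).mean()  # m^{o+} / m
```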
### Backwards Trust Compatibility
Currently, _backwards trust compatibility_ (\(\mathcal{C}^{\text{BT}}\)) is the primary compatibility measure described in the literature (Bansal et al., 2019, a). \(\mathcal{C}^{\text{BT}}\) measures the agreement between the true label and the predicted labels produced by the original and updated models by counting the number of patients both labeled correctly and normalizing by the number of patients the original model labeled correctly:
\[\mathcal{C}^{\text{BT}}(f^{o},f^{u})=\frac{\sum\limits_{i\in I}\mathbbm{1}(y_{ i}=\hat{y}_{i}^{o})\cdot\mathbbm{1}(y_{i}=\hat{y}_{i}^{u})}{\sum\limits_{i\in I }\mathbbm{1}(y_{i}=\hat{y}_{i}^{o})} \tag{2}\]
\(\mathcal{C}^{\text{BT}}\) depends on an evaluation set of patients, \(I\), and values range between 0 and 1. \(\mathcal{C}^{\text{BT}}=0\) when the updated model mislabels all the patients labeled correctly by the original model, and \(\mathcal{C}^{\text{BT}}=1\) when the updated model correctly labels all the patients the original model got correct. \(\mathcal{C}^{\text{BT}}\) is not symmetric, as \(\mathcal{C}^{\text{BT}}(f^{o},f^{u})\) does not necessarily equal \(\mathcal{C}^{\text{BT}}(f^{u},f^{o})\). \(\mathcal{C}^{\text{BT}}\) is expected to decrease in settings with dataset shifts as the feature-label relationships captured by the model-pairs differ (Srivastava et al., 2020).
In the context of patient risk stratification models, calculating \(\mathcal{C}^{\text{BT}}\) requires first thresholding risk scores to produce binary predictions. However, many settings in healthcare do not use a decision threshold (Wynants et al., 2019). For example, patients in the emergency department may be stratified by continuous risk estimates, and surgeons may use different risk thresholds to recommend surgery. In use cases where there are multiple thresholds, \(\mathcal{C}^{\text{BT}}\) may be computed multiple times; however, this is problematic for several reasons. First, the evaluation grows proportionally with the number of thresholds being considered. Second, there is limited utility in doing so for cases with a class imbalance (see **Appendix Section C.6**) Third, \(\mathcal{C}^{\text{BT}}\) is sensitive to the selection of thresholds, and poorly chosen thresholds could lead to a model with good discrimination being evaluated poorly, as shown in **Figure 2**. These suggest a need for a compatibility measure that applies directly to continuous risk estimates without thresholding.
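For reference, Equation 2 can be transcribed directly; this minimal sketch assumes NumPy arrays of binary labels and thresholded predictions.

```python
import numpy as np

def backwards_trust_compatibility(y, yhat_o, yhat_u):
    """C^BT of Eq. (2): among patients the original model labels
    correctly, the fraction the updated model also labels correctly."""
    correct_o = (y == yhat_o)
    return (correct_o & (y == yhat_u)).sum() / correct_o.sum()
```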
## 3 Methods
We present our proposed rank-based compatibility measure, \(\mathcal{C}^{\text{R}}\), which measures compatibility independent of a decision threshold by examining the ranking concordance of patient-pairs. While related to the AUROC, we hypothesize that optimizing discriminative model
performance by minimizing binary cross-entropy loss when training models may not necessarily lead to high \(\mathcal{C}^{\text{R}}\). Thus, we propose a new loss function based on a differentiable approximation of \(\mathcal{C}^{\text{R}}\) that can be used when training updated models.
### Rank-Based Compatibility
The rank-based compatibility, presented in **Equation 3**, compares the ranking produced by the updated model against the ranking produced by the original model.
\[\mathcal{C}^{\text{R}}(f^{o},f^{u}):=\frac{\sum\limits_{i\in I^{0}}\sum \limits_{j\in I^{1}}\mathds{1}(\hat{p}_{i}^{o}<\hat{p}_{j}^{o})\cdot\mathds{1} (\hat{p}_{i}^{u}<\hat{p}_{j}^{u})}{\sum\limits_{i\in I^{0}}\sum\limits_{j\in I ^{1}}\mathds{1}(\hat{p}_{i}^{o}<\hat{p}_{j}^{o})} \tag{3}\]
Given a set of evaluation patients, \(I\), \(\mathcal{C}^{\text{R}}\) corresponds to the number of patient-pairs that both models rank correctly normalized by the number of patient-pairs that the original model ranked correctly. In contrast with \(\mathcal{C}^{\text{BT}}\), which operates by counting patients, \(\mathcal{C}^{\text{R}}\) operates on patient-pairs produced by the mutually disjoint subsets \(I^{0}\) and \(I^{1}\). \(\mathcal{C}^{\text{R}}\) measures the concordance of ranking patient-pairs and ranges from 0 to 1. In contrast, \(\mathcal{C}^{\text{BT}}\) measures concordance with respect to binary patient predictions.
Although this work focuses on risk stratification models that operate over patients with binary outcomes, \(\mathcal{C}^{\text{R}}\) is not limited to this setting; we present a general form of \(\mathcal{C}^{\text{R}}\) in **Appendix Equation 6**.
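Equation 3 admits the same pairwise treatment as the AUROC estimator above. The following sketch (ours, with ties again counted as incorrect rankings) computes \(\mathcal{C}^{\text{R}}\) from the two models' risk estimates:

```python
import numpy as np

def rank_based_compatibility(y, p_o, p_u):
    """C^R of Eq. (3): among patient-pairs the original model ranks
    correctly, the fraction the updated model also ranks correctly."""
    i0, i1 = (y == 0), (y == 1)
    ok_o = p_o[i0][:, None] < p_o[i1][None, :]  # original model's rankings
    ok_u = p_u[i0][:, None] < p_u[i1][None, :]  # updated model's rankings
    return (ok_o & ok_u).sum() / ok_o.sum()
```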
Relationship to AUROC. Both \(\mathcal{C}^{\text{R}}\) and AUROC involve counting correct patient-pair rankings. We introduce several ancillary rank-based compatibility variables to clarify how \(\mathcal{C}^{\text{R}}\) relates to AUROC. Four proportion of patient-pairs (POP) variables measure how two models rank (correctly vs. incorrectly) patient-pairs.
The POP variables, \(\phi^{ab}\), follow a convention where \(a\) represents how the original model ranks patient-pairs correctly (\(+\)) vs. incorrectly (\(-\)), and \(b\) represents the same information for the updated model. For example, the POP variable for patient-pairs _correctly_ ordered by both models is denoted by \(\phi^{++}\), and the proportion of patient-pairs _incorrectly_ ordered by both models is \(\phi^{--}\). The four POP variables sum to 1. From the POP variables, one can
\begin{table}
\begin{tabular}{l|l l|l} \hline & **Original Model** & **Original Model** & \\ & **Ranks Correctly** & **Ranks Incorrectly** & \\ \hline
**Updated Model** & \multirow{2}{*}{\(\phi^{++}=\frac{m^{++}}{m}\)} & \multirow{2}{*}{\(\phi^{-+}=\frac{m^{-+}}{m}\)} & \multirow{2}{*}{AUROC\((f^{u})=\frac{m^{u+}}{m}\)} \\
**Ranks Correctly** & & & \\ \cline{1-1}
**Updated Model** & \multirow{2}{*}{\(\phi^{+-}=\frac{m^{+-}}{m}\)} & \multirow{2}{*}{\(\phi^{--}=\frac{m^{--}}{m}\)} & \multirow{2}{*}{\(1-\text{AUROC}(f^{u})\)} \\
**Ranks Incorrectly** & & & \\ \hline & \(\text{AUROC}(f^{o})=\frac{m^{o+}}{m}\) & \(1-\text{AUROC}(f^{o})\) & \\ \hline \end{tabular}
\end{table}
Table 1: Relationship between original and updated model AUROC, proportion of patient-pairs and count variables.
calculate the AUROC of each model (_e.g._, \(\text{AUROC}(f^{o})=\phi^{++}+\phi^{+-}\)). Each POP variable is proportional to a patient-pair count variable: \(m^{++},m^{+-},m^{-+}\), and \(m^{--}\), which follow the same \({}^{,ab}\) notation. The relationships among the POP variables, the count variables, and discriminative performances can be expressed in a tabular manner, as depicted in **Table 1**. From these relationships, \(\mathcal{C}^{\text{R}}=\frac{m^{++}}{m^{o+}}=\frac{\phi^{++}}{\text{AUROC}(f^{ o})}\).
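The same two indicator matrices used in the sketches above suffice to recover all four POP variables of Table 1, from which both AUROCs and \(\mathcal{C}^{\text{R}}\) follow; a minimal sketch (NumPy boolean arrays assumed, as before):

```python
def pop_variables(y, p_o, p_u):
    """POP variables of Table 1: (phi^{++}, phi^{+-}, phi^{-+}, phi^{--}).
    AUROC(f^o) = phi^{++} + phi^{+-}, AUROC(f^u) = phi^{++} + phi^{-+},
    and C^R = phi^{++} / AUROC(f^o)."""
    i0, i1 = (y == 0), (y == 1)
    ok_o = p_o[i0][:, None] < p_o[i1][None, :]
    ok_u = p_u[i0][:, None] < p_u[i1][None, :]
    m = ok_o.size  # total number of patient-pairs, n^0 * n^1
    return ((ok_o & ok_u).sum() / m, (ok_o & ~ok_u).sum() / m,
            (~ok_o & ok_u).sum() / m, (~ok_o & ~ok_u).sum() / m)
```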
Rank-Based Compatibility Lower Bound. Given \(\text{AUROC}(f^{o})\) and \(\text{AUROC}(f^{u})\), we can bound all POP variables (see Appendix Section B.2). Here, we assume that \(0.5<\text{AUROC}(f^{o})\leq\text{AUROC}(f^{u})\leq 1\), yielding the following lower bound for the rank-based compatibility:
\[\frac{\text{AUROC}(f^{o})+\text{AUROC}(f^{u})-1}{\text{AUROC}(f^{o})}\leq \mathcal{C}^{\text{R}}(f^{o},f^{u})\]
This bound can be used to contextualize the \(\mathcal{C}^{\text{R}}\) of an update, as the range of \(\mathcal{C}^{\text{R}}\) changes depending on the model AUROCs being considered. The lower bound of \(\mathcal{C}^{\text{R}}\) increases with respect to the AUROC of the updated model (shown graphically in **Appendix Section B.3**). We note that the upper bound is always \(1\) for the model updating region we are interested in.
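In code, the bound is a one-liner; as a worked example, for \(\text{AUROC}(f^{o})=0.80\) and \(\text{AUROC}(f^{u})=0.85\), \(\mathcal{C}^{\text{R}}\) can be no lower than \(0.65/0.80=0.8125\).

```python
def cr_lower_bound(auroc_o, auroc_u):
    """Lower bound on C^R, assuming 0.5 < AUROC(f^o) <= AUROC(f^u) <= 1."""
    return (auroc_o + auroc_u - 1.0) / auroc_o

assert abs(cr_lower_bound(0.80, 0.85) - 0.8125) < 1e-12
```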
### Optimizing for Rank-Based Compatibility
While standard model training and selection procedures that typically focus on discriminative performance will result in a larger lower bound for \(\mathcal{C}^{\text{R}}\), one may choose to optimize directly for \(\mathcal{C}^{\text{R}}\). However, as defined, \(\mathcal{C}^{\text{R}}\) is non-differentiable due to the _ranking indicator function_, \(\mathbbm{1}(\hat{p}_{i}<\hat{p}_{j})\). To facilitate the use of rank-based incompatibility loss in gradient-based optimization, we introduce a differentiable approximation of rank-based compatibility:
\[\widehat{\mathcal{C}^{\text{R}}}(f^{o},f^{u})=\frac{\sum\limits_{i\in I^{0}} \sum\limits_{j\in I^{1}}\sigma(\hat{p}_{j}^{o}-\hat{p}_{i}^{o})\cdot\sigma( \hat{p}_{j}^{u}-\hat{p}_{i}^{u})}{\sum\limits_{i\in I^{0}}\sum\limits_{j\in I ^{1}}\sigma(\hat{p}_{j}^{o}-\hat{p}_{i}^{o})}\]
This approximation replaces the ranking indicator function used to evaluate patient pairs with a _ranking sigmoid function_:
\[\sigma(\hat{d}_{ji})=\frac{1}{1+\exp(-s\cdot\hat{d}_{ji})}\]
Where \(\hat{d}_{ji}\) is the difference in risk estimates produced for a patient pair (_i.e._, \(\hat{d}_{ji}=\hat{p}_{j}-\hat{p}_{i}\) and ranges between \(-1\) and \(1\)). A correct ranking corresponds to \(\hat{d}_{ji}>0\) and an incorrect ranking corresponds to \(\hat{d}_{ji}<0\). The sigmoid function maps this to a value between \(0\) and \(1\), closer to the behavior of the ranking indicator function (Han and Moraga, 1995). A hyperparameter, \(s\), controls the spread of this mapping. Note that using a sigmoid to overcome discontinuity in the loss function is similar to work introduced to optimize for the AUROC directly (Yan et al., 2003).
Risk stratification models are often trained by minimizing the binary cross-entropy loss \(\mathcal{L}^{BCE}\). This attempts to optimize the discriminative performance of the model by reducing
the probability estimates for 0-labeled patients and increasing them for 1-labeled patients, and indirectly optimizes the correct ranking of patient-pairs, the AUROC (Cortes and Mohri, 2003). However, \(\mathcal{L}^{BCE}\) only examines the relationship between a patient's label and the risk estimates produced by a model. To incorporate rank-based compatibility, we augment model update training to incentivize rank-based compatibility, using a weighted combination of binary cross-entropy and \(\widetilde{\mathcal{L}^{R}}=1-\widetilde{\mathcal{C}^{R}}\):
\[\alpha\mathcal{L}^{BCE}+(1-\alpha)\widetilde{\mathcal{L}^{R}}\text{ where } \alpha\in[0,1] \tag{4}\]
Hyperparameter \(\alpha\) controls the trade-off between discriminative performance and compatibility. During training, the predictions produced by the original model are incorporated into the loss function.
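Putting Equation 4 together with the smoothed \(\widetilde{\mathcal{C}^{R}}\), a value-level sketch looks as follows. We compute plain NumPy values for clarity; in practice, the same operations would be expressed in an automatic-differentiation framework so that gradients with respect to the updated model's parameters are available. The spread \(s=10\) is an illustrative choice of ours, not a value taken from the paper.

```python
import numpy as np

def ranking_sigmoid(d, s=10.0):
    """sigma(d_ji) with spread hyperparameter s (s = 10 is illustrative)."""
    return 1.0 / (1.0 + np.exp(-s * d))

def smoothed_cr(y, p_o, p_u, s=10.0):
    """Differentiable approximation of C^R: indicator functions replaced
    by ranking sigmoids over the patient-pair differences d_ji."""
    i0, i1 = (y == 0), (y == 1)
    d_o = p_o[i1][None, :] - p_o[i0][:, None]  # d_ji for the original model
    d_u = p_u[i1][None, :] - p_u[i0][:, None]  # d_ji for the updated model
    w_o = ranking_sigmoid(d_o, s)
    return (w_o * ranking_sigmoid(d_u, s)).sum() / w_o.sum()

def update_loss(y, p_o, p_u, alpha=0.5, s=10.0, eps=1e-12):
    """Eq. (4): alpha * L^BCE + (1 - alpha) * L~^R, with L~^R = 1 - C~^R."""
    p = np.clip(p_u, eps, 1.0 - eps)
    bce = -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()
    return alpha * bce + (1.0 - alpha) * (1.0 - smoothed_cr(y, p_o, p_u, s))
```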
## 4 Experiments & Results
We focus on understanding and engineering model updates in terms of \(\mathcal{C}^{R}\) using a real-world benchmark dataset. While \(\mathcal{C}^{R}\) could be used as a validation metric when selecting among candidate models during an update procedure, we hypothesize that by including \(\widetilde{\mathcal{C}^{R}}\) in the loss function, we can achieve better compatibility without paying a penalty in terms of AUROC. To test this hypothesis, we generated and analyzed model updates on the MIMIC-III mortality prediction dataset.
**Questions.** Our experiments seek to answer two related questions:
1. _What is the empirical distribution of \(\mathcal{C}^{R}\) achieved using standard model updates when using real data?_ (**Section 4.2**, **Figure 4**)
2. _Compared to standard model update generation and selection approaches, can we use the rank-based incompatibility loss, \(\widetilde{\mathcal{L}^{R}}\), to generate updates with better \(\mathcal{C}^{R}\)? Can this be accomplished without a loss of AUROC?_ (**Section 4.3**, **Figures 5**, 6**, 7**, 13**, and 14)
### Data & Model Updating Setup
Dataset & Task. We use Bansal et al. (2019)'s work as the foundation for our experimental setup. Their experimental work analyzing \(\mathcal{C}^{\text{BT}}\) in the setting of updating an in-hospital mortality prediction model served as a template for our main analyses. To maintain consistency and enable comparisons between \(\mathcal{C}^{\text{BT}}\) and \(\mathcal{C}^{\text{R}}\), we modeled our predictive task, dataset splits, and model architectures on their initial experiments.
Like Bansal et al. (2019), we employed the MIMIC-III dataset (Johnson et al., 2016), with the goal of predicting in-hospital mortality based on the first 48 hours of a patient's ICU stay, with the population and task defined by Harutyunyan et al. (2019). The data were transformed using FIDDLE (Tang et al., 2020). For details regarding data inclusion and transformation, please see the procedures detailed by Tang et al. (2020). Since our goal was not to learn the best possible mortality prediction tool but to investigate the applicability of \(\mathcal{C}^{R}\), we reduced the number of features from \(350,832\) to \(35,000\) by random sampling, for computational efficiency.
We randomly split the MIMIC-III data into three disjoint datasets. Two of these datasets were allocated for model development and validation. The third dataset was reserved for held-out evaluation. \(8,577\) patients in the MIMIC-III dataset meet the in-hospital mortality inclusion criteria defined by Harutyunyan et al. (2019). The datasets were split similarly to Bansal et al. (2019): \(1,000\) patients were allocated to the original model dataset, \(5,000\) were assigned to the updated model dataset, and \(2,577\) were held out for the evaluation dataset. The two model datasets were used to develop and validate the original and updated models, and each was split equally (\(50/50\%\)) into development and validation datasets. The dataset partitions and their sizes are depicted in **Figure 3**.
Original model training & selection. Original models were trained using regularized logistic regression. The L2 regularization strength was selected from \(\{0.1,0.01,0.001\}\) to maximize validation AUROC.
Updated model training & selection. Two different types of updated models were created to assess standard updating approaches against our proposed loss function. Standard updates, "BCE models", were trained to minimize \(\mathcal{L}^{BCE}\) subject to regularization. The same regularization weights used for the original models were available for the updated models.
Using the same original model and data, we generated additional updated models, "RBC models", based on a loss function that incorporates \(\mathcal{L}^{R}\) (**Equation 4**), sweeping \(\alpha\) over the set \(\{0,0.1,0.2,...,0.9,1\}\).
Updated models from the "BCE" and "RBC models" were selected based on maximizing the following validation function:
\[\beta\,\text{AUROC}(f^{u})+(1-\beta)\,\mathcal{C}^{\text{R}}(f^{o},f^{u})\text { where }\beta\in[0,1] \tag{5}\]
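Model selection under Equation 5 then reduces to scoring each candidate's validation risk estimates. A minimal sketch, reusing the `pairwise_auroc` and `rank_based_compatibility` helpers defined earlier (the candidate dictionary is a hypothetical structure of our own, not part of the paper's code):

```python
def select_update(candidates, p_o_val, y_val, beta=0.5):
    """Eq. (5): return the candidate name maximising
    beta * AUROC(f^u) + (1 - beta) * C^R(f^o, f^u) on the validation set.
    candidates: dict mapping a model name to its validation risk estimates."""
    def score(p_u):
        return (beta * pairwise_auroc(y_val, p_u)
                + (1.0 - beta) * rank_based_compatibility(y_val, p_o_val, p_u))
    return max(candidates, key=lambda name: score(candidates[name]))
```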
Evaluation. The selected updated models were evaluated in terms of \(\mathcal{C}^{\text{R}}\) and AUROC on the held-out evaluation dataset. The process of splitting the data, training model-pairs, and evaluation was replicated 40 times.
Figure 3: The MIMIC-III mortality data was partitioned into three datasets. Two of these datasets were allocated for model development and validation, and one was held-out for evaluation. Model-pairs were evaluated on the evaluation dataset.
### Rank-Based Compatibility Distribution
We first investigate: _What is the empirical distribution of \(\mathcal{C}^{\mathrm{R}}\) achieved using standard model updates (i.e., minimizing the binary cross-entropy loss) when using real data?_ Using the experimental setup described above, we created 150 standard updated models for each original model, minimizing \(\mathcal{L}^{BCE}\). To introduce variation, these 150 candidate "BCE models" were created by combining dataset resampling, shuffling, and regularization weights. The updated model development dataset was either resampled with replacement (45 times) or shuffled (5 times, which yields differences in models due to our use of stochastic gradient descent), and models were then trained using binary cross-entropy loss with one of three L2 regularization weights (\((45+5)\cdot 3=150\)).
We calculated the AUROC of the original model and the resultant \(\mathcal{C}^{\mathrm{R}}\) and AUROC across the candidate update models (**Figure 4**). Across the 150 "BCE models," empirical 95% confidence intervals were calculated for AUROC(\(f^{u}\)), and violin plots were generated for \(\mathcal{C}^{\mathrm{R}}\).
The observed \(\mathcal{C}^{\mathrm{R}}\) values for the set of candidate updates vary across a portion of the feasible range (between the lower and upper bounds). We note that the observed distribution of \(\mathcal{C}^{\mathrm{R}}\) shifts in relation to the AUROCs of the models. To contextualize this, we
Figure 4: \(\mathcal{C}^{\mathrm{R}}\) Distribution For Model Updates on the MIMIC-III Mortality Task. An original model was selected for each replication and 150 “BCE models” were generated as candidate updates. We plot the AUROC of the original model (blue dots) and updated “BCE models” (red, 95% confidence intervals). We also show the expected lower bounds for \(\mathcal{C}^{\mathrm{R}}\) (light gray). Finally, the “BCE models” \(\mathcal{C}^{\mathrm{R}}\)s distribution are plotted as violin plots (gray).
can examine the POP variable \(\phi^{++}\). We see that the distribution of \(\phi^{++}\) for this experiment tends to a central value; this is shown and discussed in **Appendix Section C.4**. These results show the behavior of \(\mathcal{C}^{\mathrm{R}}\) for one data-generating process, where we see some variation in \(\mathcal{C}^{\mathrm{R}}\) values that provide limited options for model developers to select among. Additionally, we see that larger \(\mathcal{C}^{\mathrm{R}}\) values are possible but not observed through standard update generation procedures (this is the space above the observed \(\mathcal{C}^{\mathrm{R}}\) violin plots in **Figure 4**). These findings are underscored in an analytical sketch discussed in **Appendix Section B.5**. _All together, these results mean that model developers may be constrained if they wish to develop updated models that optimize for \(\mathcal{C}^{\mathrm{R}}\) using standard update generation procedures._
### Weighted Loss vs. Standard Updated Model Selection
We now investigate our second question: _Compared to optimizing for \(\mathcal{L}^{BCE}\) alone, does incorporating the rank-based incompatibility loss, \(\mathcal{L}^{R}\), generate updates with better \(\mathcal{C}^{\mathrm{R}}\)?_
For each replication, we generated 150 "BCE models" using the generation procedure described above. For each value of \(\alpha\in\{0,0.1,0.2,...,0.9,1\}\), we also generated 3 "RBC models". This was done by sweeping the regularization strengths used above. Aside from the objective function used during training (and early stopping), other aspects of model training and selection were held constant across approaches. To give the baseline the best chance, we resampled and shuffled the training data while training the BCE models to more fully explore the space of potential updates (resulting in 150 updates instead of 3). The best "BCE" and "RBC models" from these model sets were selected based on validation performance using **Equation 5**. We compare the resulting "BCE" and "RBC models" by calculating the difference in rank-based compatibility, \(\Delta\,\mathcal{C}^{\mathrm{R}}\), and difference in AUROC, \(\Delta\,\)AUROC (an example of this calculation can be found in **Appendix Section C.2**). We repeated this process 40 times, for every value of \(\alpha\) and every value of \(\beta\in\{0,0.1,...,0.9,1\}\) and compared the mean differences in both \(\mathcal{C}^{\mathrm{R}}\) and AUROC.
Results are displayed in **Figure 5**. There is a trade-off between AUROC and \(\mathcal{C}^{\mathrm{R}}\). For many \(\alpha\)-\(\beta\) combinations, there is a significant gain in \(\mathcal{C}^{\mathrm{R}}\) (blue) at the cost of lower AUROC (red) when using the proposed objective function during optimization. However, we note many cases in which there is a gain in compatibility without paying a penalty in terms of AUROC. For example, when \(\alpha=0.5\) and \(\beta=0.5\), we achieve a significant gain in compatibility of \(\Delta\,\mathcal{C}^{\mathrm{R}}=0.019\) (95% confidence interval: 0.005, 0.035) with a \(\Delta AUROC=-0.009\) (\(-0.030\), 0.011).2 By incorporating \(\mathcal{L}^{R}\) during training, it is possible to achieve improved compatibility without compromising discriminative performance. Out of the 121 \(\alpha\)-\(\beta\) combinations, 57 demonstrate statistically significant improvements in \(\mathcal{C}^{\mathrm{R}}\) while maintaining AUROC; see **Appendix Section C.5** for further discussion.
Footnote 2: The “RBC models” had the following performance: \(\mathcal{C}^{\mathrm{R}}=0.966\) (0.948, 0.979) AUROC \(=0.828\) (0.804, 0.855) vs. “BCE models” with \(\mathcal{C}^{\mathrm{R}}=0.947\) (0.932, 0.963)
Examining results across replications for \(\alpha=0.6\) while varying \(\beta\) (**Figure 6**), we see that across selection options, the "RBC model" generally provides a better \(\mathcal{C}^{\mathrm{R}}\) (statistically significant for \(\beta\leq 0.6\)) without a significant decrease in AUROC (_i.e._, \(\Delta AUROC\) is at or close to zero). In **Figure 7**, we set \(\beta=0.6\) during the selection process for both the "RBC models" and the "BCE models", and sweep \(\alpha\) when training the "RBC models". Again,
we observe that for specific \(\alpha\) values (e.g., \(\alpha=0.3-0.6\)), we can significantly improve compatibility without penalizing AUROC performance.
_These empirical results suggest that by incorporating rank-based compatibility into the objective function during training, we can generate model updates with larger \(\mathcal{C}^{\mathrm{R}}\) values than obtained through standard update generation procedures (i.e., minimizing for \(\mathcal{L}^{\mathrm{BCE}}\) alone)._ Moreover, while there is often a trade-off between \(\mathcal{C}^{\mathrm{R}}\) and AUROC, achieving gains in \(\mathcal{C}^{\mathrm{R}}\) while maintaining AUROC(\(f^{u}\)) is possible.
## 5 Discussion & Conclusion
When selecting among potential updated clinical risk stratification models, it may be important to consider compatibility with existing models already in use. In this study, we propose the first rank-based compatibility measure, \(\mathcal{C}^{\mathrm{R}}\), which measures the concordance in ranking between two models. We illustrate the connection between \(\mathcal{C}^{\mathrm{R}}\) and discriminative model performance. This relationship suggests that increased rank-based compatibility accompanies improved discriminative performance, as the lower bound of rank-based compatibility increases as each model's discriminative performance increases. Despite this relationship, we show empirically that it is improbable to observe very high levels of rank-based compatibility through standard updated model development, which tends to focus on optimizing
Figure 5: Performance Differences Between “RBC Models” and “BCE Models” With Variation of \(\alpha\) and \(\beta\). Mean value of \(\Delta\,\mathcal{C}^{\mathrm{R}}\) on the left and mean value of \(\Delta\,\)AUROC on the right, blue shows improvement of “RBC Models” over “BCE models” and red shows degradation. For a large majority of \(\alpha\)-\(\beta\) pairs, there is an improvement in mean \(\mathcal{C}^{\mathrm{R}}\). For a smaller majority, there is a degradation in mean AUROC. This suggests that there is a trade-off between AUROC and \(\mathcal{C}^{\mathrm{R}}\), with improved \(\mathcal{C}^{\mathrm{R}}\) coming at the cost of AUROC. Although this trade-off exists, we note that the degradations in AUROC are often not statistically significant, while the improvements in \(\mathcal{C}^{\mathrm{R}}\) are. This is shown and discussed in **Appendix Section C.5**.
discriminative performance. These findings motivate methods that enable developers to build models with good discriminative performance and rank-based compatibility. As such, we introduce a new differentiable rank-based incompatibility loss function that can be used when training updated models to further optimize for rank-based compatibility.
We used the MIMIC-III dataset to compare our proposed approach to generating model updates to a standard approach that optimizes for binary cross-entropy alone. Our results highlight standard updated model development's limitations in identifying model updates with very high compatibility. Using our proposed approach, we identify models with equivalent discriminative performance yet significantly better compatibility. However, if rank-based compatibility is greatly emphasized over discriminative performance, then improvements may come at a cost.
The rank-based compatibility measure serves a different role than the original backwards trust compatibility measure proposed by Bansal et al. (2019). Depending on the use case, one may choose one over the other. Use cases that strongly depend on decision thresholds, such as sending a notification when a patient risk estimate exceeds a specific threshold, may correspond to clinician mental models best represented by \(\mathcal{C}^{\text{BT}}\). In settings where the decision may depend on the state of the system, such as hospital admission decisions, which are impacted by the number of patients in the emergency department (Gorski et al., 2017), the \(\mathcal{C}^{\text{R}}\) may better represent clinician mental models because it is not tied to a fixed threshold. Additionally, the complexity of this evaluation grows proportionally with
Figure 6: Performance Difference for \(\alpha=0.6\) Sweeping Across \(\beta\). Comparing \(\Delta\,\mathcal{C}^{\text{R}}\) and \(\Delta\,\text{AUROC}\) for various \(\beta\) values. We see that for all \(\beta\) values, there is no statistically significant degradation in AUROC while for \(\beta\) values less than \(0.7\) we see improvement in \(\mathcal{C}^{\text{R}}\). This suggests that “RBC models” yield a benefit over “BCE models” in this regime.
the number of models and thresholds being considered. Thus, if there are many potential thresholds, it may be more effective to use \(\mathcal{C}^{\text{R}}\) directly.
Although we know the absolute scale of rank-based compatibility with 0 denoting "no compatibility" and 1 denoting "perfect compatibility", we do not have a sense of what the numbers in between mean and how they compare across model updates. Ideally, we would like to have a sense of what is an excellent rank-based compatibility value, like we do with the AUROC measure (_e.g._, AUROC\((f)>0.85\)). This will likely come with further study of models being updated across different tasks. One advantage \(\mathcal{C}^{\text{R}}\) does present is that its improvements can be directly compared against improvements in AUROC by examining the POP variables.
While we discussed the different use cases for \(\mathcal{C}^{\text{R}}\) vs. \(\mathcal{C}^{\text{BT}}\), we did not explore users' preferences. Although there may be update tasks for which the \(\mathcal{C}^{\text{R}}\) measure is better suited, we have not yet characterized the relationship between rank-based compatibility and user mental models. For example, a sepsis detection system that flags patients as at risk (Henry et al., 2022) or sends an alert notification (Wong et al., 2021) may be a good candidate for the \(\mathcal{C}^{\text{BT}}\) compatibility measure. Users in these cases would expect consistent correct classification of patients when the underlying model is updated. If users interact with the model to help risk stratify their patients, then the \(\mathcal{C}^{\text{R}}\) measure may be a better choice. Tools used for cardiovascular event risk stratification (Lip et al., 2010) and in-hospital deterioration risk stratification (Epic Systems Corporation, 2020; Kamran et al., 2022) may be more effectively updated using \(\mathcal{C}^{\text{R}}\).
Figure 7: Performance Difference for \(\beta=0.6\) Sweeping Across \(\alpha\). Comparing \(\Delta\,\mathcal{C}^{\text{R}}\) and \(\Delta\,\text{AUROC}\) for various \(\alpha\) values. In this case we see a more limited benefit of the "RBC models" over "BCE models", with \(\alpha\in[0.3,0.6]\) showing significant benefit in \(\mathcal{C}^{\text{R}}\) and no significant degradation in AUROC.
Like backwards trust compatibility, rank-based compatibility captures the "global" user perspective. Modifying rank-based compatibility to focus on individual user perspectives may lead to better compatibility and parity with user expectations (Martinez et al., 2020, 2021). We have focused our study of rank-based compatibility exclusively on when the updated model continues the correct behaviors established by the original model. Previous user studies have shown that user mental models are influenced by the error behavior of classification models (Bansal et al., 2019). This may hold for risk stratification models, motivating the study of incorrect ranking in conjunction with rank-based compatibility. We believe there is much work to do with this measure in terms of human user studies.
Finally, the primary analysis we present is based on the experimental setup developed by Bansal et al. (2019). Although this was intentionally done to enable the comparison of the \(\mathcal{C}^{\text{BT}}\) and \(\mathcal{C}^{\text{R}}\) it is not an exhaustive evaluation. Notably, future work may benefit from the exploration of different tasks, datasets, and model architectures. Some tasks like survival analysis (Otles et al., 2022) may be able to use the general form of \(\mathcal{C}^{\text{R}}\), **Equation 6**. Different model architectures may need adaption of the joint optimization of performance and compatibility proposed by this work. Additionally, there are real world complexities that are unaccounted for in this analysis, such as outcome censoring due to clinician interventions based on model predictions (Adam et al., 2020) and the impact of deployment infrastructure changing as models are updated (Otles et al., 2021).
These limitations notwithstanding, the new rank-based compatibility measure and incompatibility loss present a novel way to think about model maintenance and updating models, beyond simply optimizing for AUROC. Furthermore, optimizing the rank concordance between the output of two models, rather than thresholded predictions, may be more robust to calibration shifts, a commonly observed phenomenon in healthcare (Hickey et al., 2013; Davis et al., 2017; Minne et al., 2012). We expect this new measure to apply in evaluating healthcare risk stratification models. However, there are likely settings in domains beyond healthcare that would similarly benefit from such rank-based measures. Overall, this work enables the evaluation and development of model updates that have the potential to lead to better clinician-model team performance.
EO was supported by NIH grant T32GM007863 and JW was supported by the Alfred P. Sloan Foundation. The authors would like to thank the anonymous reviewers and editors of the 2023 Machine Learning for Healthcare Conference for their thoughtful feedback.
|
2301.09528 | Scaling and Kinetic Exchange Like Behavior of Hirsch Index and Total
Citation Distributions: Scopus-CiteScore Data Analysis | We analyze the data distributions $f(h)$, $f(N_c$) and $f(N_p)$ of the Hirsch
index $(h)$, total citations ($N_c$) and total number of papers ($N_p$) of the
top scoring 120,000 authors (scientists) from the Stanford cite-score (or
c-score) 2022 list and their corresponding $h$ ($3 \le h \le 284$), $N_c (1009
\le N_c \le 428620$) and $N_p$ ($3\le N_p \le 3791$) statistics from the Scopus
data. For reasons explained in the text, we divided the data of these top
scorers (c-scores in the range 5.6125 to 3.3461) into six successive
equal-sized Groups of 20,000 authors or scientists. We tried to fit, in each
Group, $f(h)$, $f(Nc)$ and $f(Np)$ with Gamma distributions, viewing them as
the ``wealth distributions'' in the fixed saving-propensity kinetic exchange
models and found $f(h) \sim h^{\gamma_h} \mathrm{exp} (-h/T_h)$ with fitting
noise level or temperature level ($T_h$) and average value of $h$, and the
power $\gamma_h$ determined by the ``citation saving propensity'' in each
Group. We further showed that using some earlier proposed power law scaling
like $h = D_c N_c^{\alpha_c}$ (or $h = D_p N_p^{\alpha_p}$) with $\alpha_c =
1/2 = \alpha_p$, we can derive the observed $f(h)$ from the observed $f(N_c)$
or $f(N_p)$, with $D_c = 0.5$, but $D_p$ depending on the Group considered.
This observation suggests that the average citations per paper ($N_c/N_p$) in
each group ($= (D_p/D_c)^2 =4D_p^2$) vary (from 58 to 29) with the c-score
range of the six Groups considered here, implying different effective
Dunbar-like coordination numbers of the scientists belonging to different
groups or networks. | Asim Ghosh, Bikas K. Chakrabarti | 2023-01-23T16:25:53Z | http://arxiv.org/abs/2301.09528v3 | Scaling and Kinetic Exchange Like Behavior of Hirsch Index and Total Citation Distributions: Scopus-CiteScore Data Analysis
###### Abstract
We analyse the data distributions \(f(h)\), \(f(N_{c})\) and \(f(N_{p})\) of the Hirsch index (\(h\)), total citations (\(N_{c}\)) and total number of papers (\(N_{p}\)) of the top scoring 120,000 authors (scientists) from the Stanford cite-score (2022) list and their corresponding \(h\) (\(3\leq h\leq 284\)), \(N_{c}\) (\(1009\leq N_{c}\leq 428620\)) and \(N_{p}\) (\(3\leq N_{p}\leq 3791\)) statistics from the Scopus data, dividing the data into six equal Groups, each containing 20,000 authors or scientists. We find that, in each Group, \(f(h)\), \(f(N_{c})\) and \(f(N_{p})\) fit well with the wealth distributions of the kinetic exchange model with fixed "wealth saving propensity", namely Gamma distributions such as \(f(h)\sim h^{\gamma_{h}}exp(-h/T_{h})\), with the fitting noise level or temperature \(T_{h}\) related to the average value of \(h\), and the power \(\gamma_{h}\) determined by the "citation saving propensity" in each Group. The observation that \(h=D_{c}N_{c}^{\alpha_{c}}=D_{p}N_{p}^{\alpha_{p}}\), with \(\alpha_{c}=1/2=\alpha_{p}\), suggests that the average coordination (Dunbar-like) number of the citation network, given by the average citations per paper (in each Group) \(N_{c}/N_{p}=(D_{p}/D_{c})^{2}\), ranges from 58 to 29.
## I Introduction
A popular measure of the success of an individual scientist or author (generically called scientist here) has been the Hirsch Index [1] or \(h\)-index, which can be viewed as the fixed point [2] of the non-linear function relating the monotonically decreasing number of publications (\(n_{p}\)) to the increasing number of citations (\(n_{c}\)): \(n_{p}=h=n_{c}\) for the scientist. Mapping the citation function to a combinatorial Fermi one, Yong proposed [3] the relationship
\[h=D_{c}N_{c}^{\alpha_{c}}, \tag{1}\]
with \(D_{c}\simeq 0.54\) and \(\alpha_{c}=1/2\) for any scientist with Hirsch index value \(h\) and total citations \(N_{c}=\sum_{p}n_{c}\) from all his or her publications (denoted by \(p\)), in the limit \(N_{c}\rightarrow\infty\). Several attempts to check the validity of such a relationship between \(h\) and \(N_{c}\) have been made; see e.g., Redner [4] (supporting the relation (1) with the exponent value \(\alpha_{c}=0.5\), from a data analysis for 255 scientists) and Radicchi and Castellano [5] (analysing a much larger data set of 83,897 scientists), who found the best-fit value of the exponent to be \(\alpha_{c}\simeq 0.42\). Ghosh et al. [2] studied the Widom-Stauffer like scaling behavior of the Hirsch index for the fiber bundle as well as percolation models away from the "critical" breaking stress and percolation point, respectively, and proposed
\[h\sim\sqrt{N_{c}}/[logN_{c}], \tag{2}\]
for the citations of individual scientists, giving reasonable agreement with the google scholar data for 1000 scientists (with \(h\)-indices in the range \(17\leq h\leq 221\) and total number of citations \(N_{c}\) in the range \(996\leq Nc\leq 348680\)).
We find here, in each of the six equal-size Groups of twenty thousand top ranking scientists from the Elsevier Stanford c-score list [6; 7] (total one hundred and twenty thousand top cited scientists), that the distributions (frequencies) \(f(h)\), \(f(N_{c})\) and \(f(N_{p})\) of their Hirsch index (\(h\)), total citations (\(N_{c}=\sum_{p}n_{c}\)) and total number of papers (\(N_{p}=\sum_{p}n_{p}\)) all fit very well with the Gamma function form:
\[f(h)\sim h^{\gamma_{h}}[exp(-h/T_{h})], \tag{3a}\] \[f(N_{c})\sim N_{c}^{\gamma_{c}}[exp(-N_{c}/T_{c})], \tag{3b}\] \[f(N_{p})\sim N_{p}^{\gamma_{p}}[exp(-N_{p}/T_{p})], \tag{3c}\]
with the exponent values \(\gamma_{h}\simeq 11.0\), \(\gamma_{c}\simeq 3.0\), \(\gamma_{p}\simeq 2.2\), and the noise levels \(T_{h}\), \(T_{c}\), \(T_{p}\), which depend on the c-score range and generally decrease with decreasing c-score (see Figs. 1, 2 and 3 in the next section on data analysis).
We also calculated the \(h\) values from the corresponding \(N_{c}\) values using the scaling relation (1) with \(D_{c}=0.5\), proposed by Yong [3] in 2014. This gives an extremely good fit of the resulting distributions \(f(h)\) to the directly observed Hirsch index data shown in Figs. 1. Other scaling relations, \(h\sim N_{c}^{0.42}\) suggested by Radicchi and Castellano [5] in 2013, or \(h\sim N_{c}^{0.50}/\log N_{c}\) suggested by Ghosh et al. [2] in 2022, do not give comparably good fits.
In addition, we find the \(T_{h}\) values for each Group (the six c-score ranges I-VI), calculated using the relation
\[T_{h}=h_{av}/(\gamma_{h}+1), \tag{4}\]
Figure 1: Frequency distribution (normalized) for the Hirsch index (\(h\)) of the authors in Groups I to VI, shown in Figs. 1a to 1f. The fitting Gamma functions are also shown.
where \(h_{av}\) denotes the average of \(f(h)\) in each Group, compare very well with the observed values. This relation suggests a strong connection with the Chakraborti-Chakrabarti model [8] of "wealth" distribution, where a fixed saving fraction of the wealth (which determines the exponent value \(\gamma\) in the resulting Gamma distribution of wealth) is retained in each kinetic exchange or interaction (see [9; 10]). One may then consider a similar stochastic dynamics of paper citations, where the fixed fraction of (confident or core group) "citations" (like wealth) retained in each paper-writing (interaction) determines the exponent \(\gamma_{h}\) value and the corresponding noise level \(T_{h}\) in the resulting Gamma distribution \(f(h)\) of the (wealth-like) \(h\)-index. The equivalent wealth conservation may be assumed to come (see e.g., [11]) from the overall constancy (node coordination number) of the citation network, discussed later.
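As an illustration of this fitting procedure, here is a minimal sketch (Python with numpy/scipy; synthetic data standing in for the Scopus set, with Group-I-like parameters from Table 1) showing how the form (3a) maps onto a standard Gamma fit and how relation (4) can be checked:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for the h-indices of one 20,000-author Group.
# scipy's gamma pdf is x^(a-1) exp(-x/scale) / (Gamma(a) scale^a), so the
# form (3a), f(h) ~ h^gamma_h exp(-h/T_h), corresponds to a = gamma_h + 1
# and scale = T_h.
gamma_h, T_h = 11.0, 5.5                       # Group-I-like values (Table 1)
h = rng.gamma(shape=gamma_h + 1.0, scale=T_h, size=20_000)

a_fit, _, scale_fit = stats.gamma.fit(h, floc=0.0)   # location fixed at 0
gamma_fit, T_fit = a_fit - 1.0, scale_fit

# Relation (4): T_h = h_av / (gamma_h + 1), since the Gamma mean is a*scale.
print(f"fitted gamma_h = {gamma_fit:.2f}, T_h = {T_fit:.2f}")
print(f"h_av/(gamma_h + 1) = {h.mean() / (gamma_fit + 1.0):.2f}")
```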
We also found an interesting feature of the citation network. As we mentioned in connection with relation (3c), using the scaling relation
\[h=D_{p}N_{p}^{\alpha_{p}}, \tag{5}\]
with \(\alpha_{p}=0.5\), we get \(D_{p}\simeq 3.8\), 3.4, 3.2, 3.0, 2.8 and 2.7, respectively, for the six c-score Groups I to VI of successively decreasing score. Comparison of the relations (1) and (5) with \(\alpha_{c}=0.5=\alpha_{p}\), and, as discussed above, the best-fit value \(D_{c}=0.5\), suggests that the average citations per paper \(N_{c}/N_{p}\) for any of these scientists is given by \((D_{p}/D_{c})^{2}=4D_{p}^{2}\) (a Dunbar-like [12; 13] effective coordination number of the citation network), which ranges from 58 to 29 depending on the Group (I to VI).
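A least-squares estimate of the prefactors \(D_{c}\) and \(D_{p}\) (with the exponents fixed at \(1/2\)) and of the implied coordination number \(4D_{p}^{2}\) can be sketched as follows; the arrays are synthetic placeholders for one Group's \((h,N_{c},N_{p})\) triples, and the closed-form least-squares step assumes the simple ansatz \(h=D\sqrt{N}\):

```python
import numpy as np

def fit_prefactors(h, n_c, n_p):
    """Least-squares D in h = D * sqrt(N), separately for N_c and N_p
    (exponents fixed at alpha_c = alpha_p = 1/2), plus the implied
    average citations per paper (D_p / D_c)^2."""
    d_c = np.sum(h * np.sqrt(n_c)) / np.sum(n_c)
    d_p = np.sum(h * np.sqrt(n_p)) / np.sum(n_p)
    return d_c, d_p, (d_p / d_c) ** 2

# Synthetic stand-in for one Group, built with ~50 citations per paper.
rng = np.random.default_rng(1)
n_p = rng.integers(30, 1000, size=20_000).astype(float)
n_c = 50.0 * n_p * rng.lognormal(0.0, 0.1, size=20_000)
h = 0.5 * np.sqrt(n_c)

d_c, d_p, coord = fit_prefactors(h, n_c, n_p)
print(f"D_c = {d_c:.2f}, D_p = {d_p:.2f}, N_c/N_p estimate = {coord:.1f}")
```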
Figure 2: Frequency distribution (normalized) of the total number of citations (\(N_{c}\)) of the authors in Groups I to VI, shown in Figs. 2a to 2f. The fitting Gamma functions are also shown.
## II Scopus data analysis
As mentioned already, we analyzed here the Elsevier Scopus [6] data for the Hirsch index \(h\) and the corresponding number \(N_{c}\) of total citations for the 120,000 scientists who came at the top of the Stanford c-score list [7] last year (2022). We divided the set into six equal Groups of 20,000 scientists having c-score rank ranges I [1-20000], II [20001-40000], III [40001-60000], IV [60001-80000], V [80001-100000], and VI [100001-120000]. We observed that for the scientists in each of these ranges, both the \(h\)-index values and the total citation numbers \(N_{c}\) have similar Gamma-like distributions (see Figs. 1 (1a to 1f) and 2 (2a to 2f) for the distributions of \(h\) and of \(N_{c}\), respectively, for the six ranges of c-score ranks mentioned above). We observe (see Figs. 1; 1a to 1f) that the \(h\)-index distribution \(f(h)\) in each of these six score ranges fits very well to the Maxwell-Boltzmann like Gamma function form (3a) with \(\gamma_{h}\simeq 11\), and the effective noise (temperature) decreases with increasing range (from I through VI). In Table 1, we give for each of the six ranges (I-VI) the estimated most probable value of the Hirsch index \(h^{mp}\), its average \(h_{av}\) and the respective noise level or temperature \(T_{h}\).
In Figs. 4 (4a to 4f) we compare the above-mentioned observed distribution \(f(h)\) with those obtained by using Yong's scaling relation (1) with \(\alpha_{c}=0.5\) and the best-fit value \(D_{c}=0.5\), for the six different ranges I to VI of c-score ranks. The overlap seems to be very good and encouraging. In contrast, in the insets of Figs. 4 we compare the same \(h\)-index distributions \(f(h)\) in the six different ranges of c-score ranks with the \(h\) values obtained from the \(N_{c}\) values using relation (1) with \(\alpha_{c}=0.42\) (as observed in [5], mentioned above) and the best-fit value of the prefactor. The level of misfit is obvious.
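The visual overlap of Figs. 4 can also be quantified, for instance with a two-sample Kolmogorov-Smirnov statistic between the observed \(h\) values and those reconstructed from \(N_{c}\) under each scaling ansatz. A sketch (synthetic data; the KS comparison is an illustrative choice of ours, not the method behind Figs. 4):

```python
import numpy as np
from scipy import stats

def ks_misfit(h, n_c, alpha, log_correction=False):
    """KS distance between the observed h sample and h reconstructed from
    N_c (with a least-squares prefactor) for a given exponent alpha."""
    x = n_c**alpha / (np.log(n_c) if log_correction else 1.0)
    d = np.sum(h * x) / np.sum(x * x)
    return stats.ks_2samp(h, d * x).statistic

# Synthetic placeholder arrays for one Group's (h, N_c) pairs.
rng = np.random.default_rng(2)
n_c = rng.gamma(4.0, 3000.0, size=20_000)
h = 0.5 * np.sqrt(n_c) * rng.normal(1.0, 0.03, size=20_000)

for alpha, logc, label in [(0.50, False, "alpha_c = 0.50 (Yong [3])"),
                           (0.42, False, "alpha_c = 0.42 ([5])"),
                           (0.50, True, "alpha_c = 0.50 / log N_c ([2])")]:
    print(f"{label}: KS = {ks_misfit(h, n_c, alpha, logc):.3f}")
```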
Figure 3: Frequency distribution (normalized) of the total number of papers (\(N_{p}\)) of the authors in Groups I to VI, shown in Figs. 3a to 3f. The fitting Gamma functions are also shown.
The same is true when one uses the relation (2) between \(h\) and \(N_{c}\) with the best-fit value (4.5) of the prefactor (as suggested in [2]). Again the distributions of \(h\) and those obtained using relation (2) do not match (see the insets of Figs. 4).
Our analysis (see Figs. 4) for the Hirsch indices and the corresponding values of the total citations (from Scopus data) for the top ranking c-score authors therefore confirms the relation (1) with the exponent \(\alpha_{c}=1/2\), as obtained by Yong [3], given the lack of matches (insets of Figs. 4) with \(\alpha_{c}=0.42\) [5] or with \(\alpha_{c}=1/2\) supplemented by a log correction in relation (1) [2].
An important observation (see Figs. 1) has been the Gamma function form of the distribution of the \(h\) indices for all six (arbitrarily divided) ranges of top scorers. This indicates a Chakraborti-
Figure 4: Frequency distribution for Hirsch index (\(h\)), obtained directly from the data source and that obtained from the number of total citations (\(N_{c}\)) using relation (1) with \(\alpha_{c}=0.5\) (Yong [3]) and \(D_{c}=0.5\). The observed overlaps confirm the relation (1) with \(\alpha_{c}=0.5\). The insets show the same with \(\alpha_{c}=0.42\) [5] (left) and that with \(\alpha_{c}=0.5\) with an inverse \(\log N_{c}\) correction [2] (right). There seem to be considerable mismatches.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline Group & c-score rank & \(\gamma_{h}\) & \(h^{mp}\) & \(h_{av}\) & \(T_{h}\) & \(h_{av}/(\gamma_{h}+1)\) \\ \hline
I & 1-20K & 11.0 & 61.0 & 68.8 & 5.53 & 5.73 \\ \hline
II & 20K-40K & 11.0 & 45.5 & 49.8 & 4.14 & 4.15 \\ \hline
III & 40K-60K & 11.0 & 39.0 & 43.4 & 3.57 & 3.61 \\ \hline
IV & 60K-80K & 11.0 & 35.0 & 39.5 & 3.21 & 3.29 \\ \hline
V & 80K-100K & 11.0 & 32.5 & 36.6 & 2.95 & 3.05 \\ \hline
VI & 100K-120K & 11.0 & 29.9 & 34.5 & 2.76 & 2.88 \\ \hline
\end{tabular}
\end{table}
Table 1: Hirsch index (\(h\)) data fitting parameters obtained from relation (3a).
Figure 5: Frequency distribution of Hirsch index (\(h\)) obtained directly from the data source and those obtained from the number (\(N_{p}\)) of total publications by the authors of Groups I to VI using relation (5) with \(\alpha_{p}=0.5\).
Figure 6: Extrapolated (statistical) value of the prefactor \(D_{p}\) in Eqn. (5) for the top ranking (\(r=1\)) scientist. We plot the values of \(D_{p}\) for different Groups from Table 2 against the middle rank (\(r\)) of the corresponding rank range (the same plot for the prefactor \(D_{c}\) in Eqn. (1) remains a horizontal line).
Chakrabarti type kinetic exchange model [8; 9] of citation dynamics for each new paper, with a random citation sharing fraction over a fixed (saved) fraction of citations of the close-circle papers. This "saving" fraction determines (see e.g., [10]) the exponent \(\gamma_{h}\) in the distribution (3a), and the conservation of the total citations in such "social dynamics" of citations is practically determined by the total publications within the "aging" period (see e.g., [14] and the references therein). Indeed, for such Gamma distributed statistics (3) in the Chakraborti-Chakrabarti kinetic exchange model (with a fixed close-circle citation saving propensity), the analysis of Patriarca et al. [10] suggests the relation (4). This relation fits extremely well: the noise level (temperature) \(T_{h}\) obtained by fitting the Hirsch index distribution data to the form (3a) agrees with \(h_{av}/(\gamma_{h}+1)\) computed from the fitted \(h_{av}\) and \(\gamma_{h}\) values. As mentioned already, this indicates an effective kinetic exchange like stochastic dynamics for citations, where each author retains a fixed share of core-group citations and draws the rest from the literature. The dynamics keep the total citations per paper constant on average (a constant whose value depends only weakly on the c-score rank or the Group).
In fact, the relation (5) fits very well with the data set for each Group with \(\alpha_{p}=0.5\) (see Figs. 5). Combining relations (1) and (5) with \(\alpha_{c}=0.5=\alpha_{p}\) and \(D_{c}=0.5\), one gets the average citations per paper \(N_{c}/N_{p}\) or the average coordination number of the citations network equal to \(4D_{p}^{2}\), which ranges from 58 to 29 (see Table 2). This was observed and reported earlier [13] and can be viewed as an effective Dunbar number [12] for the citations network.
Unlike the fitting value (0.50; see Table 2) of the prefactor \(D_{c}\) in Eqn. (1), which is common to all Groups, the fitting values of the prefactor \(D_{p}(r)\) in Eqn. (5) decrease with increasing rank \(r\) (see Table 2). Fig. 6 gives the extrapolated value of \(D_{p}\) for the top rank (\(r=1\)) to be about 4.31, which gives the limiting value of the citation network coordination number (network average of citations per paper) \(4[D_{p}(r=1)]^{2}\simeq 75\).
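Since the functional form behind the Fig. 6 extrapolation is not spelled out in the text, the following sketch simply illustrates how strongly the extrapolated \(D_{p}(r=1)\) depends on the assumed ansatz; the two naive fits below bracket the quoted value of about 4.31:

```python
import numpy as np

# Middle rank r of each Group (I-VI) and the fitted D_p values of Table 2.
r_mid = np.array([10e3, 30e3, 50e3, 70e3, 90e3, 110e3])
d_p = np.array([3.8, 3.4, 3.2, 3.0, 2.8, 2.7])

# Two naive (hypothetical) extrapolation ansatz choices; neither is claimed
# to be the fit actually used for Fig. 6, whose form is not specified here.
for label, x, x_at_r1 in [("linear in r", r_mid, 1.0),
                          ("linear in log10(r)", np.log10(r_mid), 0.0)]:
    slope, intercept = np.polyfit(x, d_p, deg=1)
    d1 = slope * x_at_r1 + intercept
    print(f"{label}: D_p(r=1) = {d1:.2f}, 4 D_p^2 = {4 * d1**2:.0f}")
```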
## III Summary and Conclusion
We analyse the distributions \(f(h)\), \(f(N_{c})\) and \(f(N_{p})\) of the Hirsch index (\(h\)), total citations (\(N_{c}\)) and total number of papers (\(N_{p}\)) of the top 120,000 scorers (scientists) from the Stanford cite-score (c-score, 2022) list and their corresponding \(h\) (\(3\leq h\leq 284\)), \(N_{c}\) (\(1009\leq N_{c}\leq 428620\)) and \(N_{p}\) (\(3\leq N_{p}\leq 3791\)) from the Scopus data. It may be mentioned that all these authors fall within (indeed are the toppers of) the top 2% scientists in the Stanford cite-score (2022) selection list [6; 7]. We divided the data into six equal Groups (I, II, III, IV, V and VI), each having 20,000 scientists according to their successive c-score ranks. We find that in each Group \(f(h)\), \(f(N_{c})\) and \(f(N_{p})\) fit well with the Gamma function forms (3a), (3b) and (3c) (see Figs. 1, 2, and 3), e.g., \(f(h)\sim h^{\gamma_{h}}[exp(-h/T_{h})]\), with the exponents \(\gamma_{h}\simeq 11.0\), \(\gamma_{c}\simeq 3.0\) and \(\gamma_{p}\simeq 2.2\), and the noise levels \(T_{h}\), \(T_{c}\) and \(T_{p}\) dependent on the c-score range considered. We also calculated the \(h\) values from the corresponding total citation values \(N_{c}\) using the scaling relation (1) \(h=D_{c}N_{c}^{\alpha_{c}}\), and found the best-fit values across all Groups I-VI to be \(D_{c}=0.5\) and \(\alpha_{c}=0.5\) (as the statistical considerations by Yong [3] suggested). This gives an extremely good fit to the distributions \(f(h)\) observed directly from the Hirsch index data (see Figs. 4). Other suggestions, like \(\alpha_{c}\simeq 0.42\) [5] or \(\alpha_{c}=0.5\) with an inverse \(\log N_{c}\) correction term [2], do not give good fits (see the insets of Figs. 4). In addition, we find the \(T_{h}\) values for each of the six c-score ranges fit very well with the relation \(T_{h}=h_{av}/\big{(}\gamma_{h}+1\big{)}\), where \(h_{av}\) is the average of
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline Group & c-score rank & \(\alpha_{c}\) & \(\alpha_{p}\) & \(D_{c}\) (from relation (1)) & \(D_{p}\) (from relation (5)) & \((D_{p}/D_{c})^{2}\) \\ \hline
I & 1-20K & 0.5 & 0.5 & 0.5 & 3.8 & 58 \\ \hline
II & 20K-40K & 0.5 & 0.5 & 0.5 & 3.4 & 46 \\ \hline
III & 40K-60K & 0.5 & 0.5 & 0.5 & 3.2 & 41 \\ \hline
IV & 60K-80K & 0.5 & 0.5 & 0.5 & 3.0 & 36 \\ \hline
V & 80K-100K & 0.5 & 0.5 & 0.5 & 2.8 & 31 \\ \hline
VI & 100K-120K & 0.5 & 0.5 & 0.5 & 2.7 & 29 \\ \hline
\end{tabular}
\end{table}
Table 2: Fitting parameters for the total number of citations (\(N_{c}\)) and the total number of papers (\(N_{p}\)) obtained using the relations (1) and (5).
\(f(h)\) in each Group. This compares very well with the Chakraborti-Chakrabarti model [8; 10; 11] of "wealth" distribution, where a fixed saving fraction of the wealth (which determines the value of the exponent \(\gamma\) in the Gamma distribution) is retained in each kinetic exchange or interaction, suggesting a similar stochastic dynamics of paper citations, where the fixed fraction of (confident or core Group) "citations" (wealth) retained in each paper-writing (interaction) determines the exponent \(\gamma_{h}\) value and the corresponding noise level \(T_{h}\) in \(f(h)\). We also observe an interesting feature of the citation network. The observation (relations (1) and (5)) that \(h=D_{c}N_{c}^{\alpha_{c}}=D_{p}N_{p}^{\alpha_{p}}\), with \(\alpha_{c}=\alpha_{p}=0.5\), \(D_{c}=0.5\) and \(2.7\leq D_{p}\leq 3.8\) depending on the Group, suggests that the average citations per paper, \(N_{c}/N_{p}=(D_{p}/D_{c})^{2}=4D_{p}^{2}\), shown in Table 2 (a Dunbar-like effective network coordination number [12; 13]), ranges from 58 to 29 depending on the Group (I to VI). As discussed at the end of the last section (see Fig. 6), the limiting value of this citation-network coordination number (network average of citations per paper) gets extrapolated to about 75.
## Acknowledgement
We are thankful to Soumyajyoti Biswas and Parongama Sen for several useful comments on the manuscript. BKC is grateful to the Indian National Science Academy for their Senior Scientist Research Grant.
|
2310.10738 | On running coupling in the JIMWLK evolution and its Langevin formulation | Various conventional running coupling prescriptions reproducing
$\beta_0$-dependent terms of NLO JIMWLK are reviewed and found to be theoretically
inconsistent: the JIMWLK evolution Hamiltonian with running coupling violates
the requirement of positive semidefiniteness. This requirement appears to be
tightly related to the possibility of having a Langevin formulation for the
evolution. We also review the scheme that attributes a part of
$\beta_0$-dependent terms to the DGLAP evolution of the projectile. The
remaining $\beta_0$-dependent contributions sum up into the so-called
``daughter dipole'' prescription, which leads to a manifestly positive
semidefinite Hamiltonian. | Tolga Altinoluk, Guillaume Beuf, Michael Lublinsky, Vladimir V. Skokov | 2023-10-16T18:10:01Z | http://arxiv.org/abs/2310.10738v2 | # On running coupling in the JIMWLK evolution and its Langevin formulation
###### Abstract
Various running coupling prescriptions for the BK-JIMWLK evolution are reviewed and found to be theoretically inconsistent: the evolution Hamiltonian with running coupling violates the requirement of positive semidefiniteness. This requirement appears to be tightly related to the possibility of having a Langevin formulation for the evolution.
## 1 Introduction
High energy evolution of a hadronic observable \(\mathcal{O}\) in dilute-on-dense scattering (DIS-like processes) is governed by the JIMWLK equation of the form
\[\dot{\mathcal{O}}\,=\,-\,H^{\rm JIMWLK}\,\mathcal{O}\,, \tag{1}\]
which is a non-linear functional generalization of the BFKL equation [1; 2; 3].
The JIMWLK evolution [4; 5; 6; 7; 8], or mainly its large \(N_{c}\)/mean field version, the Balitsky-Kovchegov (BK) equation [9; 10; 11; 12], was instrumental in the phenomenology of high-energy QCD. Relaxing the large \(N_{c}\)/mean field approximation turns out to be very challenging. One is then forced to simulate the entire JIMWLK Hamiltonian, meaning simulating a 2+1 dimensional non-local QFT. So far, the stochastic formulation of the Leading Order (LO) JIMWLK in terms of Langevin equation [13; 14] is the only known approach to this computational problem. The approach is valid only for fixed coupling constant or for
very specific running coupling prescriptions [15; 16]. This is a severe limitation for any reliable phenomenology. Due to this issue, among others, most phenomenological studies involving the JIMWLK equation have so far been restricted to LO accuracy. Yet, in order to reveal gluon saturation effects, it is critical to improve the precision of the theoretical calculations by including higher order contributions. A lot of progress has been achieved in this direction over the last fifteen years. Particularly, the NLO formulation of both the BK equation [12] and the JIMWLK Hamiltonian [17; 18] (including the contribution due to quark masses [19]) became available, enabling the start of the era of NLO phenomenology [20; 21; 22; 23] based on the BK equation. However, beyond LO, the JIMWLK Hamiltonian does not seem to admit any simple stochastic formulation. Consequently, there have been no phenomenological applications of the full NLO JIMWLK Hamiltonian.
Despite the theoretical progress, bringing all the calculations relevant to DIS systematically to NLO accuracy remains a major challenge. It was discovered that at NLO there emerge conceptual problems related to large energy-independent new logarithmic corrections, threatening the stability of the \(\alpha_{s}\) expansion, and consequently requiring additional resummations. These logarithms can be divided into three groups. The first one contains large logs related to time ordering between gluon emissions. A lot of effort has been invested over the years in attempting to impose time ordering via various kinematical constraints at the level of the BFKL, BK and even JIMWLK equations [24; 25; 26; 27; 28; 29; 30; 31]. While resumming these logarithms is critical in order to gain full control over the calculations and achieve the desired NLO accuracy, this research direction is largely irrelevant to our goals below and hence will not be reviewed in any further detail here.
The second group contains UV logs proportional to \(\beta_{0}\), the one loop coefficient of the QCD \(\beta\)-function, which are naturally resummed into the running of the coupling constant \(\alpha_{s}\). Finally, there are potentially large logs associated with the DGLAP evolution of either the projectile or the target (see e.g. [24]). More precisely, they are induced by the non-singular part of the DGLAP splitting functions at low x. As has been realized in [32], at the level of the JIMWLK equation, the logarithms associated with the DGLAP evolution of the projectile are proportional to \(\beta_{0}\), and can be confused with the ones associated with running coupling. In Ref. [32], a scheme was then introduced in order to disentangle the running coupling logs and these DGLAP logs, and to resum each of them in a consistent way at the level of the JIMWLK Hamiltonian.
The main focus of this paper is on the implementation of running coupling in the JIMWLK equation. The inclusion of running coupling effects is known to be important from the phenomenological perspective: as mentioned, the running coupling BK (rcBK) equation has been crucial in most phenomenological studies done so far [22; 23; 29; 33; 34; 35; 36; 37; 38; 39].
There exist several running coupling prescriptions, which have been inferred from the calculation of NLO QCD diagrams. One prescription for rcBK is due to Balitsky [40] and another one, written both for rcBK and rcJIMWLK, is due to Kovchegov and Weigert (KW) [41] (see also [42]). Nevertheless, the Balitsky prescription for rcBK can be uplifted to a prescription for rcJIMWLK as well. However, as we demonstrate below, these two rcJIMWLK Hamiltonians turn out to violate _positive semidefiniteness_. This means that their spectra have negative eigenvalues. For some particular observables this would lead
to a wrong sign of the energy evolution. Instead, a fully consistent high energy evolution Hamiltonian should have a non-negative spectrum. Violation of positive semidefiniteness, as is observed both in the Balitsky and KW running coupling schemes, would lead to non-physical results for the solution of the corresponding evolution equation.
As will be explained later, the property of positive semidefiniteness of the Hamiltonian is tightly connected with the existence of a Langevin formulation of the evolution. A couple of running coupling prescriptions have been proposed in the literature for the Langevin formulation of JIMWLK, see e.g. [15; 16]. They automatically correspond to positive semidefinite rcJIMWLK kernels. However, when expanded at NLO accuracy, none of these prescriptions reproduce the terms commonly interpreted as running coupling corrections in the NLO JIMWLK Hamiltonian, following Refs. [40] and [41].
Our goal in this paper is to review and assess various running coupling prescriptions. Ideally, a prescription for rcJIMWLK would lead to a positive semidefinite Hamiltonian, which would also be compatible with fixed-order NLO results for the JIMWLK Hamiltonian. For such a positive semidefinite rcJIMWLK Hamiltonian, we will explain how to obtain a Langevin formulation suitable for numerical simulations. We find the running coupling prescriptions from Refs. [40] and [41], as well as all the ones related to them that we could imagine, to lead to a non positive semidefinite rcJIMWLK, which is thus unsuitable.
In Ref. [32], a different choice of basis of color operators was used to write down the NLO JIMWLK Hamiltonian compared to Refs. [40] and [41], in order to disentangle running coupling logs and DGLAP logs, as mentioned above. This results in a different NLO contribution interpreted as running coupling correction than in the schemes of Refs. [40] and [41]. In the scheme of Ref. [32], the obtained running coupling correction at NLO leads by resummation to the daughter dipole prescription of Ref. [15], corresponding to the replacement
\[\alpha_{s}\mapsto\sqrt{\alpha_{s}(X^{2})}\,\sqrt{\alpha_{s}(Y^{2})} \tag{2}\]
in the LO JIMWLK Hamiltonian, where \(|X|\) (\(|Y|\)) is the transverse size of the emitter-gluon pair in the wave-function (conjugate wave-function). The daughter dipole prescription (2) is thus so far the only known running coupling prescription which is both compatible with the Langevin formulation of JIMWLK and consistent with NLO calculations. It should thus be the preferred running coupling prescription for JIMWLK in future applications. This also provides further motivation for the resummation scheme of Ref. [32] for JIMWLK.
The paper is structured as follows. In Section 2, we review the stochastic reformulation of the LO JIMWLK evolution. In Section 3, we discuss formal properties that should be obeyed by the JIMWLK evolution with a consistent running coupling prescription. In Section 4, we discuss various running coupling prescriptions available in the literature. Section 5 presents a concise discussion of our analysis. Some technical details are presented in Appendix A and B.
## 2 Langevin equation for fixed coupling LO JIMWLK evolution
In this section, we review the LO JIMWLK evolution and its stochastic formulation. The Hamiltonian for the LO JIMWLK evolution (see Ref. [43], see [17; 18; 44]) can be written in the following form
\[H_{\rm LO}=\frac{\alpha_{s}}{2}\int_{z}Q_{i}^{a}(z)Q_{i}^{a}(z) \tag{1}\]
where \(Q_{i}^{a}(z)\) has the interpretation of a single gluon emission operator1
Footnote 1: Throughout the manuscript, we use \(\int_{z}\equiv\int d^{2}z\) and \(\int_{k}\equiv\int\frac{d^{2}k}{(2\pi)^{2}}\) as shorthand notation for the coordinate and momentum space integrals over the transverse direction.
\[Q_{i}^{a}(z)=\frac{1}{\pi}\int_{x}K_{i}(x-z)\,q^{a}(x,z)\,,\qquad\quad q^{a}(x,z)=\left[U(x)-U(z)\right]^{ab}J_{R}^{b}(x). \tag{2}\]
The Weizsacker-Williams (WW) field appearing in the single gluon emission operator is
\[K_{i}(x)=\frac{x_{i}}{x^{2}}\,. \tag{3}\]
Additionally, \(U(x)\) is the Wilson line along the light cone in the adjoint representation at the transverse position \(x\). The fundamental Wilson line, \(V(x)\), is related to the adjoint one through
\[U^{ab}(x)=2\,{\rm tr}\left[V^{\dagger}(x)t^{a}V(x)t^{b}\right]\,.\]
Here, \(t_{a}\) are SU(N) generators in the fundamental representation. The operators \(J_{R}^{b}(x)\) form a SU(N) algebra and act on the fundamental and adjoint Wilson lines as right color rotations2
Footnote 2: Left rotations could be introduced similarly but are not required in this manuscript.
\[J_{R}^{a}(z)V(x) =\delta^{(2)}(x-z)V(x)t^{a}, J_{R}^{a}(z)V^{\dagger}(x) =-\delta^{(2)}(x-z)t^{a}V^{\dagger}(x) \tag{4}\] \[J_{R}^{a}(z)U(x) =\delta^{(2)}(x-z)U(x)T^{a}, J_{R}^{a}(z)U^{\dagger}(x) =-\delta^{(2)}(x-z)T^{a}U^{\dagger}(x)\,, \tag{5}\]
where \(T^{a}\) are SU(N) generators in the adjoint representations.
As a remark, note that the JIMWLK Hamiltonian (1) can be rewritten as
\[H_{\rm LO}=\int_{x,y,z}\mathcal{K}^{\rm LO}(x,y,z)\,q^{a}(x,z)\,q^{a}(y,z)\,, \tag{6}\]
with the kernel
\[\mathcal{K}^{\rm LO}(x,y,z)=\frac{\alpha_{s}}{2\pi^{2}}\,K_{i}(x-z)\,K_{i}(y- z)=\frac{\alpha_{s}}{2\pi^{2}}\,\frac{(x-z)_{i}}{(x-z)^{2}}\,\frac{(y-z)_{i}}{(y -z)^{2}}. \tag{7}\]
Considering the rapidity variable \(\eta\) as a fictitious time, the rapidity evolution operator from \(\eta_{0}\) to \(\eta_{1}\) is
\[\mathcal{U}(\eta_{0},\eta_{1})=\mathcal{P}e^{-\int_{\eta_{0}}^{\eta_{1}}d\eta H _{\rm LO}}\,. \tag{8}\]
It was demonstrated in Refs. [13; 14] that the evolution generated by (8) is equivalent to a stochastic process of the Langevin type. Indeed, in its standard formulation the
JIMWLK equation is a functional Fokker-Planck equation. As is well known from statistical physics, there is a correspondence between Fokker-Planck and Langevin equations. The JIMWLK equation is yet another example of this correspondence, but for functional equations.
The success of the Langevin reformulation lies in the observation that \(H_{\rm LO}\) is quadratic in the operator \(Q\). In the spirit of Hubbard-Stratonovich transformation, to make the evolution linear with respect to \(Q\), we introduce a local two-dimensional auxiliary vector field \(\xi^{a}_{i}(\eta,x)\)3,
Footnote 3: Here and in the rest of the paper we will systematically ignore the Gaussian normalization constant, which is nevertheless always there.
\[\mathcal{U}(\eta_{0},\eta_{1}) =\mathcal{P}_{\eta}\exp\biggl{\{}-\int_{\eta_{0}}^{\eta_{1}}d \eta H_{\rm LO}\biggr{\}}\] \[= \int D\xi\,\mathcal{P}_{\eta}\,\exp\biggl{\{}\int_{\eta_{0}}^{ \eta_{1}}d\eta\int_{z}\left[-i\sqrt{\alpha_{s}}Q^{a}_{i}(z)\xi^{a}_{i}(\eta,z )-\frac{1}{2}\vec{\xi}^{2}(\eta,z)\right]\biggr{\}}\] \[= \int D\xi\,\,\mathcal{U}_{\xi}(\eta_{0},\eta_{1})\,e^{-\int_{ \eta_{0}}^{\eta_{1}}d\eta\int_{z}\frac{1}{2}\vec{\xi}^{2}(\eta,z)}\,, \tag{9}\]
where \(\mathcal{P}_{\eta}\) corresponds to ordering of the \(Q\) operators along \(\eta\). Here, following the ideas borrowed from stochastic quantization, we introduce an evolution operator \(\mathcal{U}_{\xi}\) for a fixed configuration of the field \(\xi^{a}_{i}(\eta,x)\),
\[\mathcal{U}_{\xi}(\eta_{0},\eta_{1})=\mathcal{P}_{\eta}\,\exp\biggl{\{}-i\int_ {\eta_{0}}^{\eta_{1}}d\eta\int_{z}\sqrt{\alpha_{s}}Q^{a}_{i}(z)\xi^{a}_{i}( \eta,z)\biggr{\}}\,. \tag{10}\]
Then for an infinitesimally small rapidity interval \(\Delta\),
\[\mathcal{U}_{\xi}\equiv\mathcal{U}_{\xi}(\eta,\eta+\Delta)=\exp\biggl{\{}-i \Delta\int_{z}\sqrt{\alpha_{s}}Q^{a}_{i}(z)\xi^{a}_{i}(\eta,z)\biggr{\}}\,. \tag{11}\]
From Eq. (9), we see that the field \(\xi\) is a random Gaussian field. For a stochastic process with Gaussian noise, in order to account for all the effects linear in \(\Delta\), the evolution operator has to be expanded to the second order. This is simply due to the fact that
\[\langle\xi^{a}_{i}(\eta,x)\xi^{b}_{j}(\eta^{\prime},x^{\prime})\rangle=\delta_ {ij}\delta^{ab}\delta(\eta-\eta^{\prime})\delta^{(2)}(x-x^{\prime}) \tag{12}\]
which, after discretization in rapidity \(\eta_{n}=\eta_{0}+n\Delta\), reads
\[\langle\xi^{a,i}_{n}(x)\xi^{b,j}_{n^{\prime}}(x^{\prime})\rangle=\frac{1}{ \Delta}\delta^{ij}\delta^{ab}\delta_{nn^{\prime}}\delta^{(2)}(x-x^{\prime})\,. \tag{13}\]
Thus, it is convenient to rescale the noise variable as \(\hat{\xi}^{a\,i}_{n}(x)=\sqrt{\Delta}\xi^{a\,i}_{n}(x)\), with a variance that is independent of the step size \(\Delta\). In what follows, we drop the hat in the notation of the field \(\hat{\xi}\). When expanded to linear order in \(\Delta\), the evolution operator reads
\[\mathcal{U}_{\xi} =\exp\biggl{\{}-i\sqrt{\alpha_{s}\Delta}\int_{z}Q^{a}_{i}(z)\xi^{ a,i}_{n}(z)\biggr{\}}\] \[\approx 1-i\sqrt{\alpha_{s}\Delta}\int_{z}\xi^{a,i}_{n}(z)Q^{a}_{i}(z)- \frac{1}{2}\alpha_{s}\Delta\int_{z}\int_{z^{\prime}}\xi^{a,i}_{n}(z)\xi^{b,j} _{n}(z^{\prime})Q^{a}_{i}(z)Q^{b}_{j}(z^{\prime})\,. \tag{14}\]
Note that if the Gaussian integration over \(\xi\) is performed, the linear term in \(\xi\) drops while the quadratic term recovers the LO JIMWLK Hamiltonian as expected.
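A quick numerical sanity check of the discretized noise statistics (13) and of the rescaling \(\hat{\xi}=\sqrt{\Delta}\,\xi\) (a minimal sketch; the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of the discretized statistics (13): the raw noise has variance
# 1/Delta per site, while the rescaled noise xi_hat = sqrt(Delta) * xi
# has unit variance, independent of the step size.
delta, n_samples, n_sites = 0.01, 200_000, 4
xi = rng.normal(0.0, 1.0 / np.sqrt(delta), size=(n_samples, n_sites))
xi_hat = np.sqrt(delta) * xi

cov = (xi_hat[:, :, None] * xi_hat[:, None, :]).mean(axis=0)
print(np.round(cov, 2))   # ~ identity: delta^{ij} delta_{xx'} with unit variance
```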
Rapidity evolution of any operator that is built out of Wilson lines, such as a color dipole, is performed in two steps: first one computes evolution of the Wilson lines on a fixed configuration of the noise and then averages the operator over the noise. Thus, one can formulate the evolution of any operator by just tracking the evolution of a single fundamental Wilson line. The action of the \(Q\) operator on a fundamental Wilson line \(V\) can be written as
\[Q_{i}^{a}(z)V(x) =\frac{1}{\pi}K_{i}(x-z)[U(x)-U(z)]^{ab}V(x)t^{b}\] \[=\frac{1}{\pi}K_{i}(x-z)V(x)\left[V^{\dagger}(x)t^{a}V(x)-V^{ \dagger}(z)t^{a}V(z)\right]\] \[=\frac{1}{\pi}K_{i}(x-z)\left[t^{a}-V(x)V^{\dagger}(z)t^{a}V(z)V^ {\dagger}(x)\right]V(x)\,, \tag{15}\]
where the adjoint Wilson lines are written in terms of the fundamental ones. Then, one step of the evolution of \(V\) is obtained by substituting Eq. (15) into Eq. (14),
\[\mathcal{U}_{\xi}V(x)=V(x)-i\frac{\sqrt{\alpha_{s}\Delta}}{\pi}\int_{z}K_{i}(x-z)\left[\xi_{n}^{i}(z)V(x)-V(x)V^{\dagger}(z)\xi_{n}^{i}(z)V(z)\right]\\ -\frac{\alpha_{s}\Delta}{2\pi^{2}}\int_{z}\int_{z^{\prime}}K_{i}(x-z)K_{j}(x-z^{\prime})\xi_{n}^{a,i}(z)\xi_{n}^{b,j}(z^{\prime})\\ \times\left[t^{a}t^{b}V(x)-t^{a}V(x)V^{\dagger}(z^{\prime})t^{b}V(z^{\prime})-t^{b}V(x)V^{\dagger}(z)t^{a}V(z)\right.\\ \left.+V(x)V^{\dagger}(z^{\prime})t^{b}V(z^{\prime})V^{\dagger}(z)t^{a}V(z)\right]\\ -\frac{\alpha_{s}\Delta}{2\pi^{2}}\int_{z}\int_{z^{\prime}}K_{i}(x-z)K_{j}(z-z^{\prime})\xi_{n}^{a,i}(z)\xi_{n}^{b,j}(z^{\prime})\\ \times\left[V(x)V^{\dagger}(z)[t^{b},t^{a}]_{-}V(z)+V(x)[V^{\dagger}(z)t^{a}V(z),V^{\dagger}(z^{\prime})t^{b}V(z^{\prime})]_{-}\right] \tag{16}\]
where we denoted the commutators by \([A,B]_{-}\) to make the notation more transparent. Additionally, we introduced \(\xi_{n}^{i}(x)=t^{a}\xi_{n}^{a,i}(x)\). The commutator terms could be ignored when averaged over the noise, since the latter produces a function symmetric in color, \(\langle\xi^{a}\xi^{b}\rangle\sim\delta_{ab}\), while any correlation of \(\xi\) with any Wilson line will lead to contributions of order higher than linear in \(\Delta\). With this nuance out of the way, we have
\[\mathcal{U}_{\xi}V(x) =V(x)-i\frac{\sqrt{\alpha_{s}\Delta}}{\pi}\int_{z}K_{i}(x-z)\left[\xi_{i}(z)V(x)-V(x)V^{\dagger}(z)\xi_{i}(z)V(z)\right]\] \[-\frac{\alpha_{s}\Delta}{2\pi^{2}}\int_{z}\int_{z^{\prime}}K_{i}(x-z)K_{j}(x-z^{\prime})\xi_{n}^{a,i}(z)\xi_{n}^{b,j}(z^{\prime})\] \[\quad\times\left[t^{a}t^{b}V(x)-t^{a}V(x)V^{\dagger}(z^{\prime})t^{b}V(z^{\prime})-t^{b}V(x)V^{\dagger}(z)t^{a}V(z)\right.\] \[\qquad\qquad\left.+V(x)V^{\dagger}(z^{\prime})t^{b}V(z^{\prime})V^{\dagger}(z)t^{a}V(z)\right] \tag{17}\]
or simply
\[\mathcal{U}_{\xi}V(x) =V(x)-i\frac{\sqrt{\alpha_{s}\Delta}}{\pi}\int_{z}K_{i}(x-z)\left[\xi_{i}(z)V(x)-V(x)V^{\dagger}(z)\xi_{i}(z)V(z)\right]\] \[-\frac{\alpha_{s}\Delta}{2\pi^{2}}\int_{z}\int_{z^{\prime}}K_{i}(x-z)K_{j}(x-z^{\prime})\xi_{i}^{a}(z)\xi_{j}^{b}(z^{\prime})\] \[\quad\times\left[t^{a}t^{b}V(x)-2t^{a}V(x)V^{\dagger}(z^{\prime})t^{b}V(z^{\prime})+V(x)V^{\dagger}(z^{\prime})t^{b}V(z^{\prime})V^{\dagger}(z)t^{a}V(z)\right]. \tag{18}\]
It is straightforward to show that this expression can be exponentiated into
\[\mathcal{U}_{\xi}V(x) =\exp\Bigg{(}-i\frac{\sqrt{\alpha_{s}\Delta}}{\pi}\int_{z}\vec{K} (x-z)\cdot\vec{\xi}_{n}(z)\Bigg{)}\] \[\quad\times V(x)\exp\left(i\frac{\sqrt{\alpha_{s}\Delta}}{\pi} \int_{z}\vec{K}(x-z)\cdot(V^{\dagger}(z)\vec{\xi}_{n}(z)V(z))\right). \tag{19}\]
The main advantage of this re-exponentiated expression compared to (18) is that it explicitly preserves unitarity of the Wilson line at each step of the evolution. To reproduce the standard form of JIMWLK, one has to perform the rotation of the noise, see Ref. [16],
\[\tilde{\xi}(z)=V^{\dagger}(z)\xi(z)V(z) \tag{20}\]
which leads to (renaming the noise again as \(\xi\))
\[\mathcal{U}_{\xi}V(x) =\exp\Bigg{(}-i\frac{\sqrt{\alpha_{s}\Delta}}{\pi}\int_{z}\vec{K} (x-z)\cdot(V(z)\vec{\xi}(z)V^{\dagger}(z))\Bigg{)}V(x)\] \[\quad\times\exp\Bigg{(}i\frac{\sqrt{\alpha_{s}\Delta}}{\pi}\int_{ z}\vec{K}(x-z)\cdot\vec{\xi}(z)\Bigg{)}\,. \tag{21}\]
Note that the rotation does not change the variance of the noise. To obtain (20), the coupling of the noise to the operator \(Q\) could be written in a slightly different but equivalent way
\[\mathcal{U}(\eta_{0},\eta_{1})\,=\,\int D\xi\;\mathcal{P}_{\eta}\,e^{\int_{ \eta_{0}}^{\eta_{1}}d\eta\int_{z}[-i\sqrt{\alpha_{s}}Q_{i}^{a}(z)U^{ab}(z) \xi^{b,i}(\eta,z)-\frac{1}{2}\vec{\xi}^{2}(\eta,z)]}\,. \tag{22}\]
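For concreteness, here is a minimal numerical sketch of one update step (21) on a small transverse lattice (Python with numpy/scipy). Lattice size, spacing, coupling and rapidity step are placeholder choices; the \(z=x\) singularity of the WW field is simply skipped, and a production code would in addition need proper UV/IR regularization and boundary conditions:

```python
import numpy as np
from scipy.linalg import expm

# SU(3) generators t^a = lambda^a / 2 in the Gell-Mann basis.
def su3_generators():
    l = np.zeros((8, 3, 3), dtype=complex)
    l[0][0, 1] = l[0][1, 0] = 1
    l[1][0, 1], l[1][1, 0] = -1j, 1j
    l[2][0, 0], l[2][1, 1] = 1, -1
    l[3][0, 2] = l[3][2, 0] = 1
    l[4][0, 2], l[4][2, 0] = -1j, 1j
    l[5][1, 2] = l[5][2, 1] = 1
    l[6][1, 2], l[6][2, 1] = -1j, 1j
    l[7][0, 0] = l[7][1, 1] = 1 / np.sqrt(3)
    l[7][2, 2] = -2 / np.sqrt(3)
    return 0.5 * l

T = su3_generators()                  # shape (8, 3, 3)
L, A = 8, 1.0                         # lattice points per side; spacing set to 1,
                                      # so delta-function normalizations are trivial
ALPHA_S, DELTA = 0.2, 1e-3            # placeholder coupling and rapidity step

def ww(x, z):
    """Weizsacker-Williams field K_i(x-z) = (x-z)_i / (x-z)^2, Eq. (3)."""
    d = np.asarray(x, float) - np.asarray(z, float)
    d2 = d @ d
    return d / d2 if d2 > 0 else np.zeros(2)

def langevin_step(V, rng):
    """One update of Eq. (21) at every site (the z = x term is skipped)."""
    xi = rng.normal(size=(L, L, 8, 2))                 # xi^{a,i}(z), unit variance
    c = np.sqrt(ALPHA_S * DELTA) / np.pi * A**2
    V_new = np.empty_like(V)
    for x in np.ndindex(L, L):
        left = np.zeros((3, 3), complex)
        right = np.zeros((3, 3), complex)
        for z in np.ndindex(L, L):
            K = ww(x, z)
            if not K.any():
                continue
            xi_mat = np.einsum('ai,abc->ibc', xi[z], T)    # t^a xi^{a,i}(z)
            rot = V[z] @ xi_mat @ V[z].conj().T            # V(z) xi(z) V^dagger(z)
            left += K[0] * rot[0] + K[1] * rot[1]
            right += K[0] * xi_mat[0] + K[1] * xi_mat[1]
        # V(x) -> exp(-i c L) V(x) exp(+i c R): manifestly unitary.
        V_new[x] = expm(-1j * c * left) @ V[x] @ expm(1j * c * right)
    return V_new

rng = np.random.default_rng(0)
V = np.broadcast_to(np.eye(3, dtype=complex), (L, L, 3, 3)).copy()
V = langevin_step(V, rng)
print("unitarity violation:", np.abs(V[0, 0] @ V[0, 0].conj().T - np.eye(3)).max())
```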
## 3 Formal properties of JIMWLK with running coupling
Having reviewed the Langevin formalism for the fixed coupling in the previous section, we will focus on the running coupling case from now on. In this section, we will discuss general properties that should be obeyed by the JIMWLK evolution with a consistent prescription for running coupling. In the next section, we will instead study specific running coupling prescriptions one by one.
### JIMWLK and BK beyond fixed coupling
At NLO, new color structures appear in the JIMWLK Hamiltonian. But only NLO corrections involving the same color structure as the LO kernel can be interpreted as running coupling corrections to the LO JIMWLK kernel. Thus, any running coupling prescription for the improvement of the LO JIMWLK kernel can be written as
\[H=\int_{x,y,z}\mathcal{K}(x,y,z)\,q^{a}(x,z)\,q^{a}(y,z)\,, \tag{10}\]
with the same color operator \(q^{a}(x,z)\) as defined in Eq. (2), and some yet unspecified kernel \(\mathcal{K}(x,y,z)\) which encodes the running coupling effects. This kernel should reduce to the LO kernel (7) in the fixed coupling case. When expanded to the second order in \(\alpha_{s}\), it should also reproduce the terms identified as running coupling terms within the NLO JIMWLK Hamiltonian, as will be discussed in the next section.
Given that a lot of discussion in the literature is focused on the BK equation, it is useful to recall the connection between the JIMWLK and BK evolutions, with either fixed or running coupling. Consider the action of the JIMWLK Hamiltonian (10) on a color dipole \(S(u,v)=\mathrm{tr}\big{[}V(u)V^{\dagger}(v)\big{]}/N_{c}\). It is straightforward to obtain (see [44]) the BK equation
\[H\,S(u,v) = N_{c}\int_{z}K_{\mathrm{dipole}}(u,v,z)\,\left[S(u,z)S(z,v)-S(u, v)\right]\,, \tag{11}\] \[K_{\mathrm{dipole}}(u,v,z) \equiv \mathcal{K}(u,u,z)+\mathcal{K}(v,v,z)-\mathcal{K}(u,v,z)- \mathcal{K}(v,u,z)\,. \tag{12}\]
The relation between the kernels can be inverted to express \(\mathcal{K}\) in terms of \(K_{\mathrm{dipole}}\) as
\[\mathcal{K}(u,v,z)=-\frac{1}{2}K_{\mathrm{dipole}}(u,v,z)+f(u-z)+f(v-z) \tag{13}\]
As this equation demonstrates, this inversion is not unique: it includes an arbitrary function \(f\) of one transverse vector variable. This reflects the fact that the JIMWLK evolution is more general and encodes more information than the BK evolution. At LO, with fixed coupling, the relation between the JIMWLK and BK kernels is given by Eqs. (11) and (13) with \(f(X)=\frac{\alpha_{s}}{4\pi^{2}}\frac{1}{X^{2}}\).
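A one-line numerical consistency check of the inversion (13) at LO, with the \(f(X)\) just quoted, can be sketched as follows (coupling value is illustrative):

```python
import numpy as np

ALPHA_S = 0.2   # illustrative fixed coupling

def k_lo(X, Y):
    """LO JIMWLK kernel (7), with X = x - z and Y = y - z."""
    return ALPHA_S / (2 * np.pi**2) * (X @ Y) / ((X @ X) * (Y @ Y))

def k_dipole(X, Y):
    """Dipole kernel (12) built from the LO kernel."""
    return k_lo(X, X) + k_lo(Y, Y) - k_lo(X, Y) - k_lo(Y, X)

def f(X):
    return ALPHA_S / (4 * np.pi**2) / (X @ X)

rng = np.random.default_rng(0)
X, Y = rng.normal(size=2), rng.normal(size=2)
lhs = k_lo(X, Y)
rhs = -0.5 * k_dipole(X, Y) + f(X) + f(Y)   # Eq. (13)
print(abs(lhs - rhs))                        # zero up to rounding
```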
### Positive semidefiniteness of the JIMWLK kernel
The Hamiltonian \(H\) must have all its eigenvalues non-negative. Otherwise, if the Hamiltonian has a negative eigenvalue, the relevant eigenstate will evolve with energy into a vacuum-like state that does not scatter.
An alternative take on the JIMWLK evolution is through its Fokker-Planck form (see e.g. Ref. [13]), in which the energy evolution is viewed as a diffusion process,
\[\partial_{\eta}\mathcal{O}[\alpha]= \frac{1}{2}\int d^{2}xd^{2}y\frac{\partial}{\partial\alpha^{a}(x ^{-},x)}\left(\eta^{ab}(x,y)\frac{\partial}{\partial\alpha^{b}(y^{-},y)} \mathcal{O}[\alpha]\right) \tag{14}\]
with
\[\eta^{ab}(x,y)=\int d^{2}z\mathcal{K}(x,y,z)\left[(U(x)-U(z))(U^{\dagger}(y)-U ^{\dagger}(z))\right]^{ab} \tag{15}\]
Here the \(\alpha\)'s are phases of the Wilson lines. For Eq. (14) to have the interpretation of a diffusion process, the diffusion "coefficient" has to be positive semidefinite. This must be true for an arbitrary target (that is, the configuration of the Wilson lines \(U\)) and an arbitrary projectile (which can be thought of as a configuration of \(\frac{\delta}{\delta\alpha}\)). This is equivalent to the positive semidefiniteness of the kernel \({\cal K}(x,y,z)\). Hence below we will focus on the latter.
The property of positive semidefiniteness of the JIMWLK kernel is defined as follows. Regarded as an infinite matrix, the kernel4\({\cal K}(x,y,z)={\cal K}(X\equiv x-z,Y\equiv y-z)\) is positive semidefinite if
Footnote 4: Any kernel \({\cal K}(x,y,z)\) consistent with translational invariance can be written as a function of \(X\) and \(Y\) only.
\[\int d^{2}Xd^{2}Y\,\phi(X)\,{\cal K}(X,Y)\,\phi(Y)\,\geq\,0 \tag{12}\]
for any function \(\phi(X)\). This property is necessary in order to avoid run-away/unstable solutions of the JIMWLK evolution Eq. (1). For that reason, any kernel \({\cal K}(x,y,z)\) violating positive semidefiniteness (12) should be considered as unphysical. Moreover, that property is crucial in the derivation of the Langevin formulation, as will be discussed in the next subsection.
In order to explore consequences of the positive semidefiniteness property (12), it is convenient to restrict ourselves to trial functions of the form
\[\phi(X)=A_{1}\delta(X-X_{1})+A_{2}\delta(X-X_{2})\,. \tag{13}\]
Then, the inequality (12) yields
\[A_{1}^{2}{\cal K}(X_{1},X_{1})+A_{2}^{2}{\cal K}(X_{2},X_{2})+2 A_{1}A_{2}{\cal K}(X_{1},X_{2})\geq 0 \tag{14}\]
for any \(A_{1}\) and \(A_{2}\). Note that the positivity of this quadratic form in \(A_{1}\) and \(A_{2}\) is a necessary but not sufficient condition for Eq. (12)5. The relation (14) can be valid for any \(A_{1}\) and \(A_{2}\) only if
Footnote 5: A stronger condition can be obtained by extending the analysis to include \(N>2\) points in Eq. (13). This reduces to the positive semidefiniteness of the corresponding quadratic form of an \(N\times N\) matrix. A positive-semidefinite matrix \({\cal K}\) with entries \({\cal K}(X_{i},X_{j})\) can be written in the form \({\cal K}=R^{T}R\), where \(R\) can be singular.
\[{\cal K}(X_{1},X_{1}){\cal K}(X_{2},X_{2})-{\cal K}^{2}(X_{1},X _{2})\geq 0, \tag{15}\] \[{\cal K}(X_{1},X_{1})\geq 0 \tag{16}\]
are satisfied for any two coordinates \(X_{1}\) and \(X_{2}\).
Moreover, by choosing \(A_{1}=-A_{2}=A\) in the trial function, one arrives at the condition
\[{\cal K}(X_{1},X_{1})+{\cal K}(X_{2},X_{2})-2{\cal K}(X_{1},X_{2 })\geq 0\,. \tag{17}\]
Interestingly this inequality is equivalent to
\[K_{\rm dipole}(X_{1},X_{2})\geq 0 \tag{18}\]
as follows from Eq. (12). The inequality (17) is thus a necessary (but not sufficient) condition for the positive semidefiniteness of the JIMWLK Hamiltonian. It is a less restrictive condition than the pair of conditions (15) and (16).
In the next section, we will check the validity of the inequalities (15), (16) and (17) for various prescriptions that can be proposed for rcJIMWLK, as a test for the positive semidefiniteness of the corresponding kernels.
As an example, the inequality (15) can be verified for the LO JIMWLK kernel (7) as follows:
\[\mathcal{K}^{\text{LO}}(X_{1},X_{1})\mathcal{K}^{\text{LO}}(X_{ 2},X_{2})-\mathcal{K}^{\text{LO}}(X_{1},X_{2})^{2} =\left(\frac{\alpha_{s}}{2\pi^{2}}\right)^{2}\left[\frac{1}{X_{1} ^{2}X_{2}^{2}}-\frac{(X_{1}\cdot X_{2})^{2}}{X_{1}^{4}X_{2}^{4}}\right]\] \[=\left(\frac{\alpha_{s}}{2\pi^{2}}\right)^{2}\frac{(X_{1}\times X _{2})^{2}}{X_{1}^{4}X_{2}^{4}}\geq 0 \tag{18}\]
where \(X_{1}\times X_{2}=\epsilon_{ij}(X_{1})_{i}(X_{2})_{j}\).
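Numerically, the same statement can be probed for \(N>2\) points by diagonalizing the matrix \({\cal K}(X_{i},X_{j})\) on a random sample, as in the following sketch. For the LO kernel this matrix is a Gram matrix of the vectors \(X_{i}/X_{i}^{2}\), so all eigenvalues are non-negative up to rounding:

```python
import numpy as np

ALPHA_S = 0.2

def k_lo(X, Y):
    """LO JIMWLK kernel (7)."""
    return ALPHA_S / (2 * np.pi**2) * (X @ Y) / ((X @ X) * (Y @ Y))

# Sample N transverse points and diagonalize the matrix K(X_i, X_j).
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 2))
K = np.array([[k_lo(xi, xj) for xj in pts] for xi in pts])
eig = np.linalg.eigvalsh(K)
print(f"smallest eigenvalue: {eig.min():.3e}")   # >= 0 up to rounding noise
```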
### Langevin formulation for a positive semidefinite JIMWLK evolution
Assuming that a positive semidefinite kernel is available, one can construct the corresponding Langevin formulation in the following way. For any positive semidefinite kernel, it is possible to define a "square root" \(\Psi\), such that
\[\frac{1}{2}\int_{V}\Psi_{i}(X,V)\Psi_{i}(Y,V)=\mathcal{K}(X,Y)\,. \tag{19}\]
Here, the subscript \(i\) indicates that, in general, the field \(\Psi\) can be a vector. The equations for a scalar field \(\Psi\) can be obtained by ignoring the subscript. In general, finding \(\Psi_{i}(X,V)\) is a rather nontrivial process. However, this can be achieved by performing a Schur decomposition, since the Langevin equation is formulated on a discretized space. Once \(\Psi_{i}(X,V)\) is known, the corresponding Langevin equation can be obtained by following the same steps discussed in Sec. 2. Namely, one introduces a bi-local scalar noise field \(\zeta\), such that for a small rapidity interval \(\Delta\), the evolution operator reads
\[\mathcal{U}(\eta,\eta+\Delta) =\int D\zeta e^{\Delta\left(i\int_{x,y,z}\Psi_{i}(X,Y)\,q^{a}(x, z)\,\zeta_{i}^{a}(y,z)-\frac{1}{2}\int_{x,z}\zeta_{i}^{a}(x,z)\zeta_{i}^{a}(x, z)\right)}\,. \tag{20}\]
Indeed, integrating out the noise field \(\zeta_{i}^{a}(x,z)\) yields
\[\mathcal{U}(\eta,\eta+\Delta) =e^{-\Delta\,\int_{x,y,z}\mathcal{K}(X,Y)\,q^{a}(x,z)\,q^{a}(y,z)}\,. \tag{21}\]
Following the same steps introduced in Sec. 2 we obtain the following Langevin equation:
\[\mathcal{U}_{\zeta}V(x) =\exp\Bigg{(}-i\sqrt{\Delta}\int_{y,z}\vec{\Psi}(X,Y)\cdot(V(z) \vec{\zeta}(y,z)V^{\dagger}(z))\Bigg{)}V(x)\] \[\quad\times\exp\left(i\sqrt{\Delta}\int_{y,z}\vec{\Psi}(X,Y)\vec {\zeta}(y,z)\right). \tag{22}\]
For \(\mathcal{K}=\mathcal{K}^{\text{LO}}\), it is straightforward to demonstrate that the bi-local noise can be equivalently replaced with a local noise recovering the results of Section 2, see Appendix A.
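On a discretized transverse space, one way to construct such a square root is the spectral decomposition of the (symmetric, positive semidefinite) kernel matrix, a special case of the Schur decomposition mentioned above. A minimal sketch, with a random PSD matrix standing in for the discretized kernel:

```python
import numpy as np

def kernel_sqrt(K, tol=0.0):
    """Return Psi with (1/2) Psi @ Psi.T = K, for a symmetric positive
    semidefinite matrix K, via the spectral decomposition."""
    w, U = np.linalg.eigh(K)
    w = np.clip(w, tol, None)              # drop tiny negative rounding noise
    return np.sqrt(2.0) * U * np.sqrt(w)   # scale the k-th column by sqrt(2 w_k)

# Demo on a random PSD matrix standing in for the discretized kernel.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 30))
K = A @ A.T / 30.0
Psi = kernel_sqrt(K)
print(np.abs(0.5 * Psi @ Psi.T - K).max())   # ~ 1e-15
```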
## 4 Analysis of the available prescriptions for rcJIMWLK
### Scheme dependence of running coupling prescriptions
In theories like QCD and QED, adopting a running coupling prescription amounts to performing the all order resummation of a set of potentially large logarithms (which are leftovers after the UV renormalization procedure) in a perturbative series, thus improving the accuracy compared to strictly fixed order results. This resummation is very well understood and under control in QED. By contrast, in QCD, there is a significant scheme-dependence associated with such resummation, due notably to our poorer understanding of perturbative series at high orders. There are three main sources of scheme-dependence for running coupling prescriptions in QCD, associated with three successive steps in the construction of a running coupling prescription:
1. proper definition of the considered observable,
2. identification of running coupling correction among higher order corrections,
3. resummation of the identified running coupling corrections.
Most of the literature about running coupling prescriptions in QCD in general focuses on the second and third points, because well defined observables (like cross sections for simple processes) are considered. By contrast, in the case of the JIMWLK evolution, the first point is crucial, for the following reason. As is well known, new color operators arise at each order in the perturbative expansion of the JIMWLK Hamiltonian. In particular, some higher order diagrams are UV divergent even though they involve new color operators. At NLO, this is the case for diagrams in which a gluon splits into a gluon-gluon or a quark-antiquark pair, which then interacts with the target shockwave, and recombine later into a gluon. These diagrams are UV divergent in the limit of small gluon or quark-antiquark loop. However, due to the locality of the QCD counterterms, the UV divergence for such diagrams cannot be removed by a contribution containing the same color operator. Instead, one should use the property of color coherence of QCD: in the limit of small gluon or quark-antiquark loop, the loop is not resolved by the target, and thus has the same interaction with the target as its parent gluon. For that reason, even if these NLO diagrams are UV divergent and involve new color operators not present in the LO JIMWLK Hamiltonian, their UV divergences only involve the color operator present at LO. The strategy is then to carefully choose a basis of color operators to express the NLO JIMWLK Hamiltonian, which contains, in addition to the LO color operator, the new operators with appropriate subtractions by the LO color operator. In such a way, all the UV divergences are transferred to the coefficient of the color operator already present at LO, and UV renormalization can be performed with the standard QCD counterterms (for example in the \(\overline{\rm MS}\) scheme).
The choice of basis of color operators in this procedure is however not unique, due to the possibility to subtract UV finite contributions together with the UV divergences in the definition of the new color operators, which will lead to a different coefficient at NLO for the LO color operator. For example, in Ref. [40], Balitsky performed the UV
subtraction of the new color operator associated with a quark-antiquark loop with a term in which the parent gluon Wilson line is placed at the position of the quark. By contrast, in Ref. [41], Kovchegov and Weigert performed the UV subtraction of the new color operator associated with a quark-antiquark loop with a term in which the parent gluon Wilson line is instead placed at the center of the quark-antiquark pair (weighted by light-cone momentum fractions). These two choices of UV subtraction scheme, or equivalently of basis of color operators, lead to a different NLO correction to the coefficient of the LO color operator, and running coupling prescriptions for JIMWLK based on either choice will typically differ for that reason. See Ref. [45] for a quantitative study of that scheme dependence. A priori, there is no compelling argument for preferring one UV subtraction scheme over the other. Moreover, infinitely many other UV subtraction schemes could be constructed and used, instead of the Balitsky [40] or the Kovchegov-Weigert [41] UV subtraction schemes.
In order to formulate a running coupling improvement of the LO JIMWLK kernel (7), the first scheme dependence that one encounters is thus associated with the proper definition of that kernel beyond LO, which requires to specify an appropriate basis of color operators to write the JIMWLK Hamiltonian beyond LO, corresponding to the choice of an UV subtraction scheme.
Once such a choice of basis is made, and corresponding NLO corrections are calculated, the second step is to decide which part of the NLO corrections to this kernel should be associated with running coupling effects. This choice is usually performed by selecting terms proportional to the coefficient \(\beta_{0}\) of the QCD \(\beta\) function, obtained via their dependence on the number of active quark flavors \(N_{f}\)[46]. Note however that this step can sometimes be ambiguous as well, due to the possible extra dependence of observables on \(N_{f}\) at higher orders, beyond the \(N_{f}\) dependence via \(\beta_{0}\). This strategy can be generalized to higher orders. For example, one could identify running coupling corrections among NNLO corrections by selecting terms with coefficients \(\beta_{0}{}^{2}\), \(\beta_{1}\) and \(\beta_{0}\).
Finally, once running coupling terms are identified among the higher order corrections, the third and last step is to resum them, in order to obtain a running coupling prescription, as stated above. In QED, this step corresponds to the summation of a geometric series. In QCD, this is not exactly the case, and perturbative series are not well understood at high orders. For that reason, various models have been proposed in the literature in order to write a resummed expression from the knowledge of the first few fixed-order corrections, but so far there is no consensus on a best resummation procedure. This leads to a resummation scheme ambiguity for running coupling prescriptions in QCD. This resummation scheme ambiguity can in principle be reduced by including the results from higher and higher fixed order calculations. But an ambiguity would always remain, due to the practical impossibility of obtaining genuine all-order results in perturbative QCD.
All in all, in the construction of a rcJIMWLK kernel from the NLO Hamiltonian, there are three types of scheme dependence, associated with the three steps in the general procedure:
1. UV subtraction scheme ambiguity,
2. ambiguity in the identification of the \(\beta_{0}\) terms,
3. resummation scheme ambiguity.
### NLO fixed-order JIMWLK kernel in the Balitsky scheme
At NLO, the JIMWLK Hamiltonian can be written as [17; 18]
\[H_{\rm NLO}=\int_{x,y,z}{\cal K}_{JSJ}(x,y,z)\;q^{a}(x,z)\,q^{a}(y,z)\,+\ldots \tag{4.1}\]
with the explicitly written term having the same color structure as the LO contribution, and extra NLO corrections involving new color operators not explicitly written. Hence,
\[{\cal K}_{JSJ}(x,y,z)={\cal K}^{\rm LO}(x,y,z)+{\cal K}_{JSJ}^{\rm NLO}(x,y,z)\,, \tag{4.2}\]
with the leading order kernel (7) that explicitly factorizes in \(x\) and \(y\), resulting in the representation (1).
As discussed in the previous subsection, the kernel \({\cal K}_{JSJ}^{\rm NLO}(x,y,z)\) is scheme dependent: its value depends on the chosen basis of color operators to write the JIMWLK Hamiltonian. Following Refs. [17; 18] in which the NLO JIMWLK Hamiltonian was derived, one can adopt the Balitsky scheme [40] to perform the UV subtraction of the new color operators, and thus to specify the chosen basis of color operators. Using that scheme to define \({\cal K}_{JSJ}^{\rm NLO}\), one finds
\[{\cal K}_{JSJ}^{\rm NLO}(x,y,z)= \frac{\alpha_{s}^{2}}{16\pi^{3}}\,\beta_{0}\,\left\{-\frac{(x-y)^ {2}}{X^{2}Y^{2}}\ln(x-y)^{2}\hat{\mu}^{2}\right.\] \[\left.+\frac{X^{2}-Y^{2}}{X^{2}Y^{2}}\ln\frac{X^{2}}{Y^{2}}+\frac {1}{X^{2}}\ln X^{2}\hat{\mu}^{2}+\frac{1}{Y^{2}}\ln Y^{2}\hat{\mu}^{2}\right\}\] \[+\left.\frac{\alpha_{s}^{2}}{8\pi^{3}}\,\frac{X\cdot Y}{X^{2}Y^{2 }}\,\left\{\left(\frac{67}{9}-\frac{\pi^{2}}{3}\right)N_{c}-\frac{10}{9}\,N_{f }\right\}\right. \tag{4.3}\]
after performing UV renormalization, using conventional dimensional regularization and the \(\overline{\rm MS}\) scheme. Hence, \(\alpha_{s}\) is now the renormalized coupling in this scheme, implicitly taken at the \(\overline{\rm MS}\) scale \(\mu_{\overline{\rm MS}}^{2}\). We have introduced the notation
\[\hat{\mu}\equiv\frac{1}{2}\,e^{\gamma_{E}}\,\mu_{\overline{\rm MS}}\,, \tag{4.4}\]
where \(\gamma_{E}\) is the Euler-Mascheroni constant. The first coefficient \(\beta_{0}\) of the QCD beta-function is normalized as
\[\beta_{0}=\frac{11}{3}\,N_{c}-\frac{2}{3}\,N_{f}\,. \tag{4.5}\]
The kernel \({\cal K}_{JSJ}^{\rm NLO}(x,y,z)\) from Eq. (4.3) contains two terms. In the first term, corresponding to the first two lines, the contributions from gluon loop diagrams (proportional to \(N_{c}\)) and the contributions from quark loop diagrams (proportional to \(N_{f}\)) have combined in such a way that an overall factor \(\beta_{0}\) is obtained. This term has a non-trivial dependence on the coordinates, involving in particular single logarithms. That whole term is then typically interpreted as being associated with running coupling.
By contrast, in the second term in Eq. (4.3), corresponding to the third line, the quark and gluon diagram contributions do not fully combine into a \(\beta_{0}\) factor. One could still single out a piece proportional to \(\beta_{0}\) out of it, and interpret it as a contribution associated with running coupling, but there is no unique way to do it. This is precisely the second type of scheme dependence for running coupling prescriptions described in the previous subsection. However, that second term in Eq. (4.3) is proportional to \(\mathcal{K}^{\rm LO}(x,y,z)\), up to a constant factor. Hence, it cannot change the qualitative properties of the JIMWLK kernel, by contrast to the first term in Eq. (4.3). One could also entirely eliminate this second term by switching the UV renormalization scheme from the \(\overline{\rm MS}\) scheme to the gluon bremsstrahlung scheme [47; 48]. Or one could simply leave that second term as a harmless contribution to the NLO JIMWLK Hamiltonian, not to be resummed into a running coupling prescription. For these reasons, we will from now on focus only on the first term in Eq. (4.3), proportional to \(\beta_{0}\), and discard the second term.
Note that at NLO accuracy, the kernel \(\mathcal{K}_{JSJ}(x,y,z)\) does not factorize anymore, preventing us from writing a simple generalization of the expression (2.1) for the JIMWLK Hamiltonian beyond LO.
Before we proceed with the resummation into a running coupling prescription, it is instructive to check whether the fixed order NLO kernel \(\mathcal{K}_{JSJ}(x,y,z)\) in Balitsky's UV subtraction scheme, written in Eqs. (4.2) and (4.3), is positive semidefinite, or at least whether the inequality (3.10) is valid. Keeping terms only up to order \(\alpha_{s}^{3}\) (since higher orders are not under control), one finds
\[\mathcal{K}_{JSJ}(X,X) \mathcal{K}_{JSJ}(Y,Y)-\mathcal{K}_{JSJ}^{2}(X,Y)=\mathcal{K}^{ \rm LO}(X,X)\mathcal{K}^{\rm LO}(Y,Y)-\left[\mathcal{K}^{\rm LO}(X,Y)\right]^ {2}\] \[+\mathcal{K}^{\rm LO}(X,X)\mathcal{K}_{JSJ}^{\rm NLO}(Y,Y)+ \mathcal{K}_{JSJ}^{\rm NLO}(X,X)\mathcal{K}^{\rm LO}(Y,Y)\] \[-\mathcal{K}^{\rm LO}(X,Y)\mathcal{K}_{JSJ}^{\rm NLO}(X,Y)- \mathcal{K}_{JSJ}^{\rm NLO}(X,Y)\mathcal{K}^{\rm LO}(X,Y)+O(\alpha_{s}^{4}) \tag{4.6}\]
First, to simplify our task, we consider the particular case of collinear vectors, \(Y=cX\). This choice completely eliminates the contribution of order \(\alpha_{s}^{2}\), that is
\[\mathcal{K}^{\rm LO}(X,X)\mathcal{K}^{\rm LO}(cX,cX)-\left[\mathcal{K}^{\rm LO }(X,cX)\right]^{2}=0\,. \tag{4.7}\]
The LO kernels are
\[\mathcal{K}^{\rm LO}(X,X)=\frac{\alpha_{s}}{2\pi^{2}}\frac{1}{X^{2}}\,,\qquad \mathcal{K}^{\rm LO}(cX,cX)=\frac{\alpha_{s}}{2\pi^{2}}\frac{1}{c^{2}X^{2}}\,, \qquad\mathcal{K}^{\rm LO}(X,cX)=\frac{\alpha_{s}}{2\pi^{2}}\frac{1}{cX^{2}} \tag{4.8}\]
while the NLO kernels read (keeping only the \(\beta_{0}\) contribution, since the other contribution, proportional to \(\mathcal{K}^{\rm LO}\), will drop at this order due to Eq. (4.7))
\[\mathcal{K}_{JSJ}^{\rm NLO}(X,X) = \frac{\alpha_{s}^{2}}{16\pi^{3}}\frac{2\beta_{0}}{X^{2}}\ln X^{2}\hat{\mu}^{2},\qquad\quad\mathcal{K}_{JSJ}^{\rm NLO}(cX,cX)=\frac{\alpha_{s}^{2}}{16\pi^{3}}\frac{2\beta_{0}}{c^{2}X^{2}}\ln c^{2}X^{2}\hat{\mu}^{2}\] \[\mathcal{K}_{JSJ}^{\rm NLO}(X,cX) = \frac{\alpha_{s}^{2}}{16\pi^{3}}\left\{-\beta_{0}\frac{(1-c)^{2}}{c^{2}X^{2}}\ln(1-c)^{2}X^{2}\hat{\mu}^{2}\right.\] \[+ \left.\beta_{0}\frac{1-c^{2}}{c^{2}X^{2}}\ln\frac{1}{c^{2}}+\frac{\beta_{0}}{X^{2}}\ln X^{2}\hat{\mu}^{2}+\frac{\beta_{0}}{c^{2}X^{2}}\ln c^{2}X^{2}\hat{\mu}^{2}\right\}\,. \tag{4.9}\]
Substituting these into Eq. (4.6) yields
\[\mathcal{K}_{JSJ}(X,X)\mathcal{K}_{JSJ}(cX,cX)-\mathcal{K}_{JSJ}^{2}(X,cX)\\ =\frac{\alpha_{s}^{3}\,\beta_{0}}{16\pi^{5}}\frac{(1\!-\!c)}{c^{3}X^{4}}\Big{[}(1\!-\!c)\ln\!\big{(}(1\!-\!c)^{2}\big{)}+c\ln\!\big{(}c^{2}\big{)}\Big{]}+O(\alpha_{s}^{4})\,. \tag{4.10}\]
The function on the right-hand side of (4.10) vanishes at \(c=1\). It is now a simple exercise to check that this function is negative for values of \(c\) different from \(0\) and \(1\) (and goes to \(-\infty\) at \(c=0\)).
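As a quick numerical cross-check of this statement (our illustration, not part of the original derivation), one can evaluate the \(c\)-dependent factor of the right-hand side of Eq. (4.10), \(h(c)=(1-c)\left[(1-c)\ln((1-c)^{2})+c\ln(c^{2})\right]/c^{3}\), for a few sample values:

```python
import numpy as np

# c-dependent factor of the right-hand side of Eq. (4.10)
def h(c):
    return (1.0 - c) * ((1.0 - c) * np.log((1.0 - c) ** 2)
                        + c * np.log(c ** 2)) / c ** 3

for c in [0.01, 0.1, 0.5, 0.9, 0.99, 1.5, 2.0]:
    print(f"c = {c:5.2f}   h(c) = {h(c):+.6f}")
# h(c) -> 0 for c -> 1, h(c) < 0 otherwise, and h(c) -> -infinity for c -> 0
```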
This proves that the fixed order NLO kernel \(\mathcal{K}_{JSJ}(x,y,z)\) in the Balitsky scheme for UV subtraction violates the inequality (3.10) at least for some values of the coordinates \(X\) and \(Y\). Hence, it violates the condition of positive semidefiniteness (3.1).
For completeness, let us also discuss the conditions (3.11) and (3.13) for the JIMWLK and BK kernels at fixed NLO order with Balitsky's UV subtraction scheme. From the LO kernel (2.7), and the \(\beta_{0}\) part of Eq. (4.3), one finds
\[\mathcal{K}_{JSJ}(X,X)=\frac{\alpha_{s}}{2\pi^{2}}\frac{1}{X^{2}}\left[1+\frac{\beta_{0}\,\alpha_{s}}{4\pi}\,\ln\big{(}X^{2}\hat{\mu}^{2}\big{)}\right]\,. \tag{4.11}\]
On the one hand, in the limit \(\alpha_{s}\to 0\), the expression (4.11) is dominated by the LO term, which is always positive, so that the condition (3.11) is satisfied in that limit. On the other hand, for any finite value of \(\alpha_{s}\) (and the corresponding value of \(\mu_{\overline{\rm MS}}\)), one finds from the expression (4.11) the equivalence
\[\mathcal{K}_{JSJ}(X,X)\geq 0\quad\Leftrightarrow\quad X^{2}\geq\frac{1}{\hat{\mu}^{2}}\,e^{-\frac{4\pi}{\beta_{0}\,\alpha_{s}}}\,. \tag{4.12}\]
Note that in practice, that lower bound for \(X^{2}\) corresponds to an _extremely_ small distance, due to the essential singularity for \(\alpha_{s}\to 0\). Hence, the inequality (3.11) is valid in most of the available range for \(X\), but violated at extremely short distances. This is a further proof that the JIMWLK kernel at fixed NLO order with Balitsky's UV subtraction scheme is not positive semidefinite. The condition of positivity of the dipole kernel (3.13) can be studied in a similar way, and similar conclusions are found. On the one hand, the dipole kernel is positive in the \(\alpha_{s}\to 0\) limit, due to the dominance of the LO contribution. But for any finite value of \(\alpha_{s}\) (or \(\mu_{\overline{\rm MS}}\)), that kernel becomes negative in specific regions in \(X\) and \(Y\), in which large negative logs arise in the NLO contribution and overcome the LO contribution.
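To illustrate how small that critical distance is, the sketch below evaluates the bound (4.12) in units of \(1/\hat{\mu}\) for a few illustrative values of the coupling; the choice \(N_{c}=N_{f}=3\) is an assumption made only for definiteness:

```python
import numpy as np

Nc, Nf = 3, 3
beta0 = 11.0 / 3.0 * Nc - 2.0 / 3.0 * Nf   # Eq. (4.5)

for alpha_s in [0.3, 0.2, 0.1, 0.05]:
    # |X|_min * mu_hat, from X^2 >= exp(-4 pi / (beta0 alpha_s)) / mu_hat^2
    x_min = np.exp(-2.0 * np.pi / (beta0 * alpha_s))
    print(f"alpha_s = {alpha_s:4.2f}   |X|_min * mu_hat = {x_min:.3e}")
# the essential singularity makes the critical distance shrink extremely
# fast as alpha_s decreases
```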
The large logs multiplied by \(\beta_{0}\) leading to the violation of the conditions (3.11) and (3.13) are precisely the ones that running coupling prescriptions are supposed to resum. Hence, the violation of the conditions (3.11) and (3.13) for the JIMWLK and BK kernels at fixed NLO order simply emphasizes the need to use running coupling instead of pure fixed order in practice. Then, one can expect that the conditions (3.11) and (3.13) are restored by resumming running coupling effects. Hence, let us now study the JIMWLK kernel with Balitsky's prescription for running coupling.
### Balitsky's prescription for rcJIMWLK
The dipole kernel in Balitsky's running coupling prescription [40] reads
\[K^{\rm B}_{\rm dipole}(x,y,z)=\frac{\alpha_{s}((X-Y)^{2})}{2\pi^{2}}\left[-2\frac{X\cdot Y}{X^{2}Y^{2}}+\frac{\alpha_{s}(X^{2})}{\alpha_{s}(Y^{2})}\frac{1}{X^{2}}+\frac{\alpha_{s}(Y^{2})}{\alpha_{s}(X^{2})}\frac{1}{Y^{2}}\right] \tag{4.13}\]
where we adopted an abbreviated notation for the coupling \(\alpha_{s}\) at a squared momentum scale \(\mu_{X}^{2}\) constructed from a spatial distance \(|X|\):
\[\alpha_{s}(X^{2})\equiv\alpha_{s}\left(\mu_{X}^{2}=\frac{4}{X^{2 }}e^{-2\gamma_{E}}\right)\,. \tag{4.14}\]
Using the relation between the dipole kernel and the JIMWLK kernel given in Eq. (3.4), we obtain
\[\mathcal{K}^{\rm B}(x,y,z) = \frac{\alpha_{s}((X-Y)^{2})}{2\pi^{2}}\left[\frac{X\cdot Y}{X^{2}Y^{2}}-\frac{\alpha_{s}(X^{2})}{2\alpha_{s}(Y^{2})}\frac{1}{X^{2}}-\frac{\alpha_{s}(Y^{2})}{2\alpha_{s}(X^{2})}\frac{1}{Y^{2}}\right]+f(X)+f(Y)\,. \tag{4.15}\]
As discussed previously, the function \(f\) is undetermined in the inverse relation (3.4). Here however, it can be fully fixed by the requirement that the perturbative expansion of \(\mathcal{K}^{\rm B}(x,y,z)\) has to reproduce Eq. (4.2) with the \(\beta_{0}\) contribution from Eq. (4.3) at NLO accuracy. This yields
\[\mathcal{K}^{B}(x,y,z)=\frac{\alpha_{s}((X-Y)^{2})}{2\pi^{2}}\frac{X\cdot Y}{X^{2}Y^{2}} +\frac{\alpha_{s}(X^{2})}{4\pi^{2}}\frac{1}{X^{2}}\left(1-\frac{\alpha_{s}((X-Y)^{2})}{\alpha_{s}(Y^{2})}\right)\] \[+\frac{\alpha_{s}(Y^{2})}{4\pi^{2}}\frac{1}{Y^{2}}\left(1-\frac{\alpha_{s}((X-Y)^{2})}{\alpha_{s}(X^{2})}\right)\,. \tag{4.16}\]
Indeed, the expression (4.16) can be expanded at NLO accuracy, thanks to the relation
\[\alpha_{s}(X^{2})= \,\alpha_{s}+\frac{\alpha_{s}^{2}\beta_{0}}{4\pi}\ln\left(X^{2} \hat{\mu}^{2}\right)+O(\alpha_{s}^{3})\,, \tag{4.17}\]
in which the coupling is always taken at the scale \(\mu_{\overline{\rm MS}}^{2}\) on the right hand side, and the notations (4.4) and (4.14) were used.
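This expansion can be verified explicitly, e.g. with the short symbolic computation below (our sketch): it treats the logarithms \(\ln(X^{2}\hat{\mu}^{2})\), \(\ln(Y^{2}\hat{\mu}^{2})\) and \(\ln((X-Y)^{2}\hat{\mu}^{2})\) as independent symbols, and checks that the \(O(\alpha_{s}^{2})\) term of Eq. (4.16) coincides with the \(\beta_{0}\) part of Eq. (4.3):

```python
import sympy as sp

a, b0, pi = sp.symbols('a b0 pi', positive=True)   # a = alpha_s at the MSbar scale
x2, y2, xy = sp.symbols('x2 y2 xy')                # X^2, Y^2, X.Y
lX, lY, lD = sp.symbols('lX lY lD')                # ln(X^2 mu^2), ln(Y^2 mu^2), ln((X-Y)^2 mu^2)

def run(l):                                        # Eq. (4.17)
    return a + a**2 * b0 / (4 * pi) * l

aX, aY, aD = run(lX), run(lY), run(lD)

# Balitsky's rcJIMWLK kernel, Eq. (4.16)
KB = (aD / (2 * pi**2) * xy / (x2 * y2)
      + aX / (4 * pi**2 * x2) * (1 - aD / aY)
      + aY / (4 * pi**2 * y2) * (1 - aD / aX))

order2 = sp.series(KB, a, 0, 3).removeO() - a / (2 * pi**2) * xy / (x2 * y2)

# beta_0 part of Eq. (4.3), with (x-y)^2 = x2 + y2 - 2*xy and ln(X^2/Y^2) = lX - lY
d2 = x2 + y2 - 2 * xy
target = a**2 * b0 / (16 * pi**3) * (-d2 / (x2 * y2) * lD
                                     + (x2 - y2) / (x2 * y2) * (lX - lY)
                                     + lX / x2 + lY / y2)

print(sp.simplify(order2 - target))   # prints 0
```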
We now check if the resulting rcJIMWLK kernel is positive semidefinite. Due to asymptotic freedom, \(\alpha_{s}((X\!-\!Y)^{2})\to 0\) for \((X\!-\!Y)^{2}\to 0\). Using that property, one finds from Eq. (4.16) that
\[\mathcal{K}^{B}(x,x,z)\equiv\mathcal{K}^{B}(X,X)=\frac{\alpha_{s}( X^{2})}{2\pi^{2}}\frac{1}{X^{2}}\,. \tag{4.18}\]
The condition (3.11) is thus valid if the coupling \(\alpha_{s}(X^{2})\) is always positive. On the one hand, the coupling should indeed be positive for the consistency and stability of the theory. On the other hand, the one-loop expression for the coupling6,
Footnote 6: At one loop, the scale \(\Lambda\) corresponding to the Landau pole is obtained as \(\Lambda^{2}=\mu_{\overline{\rm MS}}^{2}\,e^{-\frac{4\pi}{\beta_{0}\alpha_{s}}}\). It is independent of \(\mu_{\overline{\rm MS}}\) at that accuracy, since the implicit dependence on \(\mu_{\overline{\rm MS}}\) through \(\alpha_{s}\) compensates the explicit dependence.
\[\alpha_{s}(X^{2})=\frac{4\pi}{\beta_{0}\ln\left(\frac{\mu_{X}^{ 2}}{\Lambda^{2}}\right)}=\frac{4\pi}{\beta_{0}\ln\left(\frac{4\,e^{-2\gamma_{E }}}{X^{2}\Lambda^{2}}\right)}\,, \tag{4.19}\]
is positive in the perturbative UV domain but diverges at \(\mu_{X}^{2}=\Lambda^{2}\) and then goes negative further in the IR domain. In phenomenological studies, the one-loop expression (4.19) is often modified in the IR in order to keep \(\alpha_{s}(X^{2})\) bounded and positive, for example by imposing a finite IR limit for \(\alpha_{s}\), also known as a freezing coupling prescription. In such a case, \(\alpha_{s}(X^{2})\) is always positive, so that \(\mathcal{K}^{B}(X,X)>0\) for all \(X\), which corresponds to the condition (3.11). Moreover, we note that the dipole kernel (4.13) can be rewritten in the form
\[K^{B}_{\rm dipole}(x,y,z)=\frac{\alpha_{s}((X-Y)^{2})}{2\pi^{2} \alpha_{s}(X^{2})\alpha_{s}(Y^{2})}\frac{1}{X^{2}Y^{2}}\left[\alpha_{s}(X^{2} )Y-\alpha_{s}(Y^{2})X\right]^{2}\,.\]
Hence, provided that the coupling is always positive, thanks to the use of such a prescription in the IR, one has \(K^{B}_{\rm dipole}(x,y,z)\geq 0\) for all values of \(x,y,z\), which corresponds to the condition (3.13). Therefore, both conditions (3.11) and (3.13), which were violated in the case of the fixed order NLO kernel (with Balitsky's UV subtraction scheme), are restored by the running coupling resummation, provided the coupling is prevented from going negative in the IR.
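The equivalence between this perfect-square form and Eq. (4.13) is straightforward to check numerically, as in the sketch below; the specific positive coupling function used there is an arbitrary stand-in, since only its positivity matters for the argument:

```python
import numpy as np

def alpha(r2):
    # arbitrary positive, IR-finite stand-in for alpha_s(r^2)
    return 0.5 / (1.0 + np.log1p(1.0 / r2))

rng = np.random.default_rng(1)
for _ in range(5):
    X, Y = rng.normal(size=2), rng.normal(size=2)
    X2, Y2, XY = X @ X, Y @ Y, X @ Y
    D2 = (X - Y) @ (X - Y)
    # Eq. (4.13)
    lhs = alpha(D2) / (2 * np.pi**2) * (-2 * XY / (X2 * Y2)
                                        + alpha(X2) / alpha(Y2) / X2
                                        + alpha(Y2) / alpha(X2) / Y2)
    # perfect-square form: [alpha_s(X^2) Y - alpha_s(Y^2) X] is a 2-vector
    v = alpha(X2) * Y - alpha(Y2) * X
    rhs = alpha(D2) * (v @ v) / (2 * np.pi**2 * alpha(X2) * alpha(Y2) * X2 * Y2)
    print(f"Eq. (4.13): {lhs:+.6e}   square form: {rhs:+.6e}   >= 0: {bool(rhs >= 0.0)}")
```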
Let us now discuss the fate of the nonlinear condition (3.10) in the case of Balitsky's prescription (4.16) for the rcJIMWLK kernel. Let us focus on the case of \(Y=cX\) for simplicity, and more precisely on the regime of \(c\to 1\), meaning that \(|x{-}y|\ll|x{-}z|\sim|z{-}y|\). We assume moreover that \(X\) is small enough, in the perturbative regime, so that the one-loop expression (4.19) for the running coupling is valid. Then, one can perform the Taylor expansion of the coupling \(\alpha_{s}(Y^{2})\) as
\[\alpha_{s}(Y^{2})=\alpha_{s}(c^{2}X^{2})=\alpha_{s}(X^{2})\Bigg{\{}1\,-\frac{\beta_{0}\alpha_{s}(X^{2})}{2\pi}(1{-}c)+O((1{-}c)^{2})\Bigg{\}} \tag{4.20}\]
for \(c\to 1\). By contrast, \(\alpha_{s}((X-Y)^{2})\) goes to \(0\) logarithmically in that limit, as
\[\alpha_{s}((X-Y)^{2})=\alpha_{s}((1{-}c)^{2}X^{2})\sim\frac{4\pi}{\beta_{0}}\,\frac{1}{\ln\left(\frac{1}{(1{-}c)^{2}}\right)}\to 0\,. \tag{4.21}\]
Using Eq. (4.20), one finds the Taylor expansion
\[\mathcal{K}^{B}(Y,Y)= \,\mathcal{K}^{B}(cX,cX)=\frac{\alpha_{s}(X^{2})}{2\pi^{2}}\frac{1}{X^{2}}\Bigg{\{}1+2(1{-}c)\left[1{-}\frac{\beta_{0}\alpha_{s}(X^{2})}{4\pi}\right]+O((1{-}c)^{2})\Bigg{\}}\,. \tag{4.22}\]
By contrast, the dipole kernel (4.13) becomes
\[K^{\rm B}_{\rm dipole}(X,cX)=\frac{\alpha_{s}((1{-}c)^{2}X^{2})}{2\pi^{2}X^{2}}\Bigg{\{}(1{-}c)^{2}\left[1{-}\frac{\beta_{0}\alpha_{s}(X^{2})}{2\pi}\right]^{2}+O((1{-}c)^{3})\Bigg{\}}\,, \tag{4.23}\]
keeping the logarithmic dependence in \((1\!-\!c)\). Hence, one finds
\[\mathcal{K}^{B}(X,X)\mathcal{K}^{B}(Y,Y)-\left(\mathcal{K}^{B}(X,Y)\right)^{2}\] \[= \,\mathcal{K}^{B}(X,X)\mathcal{K}^{B}(cX,cX)-\frac{1}{4}\Big{[}\mathcal{K}^{B}(X,X)+\mathcal{K}^{B}(cX,cX)-K^{\rm B}_{\rm dipole}(X,cX)\Big{]}^{2}\] \[= \,-\frac{1}{4}\Big{[}\mathcal{K}^{B}(cX,cX)\!-\!\mathcal{K}^{B}(X,X)\Big{]}^{2}+\mathcal{K}^{B}(X,X)K^{\rm B}_{\rm dipole}(X,cX)+O((1\!-\!c)^{3})\] \[= \,\frac{(1\!-\!c)^{2}\alpha_{s}(X^{2})}{4\pi^{4}X^{4}}\Bigg{\{}-\alpha_{s}(X^{2})\left[1\!-\!\frac{\beta_{0}\alpha_{s}(X^{2})}{4\pi}\right]^{2}+\alpha_{s}((1\!-\!c)^{2}X^{2})\left[1\!-\!\frac{\beta_{0}\alpha_{s}(X^{2})}{2\pi}\right]^{2}+O(1\!-\!c)\Bigg{\}}\] \[\sim \,-\frac{(1\!-\!c)^{2}\alpha_{s}(X^{2})^{2}}{4\pi^{4}X^{4}}\left[1\!-\!\frac{\beta_{0}\alpha_{s}(X^{2})}{4\pi}\right]^{2}\leq 0\,. \tag{4.24}\]
Therefore, Balitsky's prescription (4.16) for the rcJIMWLK kernel violates the nonlinear condition (3.10), and thus it is not positive semidefinite.
In addition to the regime \(c\to 1\) for \(Y=cX\), one can consider for completeness also the regime \(c\to 0\), corresponding to \(|z\!-\!y|\ll|x\!-\!z|\sim|x\!-\!y|\). In that regime, still assuming that \(X\) is small enough for the one-loop expression (4.19) for the running coupling to apply, one finds the Taylor expansion
\[\alpha_{s}((X-Y)^{2})=\alpha_{s}((1\!-\!c)^{2}X^{2})=\alpha_{s}(X^{2})\Bigg{\{}1\,-\frac{\beta_{0}\alpha_{s}(X^{2})}{2\pi}\,c+O(c^{2})\Bigg{\}} \tag{4.25}\]
for \(c\to 0\). By contrast, \(\alpha_{s}(Y^{2})\) goes to \(0\) logarithmically in that limit, as
\[\alpha_{s}(Y^{2})=\alpha_{s}(c^{2}X^{2})\sim\frac{4\pi}{\beta_{0}}\,\frac{1}{\ln\big{(}\frac{1}{c^{2}}\big{)}}\to 0\,. \tag{4.26}\]
Then, from Eq. (4.16), one obtains
\[\mathcal{K}^{B}(X,Y)= \,\mathcal{K}^{B}(X,cX)=\frac{\alpha_{s}(X^{2})}{2\pi^{2}X^{2}}\,\frac{1}{c}\,\left[1+\frac{\beta_{0}\alpha_{s}(c^{2}X^{2})}{4\pi}\right]+O(1) \tag{4.27}\]
by expanding in powers of \(c\) for \(c\to 0\), while keeping the full logarithmic dependence on \(c\). Consequently,
\[\mathcal{K}^{B}(X,X)\mathcal{K}^{B}(cX,cX)-\left[\mathcal{K}^{B}(X,cX)\right]^{2}\] \[= \,\frac{\alpha_{s}(X^{2})}{4\pi^{4}X^{4}}\,\frac{1}{c^{2}}\Bigg{\{}\alpha_{s}(c^{2}X^{2})-\alpha_{s}(X^{2})\left[1\!+\!\frac{\beta_{0}\alpha_{s}(c^{2}X^{2})}{4\pi}\right]^{2}+O(c)\Bigg{\}}\] \[\sim \,-\frac{\alpha_{s}(X^{2})^{2}}{4\pi^{4}X^{4}}\,\frac{1}{c^{2}}<0\,. \tag{4.28}\]
Hence, the nonlinear condition (3.10) is violated in both regimes \(c\to 1\) and \(c\to 0\), for \(Y=cX\).
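Both regimes can be illustrated numerically. The sketch below evaluates the \(2\times 2\) minor for Balitsky's prescription (4.16) with the one-loop coupling (4.19), at \(Y=cX\); the choices \(N_{c}=N_{f}=3\), distances in units of \(1/\Lambda\), and \(|X|=10^{-2}\) are illustrative assumptions keeping all scales in the perturbative domain:

```python
import numpy as np

beta0, gE = 9.0, 0.5772156649   # beta0 for Nc = Nf = 3; Euler-Mascheroni constant

def alpha(r2):
    # one-loop coupling, Eq. (4.19), with distances in units of 1/Lambda
    return 4.0 * np.pi / (beta0 * np.log(4.0 * np.exp(-2.0 * gE) / r2))

def K_diag(X2):
    return alpha(X2) / (2.0 * np.pi**2 * X2)   # Eq. (4.18)

def K_B(X2, Y2, XY):
    # Balitsky's rcJIMWLK kernel, Eq. (4.16)
    D2 = X2 + Y2 - 2.0 * XY
    return (alpha(D2) / (2.0 * np.pi**2) * XY / (X2 * Y2)
            + alpha(X2) / (4.0 * np.pi**2 * X2) * (1.0 - alpha(D2) / alpha(Y2))
            + alpha(Y2) / (4.0 * np.pi**2 * Y2) * (1.0 - alpha(D2) / alpha(X2)))

X2 = 1e-4   # |X| = 0.01 in units of 1/Lambda
for c in [0.99, 0.9, 0.1, 0.01]:
    minor = K_diag(X2) * K_diag(c**2 * X2) - K_B(X2, c**2 * X2, c * X2) ** 2
    print(f"c = {c:5.2f}   minor = {minor:+.4e}")   # negative in both regimes
```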
### NLO fixed-order JIMWLK kernel in the Kovchegov-Weigert scheme
The Kovchegov-Weigert (KW) UV subtraction scheme [41] leads to the following NLO contribution to the \(\mathcal{K}_{JSJ}(X,Y)\) kernel
\[\mathcal{K}_{JSJ}^{\rm NLO,KW}(X,Y) =\frac{\alpha_{s}^{2}\beta_{0}}{8\pi^{3}}\,\frac{1}{(X^{2}\!-\!Y^{2})}\left\{\frac{X\!\cdot\!Y}{X^{2}Y^{2}}\,\left[X^{2}\ln\left(X^{2}\hat{\mu}^{2}\right)-Y^{2}\ln\left(Y^{2}\hat{\mu}^{2}\right)\right]-\ln\left(\frac{X^{2}}{Y^{2}}\right)\right\}\] \[\quad+\frac{\alpha_{s}^{2}}{8\pi^{3}}\,\frac{X\!\cdot\!Y}{X^{2}Y^{2}}\,\left\{\left(\frac{67}{9}\!-\!\frac{\pi^{2}}{3}\right)N_{c}-\frac{10}{9}\,N_{f}\right\}\,. \tag{4.29}\]
As discussed in section 4.1, UV divergences appearing at NLO in the JIMWLK Hamiltonian can and should be transferred to the kernel \(\mathcal{K}_{JSJ}(X,Y)\) multiplying the color operator already present at LO. But this procedure is scheme dependent, so that \(\mathcal{K}_{JSJ}(X,Y)\) becomes scheme dependent at \(\alpha_{s}^{2}\) order, which explains the difference between the expression (4.29) obtained within the Kovchegov-Weigert scheme for UV subtraction and the expression (4.3) obtained within the Balitsky scheme. Note that the overall contribution proportional to \(\beta_{0}\ln\hat{\mu}^{2}\) is the same in both cases. Indeed, it represents the leftover from the UV divergence after the standard UV renormalization (in the \(\overline{\rm MS}\) scheme). The scheme dependence is associated with finite contributions which might be subtracted and transferred to \(\mathcal{K}_{JSJ}(X,Y)\) or not, together with the UV divergences. For that reason, the contributions in \(\beta_{0}\ln X^{2}\) or \(\beta_{0}\ln Y^{2}\) differ between Eqs. (4.3) and (4.29). By contrast, the contribution proportional to the LO kernel, on the second line of Eqs. (4.3) and (4.29), is the same in both of these UV subtraction schemes.
Including the LO contribution (2.7) and the NLO correction (4.29), one finds for the \(\mathcal{K}_{JSJ}\) kernel in the Kovchegov-Weigert scheme, in the collinear configuration \(Y=cX\),
\[\mathcal{K}_{JSJ}^{\rm KW}(X,X)\mathcal{K}_{JSJ}^{\rm KW}(cX,cX)-\left[\mathcal{K}_{JSJ}^{\rm KW}(X,cX)\right]^{2}=\frac{\alpha_{s}^{3}\beta_{0}}{16\pi^{5}\,X^{4}}\frac{(1\!-\!c)\ln\!\left(c^{2}\right)}{c^{2}(1+c)}+O(\alpha_{s}^{4})\leq 0 \tag{4.30}\]
due to Eq. (4.7). Indeed, the function of \(c\) on the right hand side of Eq. (4.30) vanishes at \(c=1\) and is strictly negative otherwise (and goes to \(-\infty\) at \(c\to 0\)). Hence, the fixed order NLO JIMWLK kernel in the KW UV subtraction scheme violates the nonlinear condition (3.10), and thus it is not positive semidefinite. Moreover, it also violates the condition (3.11), in the same way as in Balitsky's UV subtraction scheme.
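Again, the negativity of the \(c\)-dependent factor in Eq. (4.30) is elementary to confirm numerically, e.g.:

```python
import numpy as np

def g(c):
    # c-dependent factor of Eq. (4.30)
    return (1.0 - c) * np.log(c ** 2) / (c ** 2 * (1.0 + c))

for c in [0.01, 0.1, 0.5, 0.9, 1.5, 3.0]:
    print(f"c = {c:5.2f}   g(c) = {g(c):+.6f}")
# g(1) = 0, g(c) < 0 otherwise, and g(c) -> -infinity for c -> 0
```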
### Kovchegov-Weigert prescription for rcJIMWLK
The Kovchegov-Weigert prescription for rcJIMWLK, based on a resummation of the NLO \(\beta_{0}\) terms from Eq. (4.29), reads [41]
\[\mathcal{K}^{KW}(X,Y)=\frac{1}{2\pi^{2}}\frac{\alpha_{s}(X^{2})\alpha_{s}(Y^{2})}{\alpha_{s}(R^{2})}\frac{X\cdot Y}{X^{2}Y^{2}} \tag{4.31}\]
where
\[R^{2}=\left|X\right|\left|Y\right|\,\left(\frac{Y^{2}}{X^{2}}\right)^{\Theta/2}\,,\hskip 28.452756pt\Theta=\frac{X^{2}+Y^{2}}{X^{2}-Y^{2}}-2\frac{X^{2}Y^{2}}{X\cdot Y}\frac{1}{X^{2}-Y^{2}}\,. \tag{4.32}\]
Eq. (4.31) implies that
\[\mathcal{K}^{KW}(X,X)=\frac{\alpha_{s}(X^{2})}{2\pi^{2}\,X^{2}}\,, \tag{4.33}\]
which is the same expression as in the case of Balitsky's prescription for rcJIMWLK, see Eq. (4.18). Hence, the condition (3.11) is satisfied provided the coupling \(\alpha_{s}(X^{2})\) is prevented from going negative in the IR, with an appropriate improvement of the one-loop expression (4.19).
Let us now check the validity of the nonlinear condition (3.10) for positive semidefiniteness, in the case of Eq. (4.31), focusing on the configurations \(Y=cX\). In that case, the scale \(R^{2}\) reduces to \(c^{2/(1+c)}X^{2}\). Thus we have
\[\mathcal{K}^{KW}(X,X)\mathcal{K}^{KW}(Y,Y)-\left[\mathcal{K}^{KW} (X,Y)\right]^{2}\] \[= \,\frac{\alpha_{s}^{2}(X^{2})\alpha_{s}^{2}(c^{2}X^{2})}{4\pi^{4 }c^{2}X^{4}}\left[\frac{1}{\alpha_{s}(X^{2})\alpha_{s}(c^{2}X^{2})}-\frac{1}{ \alpha_{s}^{2}(c^{2/(1+c)}X^{2})}\right]\,. \tag{4.34}\]
Assuming that \(X^{2}\), \(c^{2}X^{2}\) and \(c^{2/(1+c)}X^{2}\) correspond to small distances, in the perturbative domain, one can use the one-loop expression (4.19) of the running coupling in order to extract the \(c\) dependence, and find
\[\mathcal{K}^{KW}(X,X)\mathcal{K}^{KW}(Y,Y)-\left[\mathcal{K}^{KW} (X,Y)\right]^{2}\] \[= \,-\frac{\alpha_{s}^{2}(X^{2})}{4\pi^{4}c^{2}X^{4}}\frac{1}{ \left[\frac{4\pi}{\beta_{0}\alpha_{s}(X^{2})}+\ln\left(\frac{1}{c^{2}}\right) \right]^{2}}\left\{\frac{4\pi}{\beta_{0}\alpha_{s}(X^{2})}\,\frac{(1\!-\!c)}{( 1+c)}\ln\left(\frac{1}{c^{2}}\right)+\left[\frac{1}{(1+c)}\ln\left(\frac{1}{c ^{2}}\right)\right]^{2}\right\}\,. \tag{4.35}\]
Noticing that
\[\frac{(1\!-\!c)}{(1+c)}\ln\left(\frac{1}{c^{2}}\right)\geq 0\,, \tag{4.36}\]
with the equality only at \(c=1\), one finds that the expression (4.35) vanishes at \(c=1\), and is strictly negative for other real values of \(c\) (and goes to \(-\infty\) for \(c\to 0\)). Therefore, the condition (3.10) is violated, meaning that the Kovchegov-Weigert prescription (4.31) for rcJIMWLK is not positive semidefinite.
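As in the Balitsky case, this violation is easy to exhibit numerically; the sketch below evaluates Eq. (4.34) with the one-loop coupling (4.19), under the same illustrative assumptions as before (\(N_{c}=N_{f}=3\), distances in units of \(1/\Lambda\)):

```python
import numpy as np

beta0, gE = 9.0, 0.5772156649   # Nc = Nf = 3 (illustrative)

def alpha(r2):
    return 4.0 * np.pi / (beta0 * np.log(4.0 * np.exp(-2.0 * gE) / r2))

X2 = 1e-4   # |X| = 0.01 in units of 1/Lambda
for c in [0.9, 0.5, 0.1]:
    R2 = c ** (2.0 / (1.0 + c)) * X2   # collinear limit of Eq. (4.32)
    # the 2x2 minor of Eq. (4.34)
    minor = (alpha(X2)**2 * alpha(c**2 * X2)**2 / (4.0 * np.pi**4 * c**2 * X2**2)
             * (1.0 / (alpha(X2) * alpha(c**2 * X2)) - 1.0 / alpha(R2)**2))
    print(f"c = {c:4.2f}   minor = {minor:+.4e}")   # strictly negative for c != 1
```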
### Other resummation schemes for rcJIMWLK
We have shown so far the violation of positive semidefiniteness for two known prescriptions for rcJIMWLK. We also tried several other resummation schemes, starting from the NLO JIMWLK kernel Eq. (4.3) with Balitsky's UV subtraction scheme. We found all of them to violate the condition (3.10), and thus to violate positive semidefiniteness. Here we only list the most compelling resummations we considered:
* BLM-type [46] prescription for rcJIMWLK: it amounts to determining the value of the renormalization scale \(\mu_{\overline{\rm MS}}=\mu_{\rm BLM}\) for which the \(\beta_{0}\) part of the NLO contribution (4.3) vanishes. Then, the obtained rcJIMWLK is simply the LO kernel, but with
the coupling at that scale, \(\alpha_{s}(\mu_{\rm BLM}^{2})\) or, using the position space notation (4.14), \(\alpha_{s}(R_{\rm BLM}^{2})\), where \(\mu_{\rm BLM}^{2}=4e^{-2\gamma_{E}}/R_{\rm BLM}^{2}\). The scale \(R_{\rm BLM}^{2}\) is thus determined from Eq. (4.3) as \[0=\frac{\alpha_{s}^{2}\beta_{0}}{16\pi^{3}}\left\{2\frac{X\cdot Y}{X^{2}Y^{2}}\ln\frac{(X-Y)^{2}}{R_{\rm BLM}^{2}}+\frac{1}{X^{2}}\ln\frac{Y^{2}}{(X-Y)^{2}}+\frac{1}{Y^{2}}\ln\frac{X^{2}}{(X-Y)^{2}}\right\}\,,\] (4.37) so that \[R_{\rm BLM}^{2}=(X-Y)^{2}\left(\frac{(X-Y)^{2}}{Y^{2}}\right)^{-\frac{Y^{2}}{2X\cdot Y}}\left(\frac{(X-Y)^{2}}{X^{2}}\right)^{-\frac{X^{2}}{2X\cdot Y}}\,.\] (4.38) As a remark, the same exercise can be performed at the level of the BK kernel instead of the JIMWLK kernel. The result is provided, for example, in Ref. [49]. However, using the BLM procedure at the level of BK or at the level of JIMWLK leads to different running coupling prescriptions. Neither of them leads to a positive semidefinite rcJIMWLK kernel.
* Within Balitsky's scheme for UV subtraction, Eq. (4.3), one can write a triumvirate expression for rcJIMWLK, following Kovchegov and Weigert. In this way, one finds \[\mathcal{K}^{tr}=\frac{1}{2\pi^{2}}\frac{\alpha_{s}(X^{2})\alpha_{s}(Y^{2})}{\alpha_{s}(R_{tr}^{2})}\frac{X\cdot Y}{X^{2}Y^{2}}\] (4.39) where \[R_{tr}^{2}=\frac{X^{2}Y^{2}}{(X-Y)^{2}}\left(\frac{(X-Y)^{2}}{Y^{2}}\right)^{\frac{Y^{2}}{2X\cdot Y}}\left(\frac{(X-Y)^{2}}{X^{2}}\right)^{\frac{X^{2}}{2X\cdot Y}}\,.\] (4.40)
Moreover, some of the running coupling prescriptions used in the literature are simply guesses, not related to NLO calculations. As an example, the parent dipole prescription for the BK equation has often been used. In appendix B, the corresponding prescription for rcJIMWLK is written down, and found to violate positive semidefiniteness.
## 5 Discussion
Phenomenological studies at high energy require computations at NLO accuracy for precision, and running coupling effects are known to be very important beyond strict LO accuracy. In this paper, we reviewed some of the running coupling prescriptions for BK-JIMWLK evolution. The main tool used in our analysis is the requirement of positive semidefiniteness of the Hamiltonian. This requirement is tightly related to the possibility of constructing a Langevin formulation of the evolution, i.e. such a construction is only possible if the Hamiltonian is positive semidefinite.
There are several sources of scheme dependence in the construction of a running coupling prescription for JIMWLK. The most crucial one turns out to be the freedom in the
choice of basis of color operators used to write the NLO JIMWLK Hamiltonian. The operators in that basis should contain suitable subtraction terms, so that all UV divergences appearing at NLO are transferred to the coefficient of the operator already appearing at LO. There is a freedom in that procedure, associated with the possibility of transferring finite contributions together with the UV divergences. For that reason, the NLO correction \(K^{\text{NLO}}_{JSJ}(x,y,z)\) to the coefficient of the LO operator is scheme dependent, which leads to an ambiguity in the large logs found at NLO to be resummed into the running coupling. A further important source of scheme dependence is the freedom in writing an all-order resummed expression for rcJIMWLK based on the knowledge of NLO corrections only.
We have considered the JIMWLK kernels including the fixed order NLO correction \(K^{\text{NLO}}_{JSJ}(x,y,z)\) found either in Balitsky's scheme [40] for UV subtraction (see Eq. (4.3)) or in the Kovchegov-Weigert scheme [41] (see Eq. (4.29)). In both of these cases, there is a rather trivial violation of positive semidefiniteness, due to the violation of the condition (3.11). This can be interpreted as the inconsistency, beyond LO, of fixed order JIMWLK evolution with fixed coupling in QCD. Moreover, the nonlinear condition (3.10) is violated in both of these UV subtraction schemes, providing a further obstruction to positive semidefiniteness. Note however that we have not included in our study the other NLO corrections to the JIMWLK Hamiltonian, associated with new color operators. Indeed, it is not clear how to check the positive semidefiniteness property of the entire NLO JIMWLK Hamiltonian.
We have also considered various prescriptions for rcJIMWLK, corresponding to the resummation of the \(\beta_{0}\) contribution in \(K^{\text{NLO}}_{JSJ}(x,y,z)\) found in either the Balitsky or Kovchegov-Weigert scheme, including the running coupling prescriptions originally proposed in Refs. [40] and [41]. In all cases, we found the nonlinear condition (3.10) to be violated, and thus the corresponding rcJIMWLK kernel not to be positive semidefinite. There might exist a positive semidefinite rcJIMWLK kernel corresponding to a resummation based on Eq. (4.3) or (4.29), that we have not thought of. However, we consider this possibility somewhat unlikely, because in all rcJIMWLK prescriptions we have considered, the violation of the nonlinear condition (3.10) seems to follow from its violation at the unresummed level, in either of these two UV subtraction schemes.
In a recent work [32], a new UV subtraction scheme was proposed for the construction of the color operator basis to write the NLO JIMWLK Hamiltonian, as an alternative to the UV subtraction schemes of Balitsky and of Kovchegov-Weigert. In that new scheme, only contributions with small parton pairs unresolved by the target are subtracted from the new color operators appearing at NLO. In that sense, the new scheme can be considered more physical than the previous ones, and it should reduce the risk of oversubtraction. Interestingly, in the basis of color operators constructed in that new scheme, the obtained \(\beta_{0}\) contribution in \(K^{\text{NLO}}_{JSJ}(x,y,z)\) can be naturally resummed into the daughter dipole prescription, corresponding to the replacement written in Eq. (1.2) in the LO JIMWLK kernel. That daughter dipole prescription for rcJIMWLK is positive semidefinite. In particular, the nonlinear condition (3.10) is obeyed thanks to a trivial generalization of Eq. (3.14). The daughter dipole prescription was successfully implemented to perform numerical simulation of rcJIMWLK (in its Langevin form) [15].
At this stage, the daughter dipole prescription (1.2) for rcJIMWLK is thus the only
known running coupling prescription which is both positive semidefinite (with a Langevin formulation) and obtained from NLO calculations, and in that sense it should be the preferred prescription in practical applications. This also provides further motivation for the scheme constructed in Ref. [32]. Moreover, this study also seems to suggest that the apparent impossibility of finding a positive semidefinite rcJIMWLK based on the UV subtraction schemes of Balitsky or Kovchegov-Weigert might be the signal of a problem in these two schemes, for example of oversubtraction.
In this study, many examples have been found in which a prescription for rcBK is positive semidefinite (in the sense of Eq. (3.13)), whereas the corresponding prescription for rcJIMWLK is not. The consequences of the violation of the positive semidefiniteness condition can be considered in the following way. From the rcJIMWLK perspective, this seems to be a major obstacle, since positive semidefiniteness is mandatory not only for any reasonable evolution Hamiltonian but also for a stochastic formulation, i.e. numerical implementation. On the other hand, rcBK (without further NLO corrections) has never displayed any instability or nonphysical behavior. It is broadly believed to be a reliable phenomenological tool. This might mean that the problems related to the violation of positive semidefiniteness are hidden in the \(N_{c}\) suppressed terms, which are neglected in the BK evolution. However, this would require analyzing the Balitsky hierarchy, and it is not clear how one can do it consistently other than performing the analysis of the JIMWLK Hamiltonian as we did in the current paper. An alternative thought is that BK "smears" the problem in some average sense, such that the overall result of the evolution appears reasonable. Further studies of these questions might sharpen our understanding of the validity and accuracy of rcBK.
We thank I. Balitsky, A. Dumitru, A. Kovner, and Yu. Kovchegov for stimulating discussions. V.S. thanks V. Kazakov for illuminating discussions.
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics through the Contract No. DE-SC0020081 (V.S.) and the Saturated Glue (SURGE) Topical Collaboration (V.S.). M.L. is supported by the Binational Science Foundation grant #2021789 and by the ISF grant #910/23. GB is supported in part by the National Science Centre (Poland) under the research grant no 2020/38/E/ST2/00122 (SONATA BIS 10). This work has been performed in the framework of the MSCA RISE 823947 "Heavy ion collisions: collectivity and precision in saturation physics" (HIEIC).
We thank ExtreMe Matter Institute (EMMI), Physics Department of Muenster University (A. Andronic), Ben-Gurion University of the Negev, National Centre for Nuclear Research, and ECT\({}^{*}\) for their support and hospitality during various stages of completing this project.
## Appendix A LO JIMWLK from bi-local noise
Let us return to Eq. (3.16), where, in order to reproduce the LO JIMWLK kernel, we consider the ansatz \(\Psi_{i}(X,V)=\frac{X_{i}}{X^{2}}f(X,V)\)7
Footnote 7: This procedure is equivalent to finding the eigenvectors and eigenvalues of the kernel.
\[\int_{v}f(X,V)f(Y,V)=\mathcal{K}(X,Y)\frac{X^{2}Y^{2}}{X\cdot Y}\,.\] (A.1)
The LO kernel divided by the WW kernel is a constant: \(\mathcal{K}(X,Y)\frac{X^{2}Y^{2}}{X\cdot Y}=\frac{\alpha_{s}}{\pi^{2}}\). A particular solution of this equation is \(f(X,V)=f_{1}(V)\), where \(f_{1}(V)\) is any normalizable function, with
\[\int_{v}f_{1}^{2}(V)=\frac{\alpha_{s}}{\pi^{2}}\,.\] (A.2)
Substituting this into Eq. (3.16) we obtain
\[\int D\zeta e^{\Delta\left(i\int_{x,y,z}\frac{X_{i}}{X^{2}}f_{1}(y-z)q^{a}(x,z)\zeta_{i}^{a}(y,z)-\frac{1}{2}\int_{x,z}\zeta_{i}^{a}(x,z)\zeta_{i}^{a}(x,z)\right)}\] \[=\int D\zeta D\theta D\xi e^{i\Delta\int_{z}\theta_{i}^{a}(z)\left(\xi_{i}^{a}(z)-\int_{x}f_{1}(x-z)\zeta_{i}^{a}(x,z)\right)}e^{\Delta\left(i\int_{x,z}\frac{X_{i}}{X^{2}}q^{a}(x,z)\xi_{i}^{a}(z)-\frac{1}{2}\int_{x,z}\zeta_{i}^{a}(x,z)\zeta_{i}^{a}(x,z)\right)}\] \[=\int D\theta D\xi e^{i\Delta\int_{z}\theta_{i}^{a}(z)\xi_{i}^{a}(z)-\frac{\Delta}{2}\int_{x,z}\theta^{2}(z)f_{1}^{2}(x-z)}e^{i\Delta\int_{x,z}\frac{X_{i}}{X^{2}}q^{a}(x,z)\xi_{i}^{a}(z)}\] \[=\int D\theta D\xi e^{i\Delta\int_{z}\theta_{i}^{a}(z)\xi_{i}^{a}(z)-\frac{\Delta\,\alpha_{s}}{2\pi^{2}}\int_{z}\theta^{2}(z)}e^{i\Delta\int_{x,z}\frac{X_{i}}{X^{2}}q^{a}(x,z)\xi_{i}^{a}(z)}\] \[=\int D\xi e^{\Delta\left(i\int_{x,z}\frac{X_{i}}{X^{2}}q^{a}(x,z)\xi_{i}^{a}(z)-\frac{\pi^{2}}{2\alpha_{s}}\int_{z}\xi^{2}(z)\right)}\] \[=\int D\xi e^{\Delta\left(i\frac{\sqrt{\alpha_{s}}}{\pi}\int_{x,z}\frac{X_{i}}{X^{2}}q^{a}(x,z)\xi_{i}^{a}(z)-\frac{1}{2}\int_{z}\xi^{2}(z)\right)}\]
where in the last line we performed the rescaling of the variable \(\xi\). This reproduces Eq. (2.9).
## Appendix B Parent dipole prescription
The parent dipole prescription (note that it does not reproduce NLO JIMWLK if expanded to the relevant order) reads
\[K_{\text{dipole}}^{\text{PD}}(x,y,z)=\frac{\alpha_{s}((X-Y)^{2})}{2\pi^{2}} \frac{(X-Y)^{2}}{X^{2}Y^{2}}\] (B.1)
for the BK kernel. The corresponding rcJIMWLK kernel can be fully fixed by Eq. (3.4) and the condition that going to fixed coupling one reproduces LO JIMWLK kernel
\[\mathcal{K}^{\text{PD}}(x,y,z)=-\frac{1}{2}K_{\text{dipole}}^{\text{PD}}(x,y,z)+\frac{\alpha_{s}(X^{2})}{4\pi^{2}}\frac{1}{X^{2}}+\frac{\alpha_{s}(Y^{2})} {4\pi^{2}}\frac{1}{Y^{2}}\,.\] (B.2)
In order to check for the validity of the non-linear condition (3.10), let us focus on the configuration \(Y=cX\), with \(c\to 0\). Then, the BK kernel (B.1) reduces to
\[K_{\rm dipole}^{\rm PD}(X,Y)=\frac{\alpha_{s}((1-c)^{2}X^{2})}{2\pi^{2}}\frac{(1- c)^{2}}{c^{2}X^{2}}=\frac{\alpha_{s}(X^{2})}{2\pi^{2}}\frac{1}{c^{2}X^{2}}+O \left(\frac{1}{c}\right)\,,\] (B.3)
whereas for the rcJIMWLK kernel one has
\[{\cal K}^{\rm PD}(X,X) =\frac{\alpha_{s}(X^{2})}{2\pi^{2}}\frac{1}{X^{2}}\] (B.4) \[{\cal K}^{\rm PD}(cX,cX) =\frac{\alpha_{s}(c^{2}X^{2})}{2\pi^{2}}\frac{1}{c^{2}X^{2}}\,,\] (B.5)
so that
\[{\cal K}^{\rm PD}(X,cX)=\frac{1}{4\pi^{2}X^{2}}\frac{1}{c^{2}} \Big{[}\alpha_{s}(c^{2}X^{2})-\alpha_{s}(X^{2})\Big{]}+O\left(\frac{1}{c} \right)\,.\] (B.6)
Hence, for \(c\to 0\), one finds
\[{\cal K}^{\rm PD}(X,X){\cal K}^{\rm PD}(cX,cX)-\Big{(}{\cal K}^{ \rm PD}(X,cX)\Big{)}^{2}\] \[= -\frac{1}{16\pi^{4}X^{4}}\frac{1}{c^{4}}\Big{[}\alpha_{s}(c^{2}X ^{2})-\alpha_{s}(X^{2})\Big{]}^{2}+O\left(\frac{1}{c^{3}}\right)\leq 0\,.\] (B.7)
Hence, the condition (3.10) is violated, meaning that the rcJIMWLK kernel (B.2) corresponding to the parent dipole prescription is not positive semidefinite.
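This conclusion, too, can be illustrated numerically with the one-loop coupling (4.19); in the sketch below (illustrative assumptions: \(N_{c}=N_{f}=3\), distances in units of \(1/\Lambda\), \(|X|=10^{-2}\)), the \(2\times 2\) minor is negative once \(c\) is small enough for the asymptotics (B.7) to dominate:

```python
import numpy as np

beta0, gE = 9.0, 0.5772156649   # Nc = Nf = 3 (illustrative)

def alpha(r2):
    return 4.0 * np.pi / (beta0 * np.log(4.0 * np.exp(-2.0 * gE) / r2))

def K_diag(X2):
    return alpha(X2) / (2.0 * np.pi**2 * X2)   # Eqs. (B.4)-(B.5)

def K_PD(X2, c):
    # Eq. (B.2) at Y = cX, with the parent dipole BK kernel (B.1)
    D2 = (1.0 - c) ** 2 * X2
    K_dip = alpha(D2) / (2.0 * np.pi**2) * D2 / (X2 * c**2 * X2)
    return (-0.5 * K_dip + alpha(X2) / (4.0 * np.pi**2 * X2)
            + alpha(c**2 * X2) / (4.0 * np.pi**2 * c**2 * X2))

X2 = 1e-4   # |X| = 0.01 in units of 1/Lambda
for c in [0.01, 0.003, 0.001]:
    minor = K_diag(X2) * K_diag(c**2 * X2) - K_PD(X2, c) ** 2
    print(f"c = {c:6.3f}   minor = {minor:+.4e}")   # negative at small c
```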
|
2310.08177 | Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization | Evaluating the adversarial robustness of machine learning models using
gradient-based attacks is challenging. In this work, we show that
hyperparameter optimization can improve fast minimum-norm attacks by automating
the selection of the loss function, the optimizer and the step-size scheduler,
along with the corresponding hyperparameters. Our extensive evaluation
involving several robust models demonstrates the improved efficacy of fast
minimum-norm attacks when hyped up with hyperparameter optimization. We release
our open-source code at https://github.com/pralab/HO-FMN. | Giuseppe Floris, Raffaele Mura, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio | 2023-10-12T10:03:25Z | http://arxiv.org/abs/2310.08177v1 | # Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization
###### Abstract
Evaluating the adversarial robustness of machine-learning models using gradient-based attacks is challenging. In this work, we show that hyperparameter optimization can improve fast minimum-norm attacks by automating the selection of the loss function, the optimizer, and the step-size scheduler, along with the corresponding hyperparameters. Our extensive evaluation involving several robust models demonstrates the improved efficacy of fast minimum-norm attacks when hyped up with hyperparameter optimization. We release our open-source code at [https://github.com/pralab/HO-FMN](https://github.com/pralab/HO-FMN).
## 1 Introduction
Machine learning (ML) models are susceptible to adversarial attacks [1, 13], i.e., input samples carefully perturbed to mislead the model. To evaluate adversarial robustness, many different gradient-based attacks have been proposed, whose performance is significantly affected by the choice of the loss function to optimize, the optimization algorithm, and the step-size scheduler. From a practical perspective, attacks tend to be run with a "default" configuration and set of hyperparameters that are deemed to fit most of the cases. Yet, the attack effectiveness is highly case-dependent, implying that the choice of the configuration needs to be carefully tailored to the model rather than being a de-facto standard choice. In AutoAttack (AA) [4], the authors try to overcome this limitation by proposing an ensemble of parameter-free attacks, each including an internal auto-tuning process for each relevant hyperparameter. With Adaptive AutoAttack (AAA) [16], the evaluation is configured to either run the parameter-free AA, looking for a fast and reliable evaluation, or alternatively to perform an extensive search over a pool of attacks.
In this paper, we aim to use a smart and effective search for the best configuration that adapts the attack to the model. Hence, we propose a systematic framework for configuring the state-of-the-art, fast minimum-norm (FMN) attacks properly instead of running extensive searches on multiple attacks. To this end, we develop our framework by rethinking the choice of the loss function, optimizer, and step-size scheduler as attack hyperparameters and then using a unified hyperparameter optimization procedure.
## 2 FMN Attacks with Hyperparameter Optimization
We introduce here a modified FMN attack algorithm, referred to as HO-FMN, in which the loss function, the optimizer, and the step-size scheduler, along with their hyperparameters, are all exposed to be optimized. We then provide details on the hyperparameter optimizer considered in this work.
**FMN Attacks.** FMN [10] aims to find minimum-norm adversarial perturbations. The objective of the attack is to find, for a model with decision function \(f(\cdot)\), the smallest perturbation \(\mathbf{\delta}\) to add to an input sample \(\mathbf{x}\) with true label \(y\) so that \(f(\mathbf{x}+\mathbf{\delta})\neq y\). To this end, it takes a PGD-step to optimize the perturbation by minimizing a loss function (i.e., the \(\delta\)-step) within a given perturbation budget \(\epsilon\), then it adjusts \(\epsilon\) to iteratively reduce the perturbation norm (i.e., the \(\epsilon\)-step). In this work, we improve the \(\delta\)-step in the FMN algorithm while leaving the \(\epsilon\)-step unchanged (as the latter only optimizes a scalar value). In this regard, we consider different step-size schedulers (other than the _cosine annealing_[8] used in the baseline FMN) and more sophisticated optimizers (using different gradient update strategies other than SGD). The attack loss on which we optimize (logit loss [2]) is also put into question as we look for better candidates.
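For concreteness, the sketch below gives a minimal PyTorch-style rendition of this scheme (our illustration, not the reference implementation): the optimizer and step-size scheduler are injected as swappable components, the \(\delta\)-step minimizes the logit loss, and the \(\epsilon\)-step adjusts the per-sample budget depending on whether the sample is currently adversarial.

```python
import torch

def fmn_linf_sketch(model, x, y, steps=100, opt_cls=torch.optim.SGD, lr=1.0):
    """Illustrative FMN-style L-inf attack with swappable optimizer/scheduler."""
    n = x.shape[0]
    delta = torch.zeros_like(x, requires_grad=True)
    eps = torch.full((n,), 8 / 255, device=x.device)          # per-sample budget
    best = torch.full((n,), float("inf"), device=x.device)    # best norms found
    optimizer = opt_cls([delta], lr=lr)                       # e.g. SGD or Adam
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=steps)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x + delta)
        true = logits.gather(1, y[:, None]).squeeze(1)
        other = logits.scatter(1, y[:, None], float("-inf")).amax(dim=1)
        (true - other).sum().backward()        # delta-step: minimize the logit loss
        optimizer.step()
        scheduler.step()
        with torch.no_grad():
            is_adv = logits.argmax(dim=1) != y
            norms = delta.abs().flatten(1).amax(dim=1)
            best = torch.where(is_adv, torch.minimum(best, norms), best)
            # epsilon-step: shrink the budget where adversarial, grow it otherwise
            eps = torch.where(is_adv, 0.98 * torch.minimum(eps, norms), 1.02 * eps)
            # project delta back onto the per-sample L-inf ball of radius eps
            bound = eps.view(-1, *([1] * (x.dim() - 1)))
            delta.copy_(torch.maximum(torch.minimum(delta, bound), -bound))
    return best
```

A real implementation would also clip \(x+\delta\) to the valid input range and use FMN's exact budget-update rule; the sketch only illustrates how the loss, optimizer, and scheduler enter as interchangeable pieces.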
**HO-FMN.** We report in Algorithm 1 a revisited formulation of the FMN attack, in which the role of the loss function \(L\), optimizer \(u\), and scheduler \(h\) are better isolated. This novel formulation of the FMN attack enables us to generalize it by allowing a different selection of each component, treating each of them as a different hyperparameter or attack configuration. While the overall algorithm remains conceptually unchanged, we modify the attack loss \(L\), the optimizer \(u\), and the step-size scheduler \(h\) used in the \(\delta\)-step. These are the elements that change from the original attack implementation and are made optimizable in
our work. In practice, given a model, we exploit hyperparameter optimization to find the best combination of loss, optimizer, and scheduler along with their hyperparameter values (e.g., the initial step size, etc.).
## 3 Experiments
We describe below the experimental setup used in our work and then the results. **Hyperparameter Tuning.** As introduced in Sect. 2, we aim to improve FMN by optimizing the choice of: (i) the loss function, selecting between the logit loss (LL) and the cross-entropy loss (CE); (ii) the optimizer, selecting between SGD (with and without Nesterov acceleration) and Adam (with and without AmsGrad); and (iii) the step-size scheduler, selecting among Cosine Annealing (CALR), Cosine Annealing with Warm Restarts (CAWR), MultiStep (MSLR), and Reduced On Plateau (RLROP). We define, for optimizers and step-size schedulers, a search space made of the possible hyperparameter values and sampling options. In particular, for each optimizer, we tune the initial step size, the momentum, and the weight decay; for each scheduler, we tune the most important parameters, such as the milestones in MSLR, the iteration parameters in CALR and CAWR, and the factor in RLROP. The search space is then parsed in the context of the FMN hyperparameter optimization. The algorithms responsible for finding the best optimizer/scheduler configuration are the CFO search algorithm [15] and the ASHA scheduler [6]. We leverage the Ray Tune framework* for handling the hyperparameter optimization [7].
Footnote *: [https://docs.ray.io/en/latest/tune/index.html](https://docs.ray.io/en/latest/tune/index.html)
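A minimal sketch of how such a search can be wired up with Ray Tune follows (our illustration; `run_fmn_with` is a hypothetical evaluation function returning the median perturbation norm of the attack on the tuning split, and the exact reporting API varies slightly across Ray versions):

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler

search_space = {
    "optimizer": tune.choice(["sgd", "adam"]),
    "lr": tune.loguniform(1e-2, 10.0),
    "momentum": tune.uniform(0.0, 0.95),
    "weight_decay": tune.uniform(0.0, 0.1),
    "scheduler": tune.choice(["calr", "cawr", "mslr", "rlrop"]),
}

def objective(config):
    # run_fmn_with is hypothetical: it runs FMN with this configuration on the
    # 100-sample tuning set and returns the median L-inf perturbation norm
    tune.report(median_norm=run_fmn_with(config))

tuner = tune.Tuner(
    objective,
    param_space=search_space,
    tune_config=tune.TuneConfig(metric="median_norm", mode="min",
                                scheduler=ASHAScheduler(), num_samples=64),
)
print(tuner.fit().get_best_result().config)
```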
**Dataset.** We take a subset of 100 samples from the CIFAR10 test set for running our hyperparameter optimization, where, for every model, we evaluate every loss/optimizer/scheduler configuration. Upon finding a specific set of best hyperparameters, we use a separate set of 1000 samples (also taken from the CIFAR10 test set) to run the FMN attack on the models and discuss the results.
**Perturbation Model.** We restrict our analysis here to the \(\ell_{\infty}\)-norm attacks, as it is one of the most problematic cases for the baseline FMN algorithm.
**Models.** We consider 9 state-of-the-art robust models from RobustBench [3]: _M0_, the WideResNet-70-16 in [14]; _M1_, the WideResNet-28-1 in [14]; _M2_, the WideResNet-70-16 in [5]; _M3_, the WideResNet-106-16 in [11]; _M4_, the WideResNet-28-10 in [5]; _M5_, the WideResNet-70-16 in [9]; _M6_, the ResNet-152 in [12]; _M7_, the WideResNet-28-10 in [9]; and _M8_, the ResNet-18 in [5].
**Performance Metrics.** For HO-FMN, we select the configuration that achieves the smallest median \(||\mathbf{\delta}||_{\infty}\), following [10]. We then use it to evaluate the robust accuracy (RA) of the models. For minimum-norm attacks, RA can be evaluated over the entire range of perturbation sizes by imposing a threshold on the distances found, i.e., we can compute the full robustness-perturbation curve.
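Concretely, given the per-sample minimum-norm distances returned by the attack (with failed attacks encoded as infinity), the whole curve follows by thresholding, e.g.:

```python
import numpy as np

def robust_accuracy_curve(dists, eps_grid):
    """Fraction of samples whose minimal adversarial perturbation exceeds eps."""
    dists = np.asarray(dists, dtype=float)
    return np.array([(dists > eps).mean() for eps in eps_grid])

# toy example with four samples (np.inf encodes a failed attack)
dists = [0.02, 0.05, np.inf, 0.01]
eps_grid = np.linspace(0.0, 16.0 / 255.0, 257)
ra = robust_accuracy_curve(dists, eps_grid)
print(ra[np.searchsorted(eps_grid, 8.0 / 255.0)])   # robust accuracy at eps = 8/255
```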
**Experimental Results.** We evaluate the candidate configurations by running HO-FMN on the 9 selected models. We show the resulting robust accuracy values at \(\epsilon=8/255\) in Table 1. We highlight how the LL loss consistently finds
better perturbations than CE and that, in the case of the most unfortunate configuration, the models' robustness would have been considerably overestimated.
From the results in Table 1 we select the configurations Adam+RLROP+LL and SGD+CALR+LL, and run the evaluation on our test set of 1000 CIFAR10 samples with the discovered best hyperparameters.*
Footnote *: [https://pytorch.org/docs/stable/optim.html](https://pytorch.org/docs/stable/optim.html)
We report the hyperparameters found with the first setting, which we name HO-FMN (Adam), listed for each model as (learning_rate, weight_decay, factor, amsgrad), M0: (5.534, 0.025, 0.327, False); M1: (8.801, 0.043, 0.366, False); M2: (4.073, 0.019, 0.286, False); M3: (9.616, 0.024, 0.301, False); M4: (7.078, 0.019, 0.260, False); M5: (7.078, 0.019, 0.260, False); M6: (4.194, 0.020, 0.235, False); M7: (9.339, 0.023, 0.352, True); M8: (4.073, 0.019, 0.286, False). We leave the other parameters of Adam+RLROP fixed: eps=\(10^{-8}\), betas=(0.9, 0.999), patience=\(5\), threshold=\(10^{-5}\).
We list the hyperparameters for the second configuration, HO-FMN (SGD), as (learning_rate, weight_decay, momentum, dampening), M0: (4.453, 0.917, 0.010, 0.085); M1: (1.523, 0.880, 0.041, 0.037); M2: (1.222, 0.916, 0.010, 0.089); M3: (3.837, 0.924, 0.001, 0.114); M4: (1.013, 0.926, 0.014, 0.122); M5: (3.141, 0.936, 0.010, 0.136); M6: (1.222, 0.943, 0.010, 0.058); M7: (2.562, 0.911, 0.010, 0.071); M8: (2.124, 0.922, 0.010, 0.104), and we keep fixed T_max=100, eta_min=0, last_epoch=-1.
We show the resulting robust accuracy for increasing perturbation sizes in Figure 1, compared with the value reported in RobustBench (computed only for \(||\mathbf{\delta}||=8/255\)). We remark that the values found are comparable with AA. However, our attack is able to compute the full robustness-perturbation curve
\begin{table}
\begin{tabular}{l l l|c c c c c c c c c|c} \hline \hline
**Optim.** & **Sched.** & **Loss** & **M0** & **M1** & **M2** & **M3** & **M4** & **M5** & **M6** & **M7** & **M8** & **Mean** \\ \hline AA & & & 0.71 & 0.67 & 0.66 & 0.65 & 0.63 & 0.63 & 0.63 & 0.61 & 0.59 & 0.64 \\ \hline SGD & CALR & LL & **0.76** & 0.74 & 0.70 & 0.66 & 0.66 & 0.68 & 0.66 & 0.58 & 0.62 & 0.67 \\ \hline Adam & RLROP & LL & **0.76** & **0.72** & **0.68** & **0.64** & **0.58** & **0.62** & **0.56** & **0.56** & **0.60** & **0.64** \\ \hline Adam & CALR & LL & 0.78 & 0.74 & 0.68 & 0.64 & 0.62 & 0.66 & 0.64 & 0.58 & 0.62 & 0.66 \\ Adam & MSLR & LL & 0.78 & 0.74 & 0.70 & 0.66 & 0.62 & 0.66 & 0.64 & 0.58 & 0.62 & 0.67 \\ SGD & RLROP & LL & 0.76 & 0.74 & 0.70 & 0.64 & 0.64 & 0.68 & 0.64 & 0.58 & 0.62 & 0.67 \\ Adam & CAWR & LL & 0.78 & 0.74 & 0.72 & 0.66 & 0.64 & 0.66 & 0.58 & 0.62 & 0.67 \\ SGD & CALR & LL & 0.78 & 0.74 & 0.70 & 0.66 & 0.64 & 0.68 & 0.66 & 0.58 & 0.62 & 0.67 \\ SGD & CAWR & LL & 0.78 & 0.74 & 0.70 & 0.66 & 0.66 & 0.68 & 0.66 & 0.58 & 0.62 & 0.68 \\ SGD & MSLR & LL & 0.78 & 0.74 & 0.74 & 0.66 & 0.66 & 0.68 & 0.66 & 0.58 & 0.62 & 0.68 \\ \hline Adam & MSLR & CE & 0.90 & 0.90 & 0.86 & 0.76 & 0.86 & 0.70 & 0.86 & 0.88 & 0.83 \\ SGD & MSLR & CE & 0.96 & 0.94 & 0.88 & 0.84 & 0.82 & 0.92 & 0.80 & 0.82 & 0.80 & 0.86 \\ SGD & CALR & CE & 1.00 & 0.96 & 0.90 & 0.88 & 0.86 & 0.94 & 0.86 & 0.88 & 0.84 & 0.90 \\ SGD & CAWR & CE & 1.00 & 0.96 & 0.92 & 0.90 & 0.86 & 0.94 & 0.88 & 0.90 & 0.88 & 0.92 \\ Adam & CALR & CE & 1.00 & 1.00 & 0.94 & 0.96 & 0.90 & 0.96 & 0.92 & 0.94 & 0.94 & 0.95 \\ Adam & CAWR & CE & 1.00 & 1.00 & 0.96 & 0.96 & 0.94 & 0.96 & 0.92 & 0.94 & 0.94 & 0.96 \\ Adam & RLROP & CE & 1.00 & 1.00 & 0.96 & 0.96 & 0.92 & 0.98 & 0.92 & 0.94 & 0.94 & 0.96 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results for HO-FMN, compared with reference values for AA. We highlight in yellow HO-FMN (SGD) and in blue HO-FMN (Adam).
with one single optimization. Achieving the same result with AA is only possible by running AA multiple times with a range of different perturbation sizes, which would require a significant increase in computational complexity.
## 4 Conclusions and Future Work
In this work, we investigated the use of hyperparameter optimization to improve the performance of the FMN attack algorithm. Our findings highlight that hyperparameter optimization can improve FMN to reach competitive performance with AutoAttack while providing a more thorough adversarial robustness evaluation (i.e., computing the whole robustness-perturbation curve).
We argue that the same approach can be combined with other attacks, different threat models, or more configurations. In future work, we will extend our analysis beyond the \(\ell_{\infty}\)-norm FMN attack, considering \(\ell_{0}\), \(\ell_{1}\), and \(\ell_{2}\) norms.
We remark that adding more hyperparameters to tune would make the search space bigger, resulting in a longer optimization time. To this end, we will also try to develop sound heuristics to make hyperparameter tuning faster, designing faster exploration phases in the initial steps of the FMN optimization process.
Figure 1: Robust accuracy computed on M0-M8 with HO-FMN in the two versions Adam and SGD, and with AA at the fixed value of \(\epsilon\)=8/255.
## Acknowledgements
We would like to express our gratitude to Sarthak Gupta for his preliminary experiments that contributed to the realization of this project. This work has been carried out while G. Piras was enrolled in the Italian National Doctorate on AI run by the Sapienza University of Rome in collaboration with the University of Cagliari. It has been supported by the PRIN 2017 project RexLearn (grant no. 2017TWNMH2), funded by the Italian Ministry of Education, University and Research; by BMK, BMDW, and the Province of Upper Austria in the frame of the COMET Programme managed by FFG in the COMET Module S3AI; and by Fondazione di Sardegna under the project "TrustML: Towards Machine Learning that Humans Can Trust", with CUP: F73C22001320007.
|
2310.17057 | Post-main sequence thermal evolution of planetesimals | White dwarfs that have accreted planetary materials provide a powerful tool
to probe the interiors and formation of exoplanets. In particular, the high
Fe/Si ratio of some white dwarf pollutants suggests that they are fragments of
bodies that were heated enough to undergo large-scale melting and iron core
formation. In the solar system, this phenomenon is associated with bodies that
formed early and so had short-lived radionuclides to power their melting,
and/or grew large. However, if the planetary bodies accreted by white dwarfs
formed during the (pre)-main sequence lifetime of the host star, they will have
potentially been exposed to a second era of heating during the star's giant
branches. This work aims to quantify the effect of stellar irradiation during
the giant branches on planetary bodies by coupling stellar evolution to thermal
and orbital evolution of planetesimals. We find that large-scale melting,
sufficient to form an iron core, can be induced by stellar irradiation, but
only in close-in small bodies: planetesimals with radii $\lesssim$ 30 km
originally within $\sim$ 2 AU orbiting a 1$-$3$\,M_{\odot}$ host star with
solar metallicity. Most of the observed white dwarf pollutants are too massive
to be explained by the accretion of these small planetesimals that are melted
during the giant branches. Therefore, we conclude that those white dwarfs that
have accreted large masses of materials with enhanced or reduced Fe/Si remain
an indicator of planetesimal's differentiation shortly after formation,
potentially linked to radiogenic heating. | Yuqi Li, Amy Bonsor, Oliver Shorttle | 2023-10-25T23:36:02Z | http://arxiv.org/abs/2310.17057v1 | # Post-main sequence thermal evolution of planetesimals
###### Abstract
White dwarfs that have accreted planetary materials provide a powerful tool to probe the interiors and formation of exoplanets. In particular, the high Fe/Si ratio of some white dwarf pollutants suggests that they are fragments of bodies that were heated enough to undergo large-scale melting and iron core formation. In the solar system, this phenomenon is associated with bodies that formed early and so had short-lived radionuclides to power their melting, and/or grew large. However, if the planetary bodies accreted by white dwarfs formed during the (pre)-main sequence lifetime of the host star, they will have potentially been exposed to a second era of heating during the star's giant branches. This work aims to quantify the effect of stellar irradiation during the giant branches on planetary bodies by coupling stellar evolution to thermal and orbital evolution of planetesimals. We find that large-scale melting, sufficient to form an iron core, can be induced by stellar irradiation, but only in close-in small bodies: planetesimals with radii \(\lesssim 30\) km originally within \(\sim 2\) AU orbiting a 1-3 \(M_{\odot}\) host star with solar metallicity. Most of the observed white dwarf pollutants are too massive to be explained by the accretion of these small planetesimals that are melted during the giant branches. Therefore, we conclude that those white dwarfs that have accreted large masses of materials with enhanced or reduced Fe/Si remain an indicator of planetesimal's differentiation shortly after formation, potentially linked to radiogenic heating.
keywords: white dwarfs - planets and satellites: general - planets and satellites: dynamical evolution and stability - planets and satellites: interiors - planet-star interactions
## 1 Introduction
In an era of exoplanet detection, there is a growing interest in the interior dynamics and volatile content of exoplanets, two properties that are crucial for their habitability. However, our understanding of exoplanet interiors is limited by detection techniques, which only reveal bulk properties, and by poorly constrained interior modeling (Dorn et al., 2015; Wang et al., 2018, 2022). Fortunately, white dwarfs (WDs) that have accreted debris of tidally disrupted planetesimals (planetary building blocks) provide a potentially powerful tool to probe the interiors and formation processes of exoplanets.
White dwarfs, remnants of degenerate stellar cores, should originally preserve atmospheres predominantly composed of hydrogen/helium due to the rapid gravitational settling of heavier elements after radiative levitation becomes negligible (effective temperature \(\lesssim 20000\) K). Therefore, the metallic absorption features in the spectra of \(\sim 20\)%-\(50\)% of the WDs possibly originate from recent/ongoing accretions (Zuckerman et al., 2003, 2010; Koester et al., 2014; Wilson et al., 2019).
The observed WD pollutants reveal diverse compositions for planetary bodies (Putirka & Xu, 2021). A large fraction of pollutants resemble the bulk Earth, consistent with tidal disruption and accretion of thermally processed rocky planetesimals (Harrison et al., 2018; Doyle et al., 2019; Harrison et al., 2021a; Trierweiler et al., 2023). Exceptions with excess oxygen compared to that hosted in metal oxides indicate the likely accretion of water (Farihi et al., 2013; Hoskin et al., 2020). High Ca/Na and Ca/Mn relative to stellar abundances, corresponding to a depletion of moderately volatile elements, is typically assumed to originate from the formation processes of the planetary system, for instance, incomplete condensation and devolatilisation during secondary melting (Lodders, 2003; O'Neill & Palme, 2008; Pringle et al., 2014; Siebert et al., 2018; Harrison et al., 2021a). A significant dispersion in Fe/Si is usually interpreted as a natural consequence of asynchronously accreted core/mantle-rich fragments from bodies differentiated due to the decay of short-lived radioactive elements (e.g., \({}^{26}\)Al) (Jura & Young, 2014; Bonsor et al., 2020; Buchan et al., 2022; Curry et al., 2022; Brouwers et al., 2023).
In the solar system, differentiation of planetesimals: large-scale melting and formation of a global magma ocean, followed by gravitational segregation of iron from silicates and the formation of an iron core, is thought to be powered by the decay of short-lived radionuclides (e.g., \({}^{26}\)Al) and/or violent impacts (Keil, 2000; Chambers, 2004). This magma ocean phase, accompanied by further (moderately) volatile depletion distinct from that of incomplete condensation (Schaefer & Fegley, 2008; Vollstaedt et al., 2020), shapes the interiors of terrestrial planets.
However, there is no evidence that these early-stage thermal processes inside the solar system planetary bodies are common in exoplanetary systems. For materials accreted by white dwarfs, late-stage thermal processes induced by stellar evolution, for instance,
heating due to stellar irradiation and tidal dissipation, may also result in (moderately) volatile depletion (Jura & Xu, 2010; Malamud & Perets, 2016, 2017a, 2017b), large-scale melting and iron core formation, thereby mimicking their early-stage counterparts taking place around planetary formation. To probe the formation stages of a planetary system via WD pollutants, it is thus essential to distinguish between the early-stage and late-stage thermal processes.
This paper models the thermal evolution of planetesimals induced by stellar irradiation and quantifies in what locations and under what conditions planetesimals are sufficiently heated to undergo differentiation (large-scale melting and formation of an iron core) during the giant branches of the host star. As a result, we comment on the possibility that a high level of Fe/Si in the atmosphere of a white dwarf originates from planetary bodies differentiated under stellar irradiation instead of radiogenic heating/impacts.
### Journey of planetesimals from formation to accretion onto the white dwarf
The thermal and orbital evolution of planetesimals accreted by the white dwarf is coupled to stellar evolution (Figure 1). After the formation of the planetary system, the remaining planetesimals experience a long period of steady heating from the main-sequence host star, with their interior temperatures approaching constant values. Afterwards, the interior temperature of planetesimals rises significantly when the host star climbs towards the tip of the red giant branch (RGB, shell hydrogen fusion). Low-mass stars go through a helium flash (runaway nuclear fusion) in their degenerate core, while intermediate-mass stars can ignite core helium burning quietly, with the former reaching a much higher tip-RGB luminosity and radius. Then, the stellar luminosity drops, until rising again near the end of the core helium burning (CHB) phase, before entering the asymptotic giant branch (AGB, shell
Figure 1: Thermal history of white dwarf pollutants: formation from protoplanetary disks, main sequence and post-main sequence life of the star, scattering into the Roche limit, circumstellar disk formation, accretion. The color bars show the general luminosity and radius variations of the host star, and the thermally pulsing asymptotic giant branch stellar wind density.
helium fusion), when the (average) luminosity keeps rising. Towards the end of the AGB, these stars undergo thermal pulsations (TPAGB, cyclical thin-shell fusion), accompanied by a rapid increase in average luminosity (Hansen & Kawaler, 1994; Cristallo et al., 2015). At this time, planetesimals experience a short period of intense heating, predominantly affecting a depth equal to the corresponding diffusion length scale. Close-in bodies also experience stronger drag forces and/or tidal decay when interacting with the expanding circumstellar/stellar envelope, spiralling inwards and potentially being disrupted and engulfed by the host star (Maloney & Gallagher, 2010; Mustill & Villaver, 2012; Villaver et al., 2014; Jia & Spruit, 2018).
When the host star enters its WD phase after losing a significant fraction of its mass, gravitational perturbations from the planetary system become stronger. Planetesimals may be scattered onto highly eccentric orbits, which in turn decay under tidal interactions, with the orbital energy dissipating as heat inside these planetesimals. Finally, once scattered into the Roche limit, planetesimals become tidally disrupted, forming a circumstellar disk, and are circularized and accreted onto the WD via a combination of the Poynting-Robertson (PR) effect, gas drag, and the Yarkovsky effect (e.g., Rafikov, 2011, and subsequent works).
### Paper layout
This paper is organized as follows. In Section 2 we summarize the methodology: a synthesis of stellar (2.1), orbital (2.2) and thermal evolution (2.3) codes used to obtain the temperature profile evolution of planetesimals, from which we quantify the degree of differentiation (2.4) in a given system. In Section 3, we present theoretical stellar evolutionary tracks (3.1), the resultant temperature profile evolution of sample planetesimals (3.2), and the orbital evolution of planetesimals entering the stellar envelope (3.3). We also present the results of a parameter space study in Section 3.4: the maximum central temperature (3.4.1) and the degree of differentiation (3.4.2) throughout the thermal history of individual planetesimals with various sizes and orbits. Then, we quantify the mass fraction of planetesimals in a given population that are differentiated as a consequence of stellar evolution up to the asymptotic giant branch in Section 3.4.3. In Section 4, we discuss the limitations and their possible effects on the results (4.1), and the implications (4.2) for observations of white dwarf pollutants. In Section 5, we summarize this study.
## 2 Method
Figure 2 summarises our method. The thermal evolution of planetesimals crucially depends on their sizes (\(R_{p}\)) and the stellar irradiation. The strength of this irradiation is quantified by the equilibrium temperature at the surface of the planetesimal, a function of stellar luminosity and the distance of the body from the host star. First, we model the evolutionary track of the host star utilizing MIST (Section 2.1), from which we deduce the orbital evolution (\(r(t)\)) and survival of a planetesimal for a given initial position (2.2). Second, based on \(r(t)\) and the stellar luminosity variation (\(L_{*}(t)\)) from the stellar evolutionary track, we compute the equilibrium surface temperature of the planetesimal (\(T(x=\pm R_{p},t)\)), serving as the boundary condition of the thermal evolution code (2.3.1). Third, we solve the time-dependent temperature profile (\(T(x,t)\)) inside the planetesimal numerically (2.3). Finally, by choosing a critical temperature \(T_{crit}\) above which any region of a planetesimal is assumed to melt, we quantify the degree of stellar-induced differentiation in planetesimal size (\(R_{p}\))-initial position (\(a_{0}\)) space (3.4).
### Stellar evolution
Evolution of the host star plays an important role in the thermal processing and survival of close-in rocky bodies. Stellar luminosity determines the irradiation on planetesimals, whilst stellar mass loss triggers orbital expansion of planetesimals, potentially weakening the irradiation. Additionally, the radius of the star is the key to the survival of planetesimals, as close-in bodies may be engulfed by the stellar envelope.
The MIST set of evolutionary tracks (MESA Isochrones and Stellar Tracks; Paxton et al., 2011, 2013, 2015; Dotter, 2016; Choi et al., 2016) is used to follow the stellar parameters for a range of initial conditions: initial masses of 1, 2 and 3\(\,M_{\odot}\) with an initial metallicity of 0.014. MIST interpolates a grid of single stellar evolutionary tracks from the one-dimensional MESA (Modules for Experiments in Stellar Astrophysics) code (Jermyn et al., 2023), without the need to specify overshooting parameters or to fine-tune resolutions in order to resolve the thermal pulsations of the thermally pulsing AGB (TPAGB) star. MESA solves the stellar evolutionary equations using an implicit Newton-Raphson method, additionally accounting for time-dependent convection in non-steady states. MESA includes a variety of equations of state, e.g., Skye for fully ionized matter (Jermyn et al., 2021) and FreeEOS for partially ionized matter (Irwin, 2012). In terms of opacity, MESA accounts for molecular opacity in low-temperature regions (e.g., close to the surface of a TPAGB star), Compton opacity at extremely high temperatures, and conductive opacity in degenerate regions. As a result, MESA is able to model the complex post-main sequence evolution of stars, including thermal pulsations driven by nuclear burning in a thin shell.
### Orbital evolution
The orbital evolution of a planetesimal, together with the stellar luminosity, determines the strength of irradiation on the planetesimal. As the star loses mass, planetary orbits expand. This orbital expansion conserves the specific angular momentum of the planetesimal and reduces the stellar irradiation received (2.2.1). With the expansion of stellar radius, close-in planetesimals approach and/or enter the stellar envelope. Survival and orbital evolution of these planetesimals depend on their interactions with the environment, via friction, ablation and disruption (2.2.2). We neglect orbital tidal decay (Appendix B2, Veras & Fuller, 2019; Mustill & Villaver, 2012) since it is usually negligible for planetesimal-sized bodies.
#### 2.2.1 Stellar mass loss
The gravitational force on a close-in planetesimal is usually dominated by the host star during the majority of its (post) main sequence life. The direction of gravitational attraction is parallel to the planetesimal-star separation vector. As a result, there is no net torque on the planetesimal (\(\bar{r}\times\bar{F}=0\)), and its specific angular momentum \(J\) is conserved:
\[J=\sqrt{GM_{*,0}a_{0}(1-e_{0}^{2})}=\sqrt{GM_{*}(t)a(t)(1-e(t)^{2})}, \tag{1}\]
where \(a_{0}\), \(e_{0}\) and \(M_{*,0}\) are the initial orbital semi-major axis, eccentricity and stellar mass, respectively, and \(a(t)\), \(e(t)\) and \(M_{*}(t)\) are the corresponding values at a later time \(t\).
For close-in rocky bodies, the mass loss timescale far exceeds the orbital timescale of planetesimals. In this case, mass loss is adiabatic (Appendix A), without inducing any eccentricity variation (Veras et al., 2011). Consequently, conservation of \(J\) can be simplified to:
\[a(t)=a_{0}\frac{M_{*,0}}{M_{*}(t)}, \tag{2}\] \[\dot{a}=-\frac{\dot{M}_{*}}{M_{*}}a.\]
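To make the effect of Equation 2 concrete, the following minimal Python sketch propagates a planetesimal's semi-major axis along a stellar mass-loss history. The smooth drop from 2 to 0.6 \(M_{\odot}\) used here is a made-up stand-in for a real MIST track; only Equation 2 itself is taken from the text.

```python
import numpy as np

def semi_major_axis(m_star, a0):
    """Adiabatic orbital expansion under stellar mass loss (Equation 2)."""
    return a0 * m_star[0] / m_star

# made-up smooth mass-loss history: 2 Msun star ending as a 0.6 Msun WD,
# with most of the mass shed near the end of the TPAGB (here ~1.15 Gyr)
t = np.linspace(0.0, 1.2e9, 1000)                         # yr
m_star = 2.0 - 1.4 / (1.0 + np.exp(-(t - 1.15e9) / 5e6))  # Msun
a = semi_major_axis(m_star, a0=1.5)                       # AU

print(f"final orbit: {a[-1]:.2f} AU (expansion factor {a[-1] / 1.5:.2f})")
```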
#### 2.2.2 Expanding envelope
During the thermally pulsing asymptotic giant branch (TPAGB), the stellar envelope of a low/intermediate-mass star expands to several AU. Meanwhile, drastic mass loss of the TPAGB star leads to the formation of a circumstellar envelope. As a result, close-in planetesimals directly interact with stellar material, spiralling inwards and losing mass.
First, environmental gas/dust exerts ram pressure on the moving planetesimal. If the ram pressure exceeds the binding energy of the body per unit volume, disruption (break-up of the planetesimal) occurs. The disruption threshold can be approximated as (Jia & Spruit, 2018):
\[\rho_{ext}|\bar{v}|^{2}\sim\frac{GM_{p}\rho_{P}}{R_{p}}, \tag{3}\]
where \(\bar{v}\) is the velocity of the planetesimal relative to the environmental materials, and \(\rho_{ext}\) is the density of the environment (stellar/circumstellar envelope):
\[\rho_{ext}(r)=\begin{cases}\rho_{*}(r)&0<r\leq R_{*}\\ -\frac{\dot{M}_{*}}{4\pi r^{2}v_{sw}}&r>R_{*}\end{cases}. \tag{4}\]
where \(v_{sw}\) is the radial speed of the isotropic stellar wind driven by radiation pressure, \(v_{sw}\sim-\frac{L_{*}}{c\dot{M}_{*}}\), and \(c\) is the speed of light. The left-hand side of Equation 3 represents the ram pressure (\(P_{ram}\)) exerted on the planetesimal and the right-hand side is the gravitational binding energy (\(E_{b}\)) of the planetesimal per unit volume. For primitive (uniform composition) bodies with \(M_{p}\propto R_{p}^{3}\), as \(P_{ram}\) is independent of \(R_{p}\) and \(E_{b}\) scales as \(R_{p}^{2}\), disruption of the planetesimal is possibly catastrophic.
Second, motion relative to environmental gas/dust leads to friction and loss of the planetesimal's angular momentum, causing inward spiralling (Villaver & Livio, 2009). The orbital speed of a planetesimal that may enter the stellar envelope during pulsations (\(\sim 10\,\mathrm{km/s}\)) far exceeds the planetesimal's escape velocity (\(\sim 100\,\mathrm{m/s}\)), indicating that the drag acting on the planetesimal is always in the hydrodynamic regime, and that gravitational focusing and accretion of environmental gas/dust onto the planetesimal are negligible. The hydrodynamic drag force can be expressed as (Staff et al., 2016; Jia & Spruit, 2018; MacLeod et al., 2018; O'Connor et al., 2023):
\[\bar{f}=-\frac{1}{2}C_{d}\rho_{ext}\pi R_{p}\,^{2}\bar{v}|\bar{v}|, \tag{5}\]
where \(C_{d}\) is the drag coefficient of the planetesimal, which is a function of Mach number and Reynolds number, and approaches a constant for large relative speed \(v\) (appropriate for our case) (O'Connor et al., 2023). We choose \(C_{d}=1\) and 0.5 to investigate the qualitative behaviour of the planetesimal's inward spiraling.
We track the orbital evolution of the planetesimal throughout the TPAGB by solving the equation of motion numerically, with the additional friction force term, in polar coordinates \((r,\phi)\), and consider the body lost once its disruption threshold (Equation 3) is met:
\[(\ddot{r}-r\dot{\phi}^{2})\hat{r}+(2\dot{r}\dot{\phi}+r\ddot{\phi})\hat{\phi}=-\frac{GM_{*}(<r)}{r^{2}}\hat{r}+\frac{\bar{f}}{M_{p}}. \tag{6}\]
Assuming that the motion of the envelope material during thermal pulsations is dominated by its radial component (\(\bar{v}=(\dot{r}-v_{ext})\hat{r}+r\dot{\phi}\,\hat{\phi}\)), the equation of motion decouples into radial and tangential components:
\[\begin{split}&\ddot{r}-r\dot{\phi}^{2}=\\ &-\frac{GM_{*}(<r)}{r^{2}}-\frac{C_{d}\pi R_{p}^{2}\rho_{ext}}{2M_{p}}(\dot{r}-v_{ext})\sqrt{(\dot{r}-v_{ext})^{2}+r^{2}\dot{\phi}^{2}},\\ &2\dot{r}\dot{\phi}+r\ddot{\phi}=-\frac{C_{d}\pi R_{p}^{2}\rho_{ext}}{2M_{p}}r\dot{\phi}\sqrt{(\dot{r}-v_{ext})^{2}+r^{2}\dot{\phi}^{2}}.\end{split} \tag{7}\]
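A minimal numerical sketch of Equation 7 is given below, integrating the polar-coordinate equation of motion with `scipy.integrate.solve_ivp`. The constant envelope density, the static envelope (\(v_{ext}=0\)) and the constant enclosed stellar mass are illustrative placeholders, not values from our simulations.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, MSUN, AU = 6.674e-11, 1.989e30, 1.496e11   # SI units

M_star = 2.0 * MSUN                           # enclosed stellar mass, constant here
R_p, rho_p = 1.0e5, 3411.6                    # 100 km body, Table 1 density
M_p = 4.0 / 3.0 * np.pi * R_p**3 * rho_p
C_d, rho_ext, v_ext = 1.0, 1e-6, 0.0          # toy static envelope, kg/m^3

def rhs(t, y):
    """y = [r, rdot, phi, phidot]; polar-coordinate EOM of Equation 7."""
    r, rdot, phi, phidot = y
    vrel = np.hypot(rdot - v_ext, r * phidot)
    drag = C_d * np.pi * R_p**2 * rho_ext / (2 * M_p)
    rddot = r * phidot**2 - G * M_star / r**2 - drag * (rdot - v_ext) * vrel
    phiddot = (-2 * rdot * phidot - drag * r * phidot * vrel) / r
    return [rdot, rddot, phidot, phiddot]

r0 = 1.3 * AU
sol = solve_ivp(rhs, [0.0, 50 * 3.156e7],     # integrate for 50 yr
                [r0, 0.0, 0.0, np.sqrt(G * M_star / r0) / r0], rtol=1e-8)
print(f"orbit after 50 yr: {sol.y[0, -1] / AU:.3f} AU")
```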
We additionally include the mass loss/size shrinking of the planetesimal in collisions with gases/dust (non-thermal ablation), which can be approximated as (Jia & Spruit, 2018):
Figure 2: Key steps of the methodology: a synthesis of stellar and planetesimal orbital evolution giving the stellar luminosity (\(L_{*}(t)\)) and the planetesimal's position (\(r(t)\)), which together provide the stellar irradiation, whose strength is quantified as the planetesimal's surface temperature (\(T(x=\pm R_{P},t)\)). The thermal evolution of the planetesimal is then simulated to predict its internal temperature evolution (\(T(x,t)\)), which reveals the fraction of the body that undergoes large-scale melting and differentiation. The rounded rectangles and rectangles represent numerical simulations and outputs, respectively. \(x\) is the radial coordinate inside the planetesimal and \(t\) is the time coordinate.
\[\begin{split}\dot{M}_{P}&\sim-2R_{P}^{2}\rho_{ext}^{\frac{2}{3}}\rho_{p}^{\frac{1}{3}}v,\\ \dot{R}_{P}&=\frac{\dot{M}_{P}}{4\pi R_{P}^{2}\rho_{P}}.\end{split} \tag{8}\]
We neglect thermal ablation (sublimation) as each thermal pulse only lasts for \(\sim 100\,\mathrm{yr}\), corresponding to a diffusion length scale of \(\lesssim 100\,\mathrm{m}\).
### Thermal evolution
The key to assessing the regions where a thermal process occurs is the interior temperature profile of the planetesimal. This is tracked during the evolution of the star by the radial thermal diffusion equation (Narasimhan, 1999; Jura & Xu, 2010), where we assume that spherical symmetry is preserved. When the temperature at a nodal point reaches the critical point, \(T_{crit}\), the corresponding thermal process is assumed to occur instantaneously (see Appendix C for the differentiation timescale). We make the simplification that the thermal and physical properties of the planetesimal remain constant. We further assume that heat is transported only via conduction, since convection in 1-D is suppressed by the thermal inversion arising from strong irradiation at the surface.
Based on these assumptions, the radial thermal diffusion equation can be expressed as:
\[\frac{\partial T}{\partial t}=\frac{1}{x^{2}}\frac{\partial}{\partial x} \left(\alpha_{d}x^{2}\frac{\partial T}{\partial x}\right), \tag{9}\]
where \(x\) is the radial coordinate and the thermal diffusivity \(\alpha_{d}=\frac{\kappa}{\rho c_{P}}\) is a function of the thermal conductivity \(\kappa\), bulk density \(\rho\) and specific heat capacity \(c_{P}\) of the body. We choose thermal/physical properties similar to those of Ricard et al. 2009 and Lichtenberg et al. 2021 (Table 1). For planetesimals of identical radius, the dominating factor distinguishing their thermal evolution is \(\alpha_{d}\). The sensitivity of our results to \(\alpha_{d}\) is discussed in Section 4.1.3.
We solve Equation 9 numerically using an implicit finite-difference method to ensure the stability and accuracy of the long-timescale numerical integration (Gerya, 2019):
\[\begin{split}&\frac{T_{i}^{n+1}-T_{i}^{n}}{\Delta t}=\frac{\alpha_{d}}{x_{i}^{2}}\left(x_{i}^{2}\frac{T_{i+1}^{n+1}-2T_{i}^{n+1}+T_{i-1}^{n+1}}{\Delta x^{2}}+2x_{i}\frac{T_{i+1}^{n+1}-T_{i-1}^{n+1}}{2\Delta x}\right),\\ &T_{i}^{n}=-\frac{\alpha_{d}\Delta t}{x_{i}^{2}}\left(\frac{x_{i}^{2}}{\Delta x^{2}}-\frac{x_{i}}{\Delta x}\right)T_{i-1}^{n+1}+\left(\frac{2\alpha_{d}\Delta t}{\Delta x^{2}}+1\right)T_{i}^{n+1}\\ &-\frac{\alpha_{d}\Delta t}{x_{i}^{2}}\left(\frac{x_{i}^{2}}{\Delta x^{2}}+\frac{x_{i}}{\Delta x}\right)T_{i+1}^{n+1},\end{split} \tag{10}\]
where \(i\) represents the number of radial steps and \(n\) represents the number of time steps. We assign \(10^{4}\) nodal points to a planetesimal's interior, and \(10^{5}\) time steps to a planetesimal's thermal evolution during each stellar evolutionary phase (main sequence, red giant branch, core helium burning and asymptotic giant branch).
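For illustration, a compact Python implementation of one backward-Euler step of Equation 10, assembling the tridiagonal system and solving it with `scipy.linalg.solve_banded`, is sketched below. The grid resolution, the fixed 1500 K surface temperature and the 30 Myr duration are toy choices for demonstration; the boundary conditions (symmetry at the centre, Dirichlet surface temperature) follow Section 2.3.1.

```python
import numpy as np
from scipy.linalg import solve_banded

def step_implicit(T, Ts, alpha_d, dx, dt):
    """One backward-Euler step of Equation 10 on the radial grid x_i = i*dx."""
    N = T.size
    x = np.arange(N) * dx
    s = alpha_d * dt / dx**2
    ab = np.zeros((3, N))            # banded matrix (super-, main, sub-diagonal)
    rhs = T.copy()

    i = np.arange(1, N - 1)               # interior nodes, coefficients of Eq. 10
    ab[0, i + 1] = -s * (1 + dx / x[i])   # multiplies T_{i+1}^{n+1}
    ab[1, i] = 1 + 2 * s                  # multiplies T_i^{n+1}
    ab[2, i - 1] = -s * (1 - dx / x[i])   # multiplies T_{i-1}^{n+1}

    ab[1, 0], ab[0, 1], rhs[0] = 1.0, -1.0, 0.0   # centre: dT/dx = 0
    ab[1, -1], ab[2, -2], rhs[-1] = 1.0, 0.0, Ts  # surface: T = Ts (Eq. 11)
    return solve_banded((1, 1), ab, rhs)

# toy run: 30 km body, surface held at 1500 K for 30 Myr
alpha_d, R, N = 8.8e-7, 3.0e4, 300
T = np.full(N, 150.0)                 # uniform initial temperature (K)
dx, dt = R / (N - 1), 30 * 3.156e13 / 3000
for _ in range(3000):
    T = step_implicit(T, 1500.0, alpha_d, dx, dt)
print(f"central temperature after 30 Myr: {T[0]:.0f} K")
```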
In practice, thermal properties are pressure- and temperature-dependent. Furthermore, heating in the body can be asymmetric, leading to convection in 3-D. These factors may be important in the thermal processing of a planetesimal, and we discuss the corresponding consequences further in Section 4.1.3.
#### 2.3.1 Surface equilibrium temperature
The boundary condition of Equation 9 is the surface equilibrium temperature of the planetesimal, a quantification of the strength of irradiation. The surface temperature acts as the driving force of heat penetration into the interior of the planetesimal. We simplify the problem by assuming circular orbits for all planetesimals, but will discuss the effect of eccentricity by introducing the time-averaged equilibrium temperature over one orbital period in Section 4.1.4 (Méndez & Rivera-Valentín, 2017). The surface equilibrium temperature of a planetesimal on a circular orbit has the form:
\[T_{eq}(r)=\left(\frac{L_{*}(t)(1-A_{B})}{16\pi\epsilon\sigma\beta_{r}r(t)^{2}}\right)^{\frac{1}{4}}, \tag{11}\]
where \(r\) is the distance of the planetesimal from the host star, equivalent to the semi-major axis \(a\) for circular orbits, \(L_{*}\) is the stellar luminosity, \(A_{B}\) is the Bond albedo of the planetesimal, \(\epsilon\) is the infrared emissivity (\(\epsilon\approx 1\)), \(\sigma\) is the Stefan-Boltzmann constant, and \(\beta_{r}\) is the fraction of re-radiation (\(\beta_{r}\) ranges from 0.5 for a tidally locked body to 1 for a fast rotator). For simplicity, we choose \(\beta_{r}=1\) to preserve spherical symmetry.
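Equation 11 is straightforward to evaluate; the short sketch below (with \(A_{B}=0\) and \(\epsilon=\beta_{r}=1\) as illustrative defaults) reproduces the expected orders of magnitude: a few hundred K at 1.5 AU on the main sequence, rising above the silicate solidus for tip-giant-branch luminosities of order \(10^{3}\,L_{\odot}\).

```python
import numpy as np

SIGMA, LSUN, AU = 5.670e-8, 3.828e26, 1.496e11   # SI units

def t_eq(L_star, r, A_B=0.0, eps=1.0, beta_r=1.0):
    """Surface equilibrium temperature of Equation 11.
    L_star in Lsun, r in AU; albedo/emissivity defaults are illustrative."""
    return (L_star * LSUN * (1 - A_B)
            / (16 * np.pi * eps * SIGMA * beta_r * (r * AU) ** 2)) ** 0.25

# 1.5 AU: main sequence (1 Lsun) vs. a tip-giant-branch-like 2000 Lsun
print(f"{t_eq(1.0, 1.5):.0f} K, {t_eq(2000.0, 1.5):.0f} K")
```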
### Differentiation fraction
Based on the temperature profile evolution, we quantify the degree of differentiation in each planetesimal by the fraction of its volume (\(f_{V}\)) that could have melted (had its temperature rise above \(T_{crit}\)) during the giant branches. We then repeat this process for a population of bodies with different sizes and initial semi-major axes. Noting that planetesimals are not uniformly distributed in planetesimal size (\(R_{P}\))-initial semi-major axis (\(a_{0}\)) space, we further introduce the distribution of planetesimals in \(R_{P}\)-\(a_{0}\) space, in order to quantify the degree of differentiation of the planetesimal population.
We consider that the white dwarfs accrete rocky planetesimals that started with semi-major axes between \(a_{crit}\) (the critical orbit within which planetesimals end up engulfed by the host star) and \(10\,\mathrm{AU}\), with the total mass of planetesimals lying between \(a_{0}\) and \(a_{0}+da_{0}\) given by \(\frac{\partial M_{\mathrm{total}}}{\partial a_{0}}\propto a_{0}^{-\beta}\), where \(\beta\) is chosen to be \(\frac{1}{2}\) based on the Minimum Mass Solar Nebula model (MMSN) (Hayashi, 1981; Crida, 2009).
We further assume that observable white dwarf pollutants mainly come from rocky planetesimals with radii between 10 and 100 km, with the number of planetesimals between \(R_{P}\) and \(R_{P}+dR_{P}\) given by \(\frac{\partial N}{\partial R_{P}}\propto R_{P}^{-\alpha}\), where \(\alpha\) is chosen to be 4 based on the distribution of minor objects in the Solar system (Ivezic et al., 2001; Schlichting et al., 2013). The size distribution can be transformed to \(\frac{\partial M_{\mathrm{total}}}{\partial R_{P}}\propto\frac{\partial N}{\partial R_{P}}M_{P}(R_{P})\propto R_{P}^{3-\alpha}\). We combine the spatial and size distributions of planetesimals and express the joint distribution as:
\[\frac{\partial^{2}M_{total}}{\partial R_{P}\partial a_{0}}\propto R_{P}^{3- \alpha}a_{0}^{-\beta}. \tag{12}\]
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(\kappa\left(\mathrm{W}/(\mathrm{m}\cdot\mathrm{K})\right)\) & \(\rho\left(\mathrm{kg}/\mathrm{m}^{3}\right)\) & \(c_{P}\left(\mathrm{J}/(\mathrm{K}\cdot\mathrm{kg})\right)\) & \(\alpha_{d}\left(\mathrm{m}^{2}/\mathrm{s}\right)\) \\ \hline
3 & 3411.6 & 1000 & \(8.8\times 10^{-7}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Constant thermal properties of sample planetesimals.
The choice of power-law indices above may not apply generally to other systems. We discuss the effect of these indices in Section 4.1.4.
To assess the degree of differentiation in a system based on the joint distribution of planetesimals, we consider two scenarios. In the first, \(f_{Mm}\), we consider the ratio of the total mass of material whose temperature exceeds \(T_{crit}\) to the total mass of planetesimals accreted onto the white dwarf, regardless of whether planetesimals fully differentiate:
\[f_{Mm}=\frac{\int\int f_{V}(R_{p},a_{0})R_{p}^{3-\alpha}a_{0}^{-\beta}dR_{p}da_ {0}}{\int\int R_{p}^{3-\alpha}a_{0}^{-\beta}dR_{p}da_{0}}, \tag{13}\]
where \(f_{V}(R_{p},a_{0})\) is the volume fraction of differentiation for a planetesimal of radius \(R_{p}\) with initial semi-major axis \(a_{0}\).
In the second, \(f_{Mp}\), only those planetesimals that melt \(>95\%\) of their volume (defined as fully differentiated/iron core formation) are considered, under the assumption that the processes leading to the identification of core or mantle-rich material in white dwarf atmospheres require the formation of an iron core in the parent body:
\[f_{Mp}=\frac{\int\int F_{V}[f_{V}(R_{p},a_{0})]R_{p}^{3-\alpha}a_{0}^{-\beta} dR_{p}da_{0}}{\int\int R_{p}^{3-\alpha}a_{0}^{-\beta}dR_{p}da_{0}}, \tag{14}\]
where \(F_{V}[f_{V}]\) is of the form:
\[F_{V}[f_{V}]=\begin{cases}1&f_{V}\geq y\%\\ 0&f_{V}<y\%\end{cases}. \tag{15}\]
The integration limits applied to Equations 13 and 14 are summarized in Table 2.
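The evaluation of Equations 13 and 14 reduces to a weighted average over an \(R_{p}\)-\(a_{0}\) grid. The sketch below uses a deliberately artificial, linear \(f_{V}(R_{p},a_{0})\) as a stand-in for the simulated melt-fraction maps of Section 3.4.2; only the weighting of Equation 12 and the integration limits of Table 2 follow the text.

```python
import numpy as np

alpha, beta = 4.0, 0.5
a0 = np.linspace(1.0, 10.0, 200)     # AU; the lower limit stands in for a_crit
Rp = np.linspace(10.0, 100.0, 200)   # km
A0, RP = np.meshgrid(a0, Rp)

# artificial linear melt-fraction map, a stand-in for the simulated
# f_V(R_p, a_0) of Section 3.4.2 (melting favours small, close-in bodies)
fV = np.clip(1.5 - A0 / 2.0 - RP / 100.0, 0.0, 1.0)

w = RP ** (3 - alpha) * A0 ** (-beta)         # joint distribution, Equation 12
f_Mm = np.sum(fV * w) / np.sum(w)             # Equation 13
f_Mp = np.sum((fV >= 0.95) * w) / np.sum(w)   # Equation 14 with y = 95
print(f"f_Mm = {f_Mm:.3f}, f_Mp = {f_Mp:.3f}")
```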
## 3 Results
### Stellar evolutionary tracks
In this section we summarise and compare the MIST evolutionary tracks (Section 2.1) of two samples, I: a \(1\,M_{\odot}\) low-mass star and II: a \(2\,M_{\odot}\) intermediate-mass star (Figure 3), both of which go through the AGB and end as C-O white dwarfs, while following distinct evolutionary tracks near their RGB tips.
Figure 3 and Table 3 illustrate that sample I is more enhanced in size and luminosity at its RGB tip (approaching half of its tip-AGB values) compared to sample II, whose tip-RGB luminosity (radius) is around 1% of its tip-AGB value. This occurs because the contracted cores of low-mass (\(\sim 0.8\)-\(2\,M_{\odot}\)) stars, unlike those of their intermediate-mass (\(\sim 2\)-\(8\,M_{\odot}\)) counterparts, are degenerate, so helium burning starts with a helium flash (runaway fusion in the contracted degenerate core) until thermal pressure dominates, boosting their sizes and luminosities significantly at the RGB tips.
Mass loss patterns for both samples are similar: rapid mass loss near the end of the TPAGB. In this case, Equation 2 indicates that the outward migration of planetesimals mainly occurs at the end of the thermal pulsations, suppressing the growth in planetesimal surface temperature (\(T_{s}\), Equation 11) with stellar luminosity (\(L_{*}\)). The \(T_{s}\) of a planetesimal initially orbiting sample II at \(1.5\,\)AU shows a clear decreasing trend near the end of the TPAGB, despite the generally increasing \(L_{*}\) (Figure 3, lower panel). For the same reason, planetesimals are much more likely to escape the envelope via stellar mass loss during late pulsations.
\begin{table}
\begin{tabular}{c c c c} \hline \(a_{0,min}\) & \(a_{0,max}\) & \(R_{p,min}\) & \(R_{p,max}\) \\ \hline \(a_{crit}\) & \(10\,\)AU & \(10\,\)km & \(100\,\)km \\ \hline \end{tabular}
\end{table}
Table 2: Integration limits of sample planetesimals' initial semi-major axis (\(a_{0}\)) and size (\(R_{p}\)) for Equations 13 and 14.
Figure 3: Evolution of stars with \(M_{*}=1\,M_{\odot}\) and \(2\,M_{\odot}\) and the resultant equilibrium temperature for planetesimals with \(a_{0}=1.5\,\)AU and \(e=0\). The zoom-in plots correspond to the thermally pulsing asymptotic giant branch (TPAGB), with the origin of the time axes marking the start of the TPAGB phase. The age of the star at the start of its TPAGB phase (\(t_{0}\)) is added.
### Temperature profile evolution
As described in Section 2.3, the time evolution of the planetesimal's interior temperature profile is calculated based on the stellar irradiation, quantified by \(T_{s}\) (Equation 11). We show the thermal evolution of 3 sample systems (listed in Table 4) in Figure 4.
In all 3 samples, the temperature evolution in the interior of a planetesimal lags behind that of its surface temperature (\(T_{s}\)). The time delay of a layer's temperature relative to \(T_{s}\) can be estimated by the diffusion timescale (\(t_{diff}=\frac{d_{diff}^{2}}{\alpha_{d}}\)) corresponding to the depth of this layer, and is especially relevant for rapid \(T_{s}\) variations, e.g., during the tip RGB and TPAGB. For instance, thermal pulsations lasting \(\sim\) 100-1000 yr correspond to a diffusion length scale \(d_{diff}\sim 100\) m, thus only affecting a thin surface layer of this size.
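These diffusion length scales follow directly from \(d_{diff}=\sqrt{\alpha_{d}t}\) with the \(\alpha_{d}\) of Table 1, as the following short check illustrates:

```python
def d_diff(alpha_d, t):
    """Diffusion length scale (m) for heating of duration t (s)."""
    return (alpha_d * t) ** 0.5

YR = 3.156e7
print(f"{d_diff(8.8e-7, 1e3 * YR):.0f} m")           # ~1 kyr pulse: ~100 m skin
print(f"{d_diff(8.8e-7, 1e9 * YR) / 1e3:.0f} km")    # ~1 Gyr RGB: ~100 km depths
```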
The interior temperature of the planetesimals in samples IA and IIA rises to the corresponding equilibrium temperature within a small fraction of the stellar main-sequence life, since the diffusion timescale (\(\tau_{diff}\)) over 30 km is around 30 Myr, much shorter than the main-sequence lifetime of the host stars. The initial temperature profile of planetesimals is unimportant unless the host star is more massive than 3 \(M_{\odot}\) and \(R_{p}\) exceeds 100 km (at which point the diffusion length scale corresponding to the main-sequence lifetime of these shorter-lived stars satisfies \(d_{diff}(\tau_{main~{}sequence})\lesssim R_{p}\)). This is supported by the uniform temperature profiles of these 3 samples at the start of the RGB (Figure 4, lower panel).
Most of the thermal processing in the planetesimals of samples IA and IB occurs at the tip RGB, while AGB heating in IA and IB only penetrates layers of depth \(\sim 10\) km because of its short duration. In contrast, in sample IIA, AGB heating is much more significant than its RGB counterpart. This is consistent with the results in Section 3.1: the enhanced RGB luminosity of low-mass stars due to the helium flash increases their ability to heat planetesimals during the RGB stage, compared with their AGB stage, whereas intermediate-mass stars do not go through the helium flash, and their planetesimal heating is therefore dominated by the AGB phase.
When exposed to intense stellar irradiation (as is the case during the giant branches), a planetesimal's interior temperature decreases inwards (thermal inversion). Thermal processing requiring strong irradiation, e.g., differentiation, starts at the exterior of planetesimals. This thermal inversion ceases at the start of the CHB phase, when the stellar luminosity drops before rising again. At this point, the interior temperature of a planetesimal decreases outwards in its outer layers, a pattern that persists longer if the body is larger (sample IB).
### Envelope entry
In this section we track the orbital evolution of planetesimals that eventually enter the stellar envelope during the TPAGB, based on the planetesimal's equation of motion (Section 2.2.2). Our simulations verify that the planetesimal's orbital evolution before envelope entry is still dominated by outward migration from stellar mass loss, rather than by stellar wind drag (see Appendix B1). We present the orbital evolution and mass loss for 4 sample planetesimals of \(R_{p}=100\) km after entering the envelope of a 2 \(M_{\odot}\) star, with \(a_{0}=1.3\) and 1.4 AU, and \(C_{d}=1\) and 0.5, respectively, in Figure 5.
Planetesimals in all four systems considered spiral inwards to their disruption limits (Equation 3) within 50 yr. According to Equation 5, the deceleration due to the drag force scales as \(\frac{R_{p}^{2}}{M_{p}}\propto\frac{1}{R_{p}}\), indicating that smaller planetesimals experience a stronger deceleration from drag and spiral in faster. Therefore, planetesimals with \(R_{p}\lesssim 100\) km cannot survive in the stellar envelope when ignoring other effects (e.g., interactions with the planetary system). Meanwhile, much larger planetesimals/planets, for instance with \(R_{p}\sim 1000\) km, feel a weaker drag effect (\(\lesssim 10\)% decay in \(a\) during the first envelope entry) and may have enough energy to (partially) unbind the envelope.
Planetesimal mass loss due to ablation is negligible in all 4 samples. Ablation mainly occurs when the planetesimal is close to its disruption limit, where both environmental density and planetesimal's speed relative to the stellar envelope increase rapidly.
### Thermal processing inside planetesimals
In this section we present the degree of thermal processing/differentiation in planetesimal radius (\(R_{p}\))-initial semi-major axis (\(a_{0}\)) space based on the thermal equations described in Section 2.3. Section 3.4.1 shows the maximum temperature reached at the centre (\(x=0\)) and half radius (\(x=\frac{R_{p}}{2}\)) of the sample planetesimals in our parameter space. Section 3.4.2 demonstrates the volume fraction of melting (\(f_{V}\)) in this space for two temperature thresholds (\(T_{crit}\)), 1400 K and 1800 K. Based on \(f_{V}(R_{p},a_{0})\), we present, in Section 3.4.3, the estimated mass fraction of differentiated samples according to two definitions, \(f_{Mm}\) in Equation 13 and \(f_{Mp}\) in Equation 14.
#### 3.4.1 Maximum central temperature
We present the maximum temperature at \(x=0\) and \(\frac{R_{p}}{2}\) in planetesimals of two sample systems, I: 1 \(M_{\odot}\) and II: 2 \(M_{\odot}\) host star, in Figure 6. This figure shows that any thermal process triggered by stellar evolution takes place preferentially in smaller bodies closer to the host star, with the \(R_{p}\) and \(a_{0}\) dependence determined by the corresponding temperature threshold.
As shown in the left panels of Figure 6, all completely melted planetesimals (which satisfy \(T(x=0)\geq T_{crit}=1800\) K) later enter the stellar envelope during the thermally pulsing AGB (they fall to the left of the red dotted line). Considering the uncertainties in modeling the thermal pulsations of AGB stars, we smooth out the oscillations of the stellar radius and compute the corresponding engulfment limit, represented by the blue dashed lines in Figure 6 (see Section 4.1.1). This new engulfment limit leaves a narrow triangle-like parameter space of completely melted planetesimals that survive the AGB phase of the host star (sample I: \(0.8\,\mathrm{AU}\lesssim a_{0}\lesssim 1\,\mathrm{AU}\), \(R_{p}\lesssim 28\) km; II: \(1\,\mathrm{AU}\lesssim a_{0}\lesssim 1.3\,\mathrm{AU}\), \(R_{p}\lesssim 18\) km). This result emphasizes the importance of thermal pulsations for the survival of the most thermally processed planetesimals.
On the other hand, if \(T_{crit}\) is lowered to 1400 K (see Section 4.1.2 for the reasoning behind this choice), some completely melted planetesimals survive the host star's thermal pulsations (sample I: \(1.3\,\mathrm{AU}\lesssim a_{0}\lesssim 1.7\,\mathrm{AU}\), \(R_{p}\lesssim 28\) km; II: \(1.5\,\mathrm{AU}\lesssim a_{0}\lesssim 2.2\,\mathrm{AU}\), \(R_{p}\lesssim 22\) km). Meanwhile, the area under a temperature contour in Figure 6 grows as the contour temperature decreases. In other words, a lower \(T_{crit}\) corresponds to a much larger region of \(R_{p}\)-\(a_{0}\) space where the planetesimal is heated above this threshold over a given fraction of its volume. This
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Sample & \(M_{*}\) (\(M_{\odot}\)) & Sub-sample & \(R_{p}\) (km) & \(a_{0}\) (AU) \\ \hline I & 1 & IA & 30 & 1.5 \\ & & IB & 100 & 1.5 \\ \hline II & 2 & IIA & 30 & 1.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: List of the parameters of 3 sample systems in Figure 4.
result stresses the strong dependence of the degree of differentiation on \(T_{crit}\).
The thermal processing in sample I is mainly limited by the strength of heating, whereas in sample II the thermal processing is mainly limited by the timescale of heating. Compared to sample I (upper panels of Figure 6), which undergoes a helium flash during its RGB, a given temperature contour in sample II (lower panels, quiet ignition of core helium burning) intersects the axes at smaller \(R_{P}\) but larger \(a_{0}\) (for instance, see the 1800 K contour lines). Furthermore, the increase in the \(R_{p}\) value of each contour line's intersection with the \(R_{p}\) axis from \(x=0\) (left panels of Figure 6) to \(x=\frac{R_{P}}{2}\) (right panels) is larger for sample II (lower panels), indicating that, compared to sample I, the temperature gradient in sample II is steeper. These features are consistent with the facts that 1. the maximum thermal processing in sample I occurs during the RGB phase, which is much longer (lasting around 1.5 Gyr, corresponding to \(d_{diff}\sim 100\) km) with a much smaller average luminosity gradient (1.7 \(L_{\odot}\)/Myr) than the AGB life of sample II (20 Myr, \(d_{diff}\sim 10\) km, 272.3 \(L_{\odot}\)/Myr, when planetesimals undergo the maximum heating), and 2. the tip AGB of sample II is around five times as luminous as the tip RGB of sample I.
Figure 4: The upper panel shows the temperature evolution at the centre (\(x=0\)), half of the radius (\(x=\frac{R}{2}\)) and the surface (\(x=R\)) of a planetesimal in sample systems IA and IIA (Table 4). The period of each evolutionary phase is normalised by its overall timescale, with the absolute time along the horizontal axis of the plot. The lower panel shows the temperature profile evolution of planetesimals in sample systems IA, IB and IIA. The contour plots start from the red giant branch (RGB) of the host star, followed by the core helium burning phase (CHB) and the asymptotic giant branch (AGB), with the age of the star increasing clockwise. The same normalisation as in the upper panel is applied.
#### 3.4.2 Volume fraction of melting
In Figure 7 we present, for each planetesimal in \(R_{p}\)-\(a_{0}\) space, the fraction of its volume that reaches sufficient temperatures for melting at any point during its evolution (\(f_{V}(R_{p},a_{0})\)), for two sample systems, I: \(1\,M_{\odot}\) and II: \(2\,M_{\odot}\) host star, considering two values of \(T_{crit}\): \(1400\,\)K (right panels) and \(1800\,\)K (left panels).
The similarities between Figure 6 and Figure 7 are summarized below:
* Smaller planetesimals closer to the star undergo larger-scale melting
* All planetesimals where more than 10% by volume reaches temperatures above \(1800\,\)K are engulfed
* Compared to \(1800\,\)K, the fraction of planetesimals where a portion of their volume reaches above \(1400\,\)K increases significantly
As \(R_{p}\) decreases, the contours of Figure 7 converge to the critical value of \(a_{0}\) (black dashed vertical lines in the left panels), beyond which the surface temperature of planetesimals can never reach \(T_{crit}\). This convergence results from the decrease in the duration for which \(T_{s}>T_{crit}\) as \(a_{0}\) increases.
#### 3.4.3 The degree of differentiation
Because observations of white dwarf pollutants may occur at any point in a white dwarf's accretion history, the sizes and initial positions of the accreted bodies are unconstrained; it is therefore useful to consider the average effect of stellar irradiation on the interiors of a population of planetesimals. We quantify this average effect by the mass fraction of the planetesimal population in which large-scale melting is induced by the host star's giant branches. In order to calculate this fraction, we consider a nominal population of planetesimals, orbiting within \(10\,\)AU and surviving the giant branches, with sizes between \(10\) and \(100\,\)km, as described in Section 2.4 with \(\alpha=4\), \(\beta=\frac{1}{2}\) (\(\frac{\partial^{2}M_{total}}{\partial R_{p}\partial a_{0}}\propto R_{p}^{-1}a_{0}^{-\frac{1}{2}}\)). The volume fraction of melting (\(f_{V}(R_{p},a_{0})\)), as described in Section 3.4.2, is integrated over planetesimal size (\(R_{p}\))-initial semi-major axis (\(a_{0}\)) space to obtain the mass fraction of melted regions (\(f_{Mm}\), Equation 13), and the mass fraction of planetesimals melted over \(95\%\) of their volume (\(f_{Mp}\) with \(y=95\), Equation 14), relative to the total mass of the planetesimal population. In this section we present \(f_{Mm}\) and \(f_{Mp}\) at different \(T_{crit}\) for 3 sample systems, I: \(1\,M_{\odot}\), II: \(2\,M_{\odot}\) and III: \(3\,M_{\odot}\) host star, in Figure 8.
Both \(f_{Mm}\) and \(f_{Mp}\) in all 3 samples increase rapidly as the critical temperature of melting, \(T_{crit}\), decreases. There are 2 orders of magnitude of difference in \(f_{Mm}\) between \(800\,\)K and \(1800\,\)K for all 3 samples. \(f_{Mp}\) always lies below the corresponding \(f_{Mm}\), converging rapidly to 0 when \(T_{crit}\) comes within \(100\,\)K of the maximum surface temperature of planetesimals, indicating its much stricter requirement of small \(R_{p}\) and \(a_{0}\) (long-lasting intense heating). For \(f_{Mm}\) (\(f_{Mp}\)) to exceed \(10\%\), \(T_{crit}\) must be lowered to \(\lesssim 1300\,\)K (\(1200\,\)K) for sample I, \(\lesssim 1400\,\)K (\(1200\,\)K) for sample II and \(\lesssim 1400\,\)K (\(1000\,\)K) for sample III.
At \(T_{crit}\gtrsim 1100\,\)K, \(f_{Mm}\) at identical \(T_{crit}\) decreases with stellar mass. In this region, the rapid increase in stellar luminosity with stellar mass is the dominant factor affecting the degree of differentiation. The contrary occurs at \(T_{crit}\lesssim 900\,\)K, where the duration of intense irradiation becomes more important. A similar feature is present in \(f_{Mp}\), at \(T_{crit}\gtrsim 1700\,\)K and \(\lesssim 1200\,\)K, stressing the growing importance of the timescale of intense heating for large-scale melting.
## 4 Discussion
Our model predicts that stellar evolution may lead to differentiation in small planetesimals close to the host star. However, the size of the parameter space where large-scale melting occurs, as well as its contribution to white dwarf pollutants, will largely depend on how thermal pulsations occur and on the critical temperature of melting: for differentiated planetesimals to occupy \(10\%\) of the mass of the population described in Section 3.4.3, \(T_{crit}\) must be lowered to \(\sim 1300\,\)K.
In this section we discuss the main limitations of this study: stellar and planetesimal orbital evolution during the TPAGB (Section 4.1.1), thermal evolution of planetesimals (Sections 4.1.2 and 4.1.3), and planetesimal distributions (Section 4.1.4), together with their possible impacts on the results. Based on the white dwarf atmospheric model, we discuss the observability of the accretion of planetesimals differentiated under stellar irradiation (Section 4.1.5).
By comparing our predictions to observational data of polluted white dwarfs, we further discuss the necessity of early thermal processes such as radiogenic heating, impacts and incomplete condensation (Section 4.2).
### Limitations
#### 4.1.1 Thermal pulsing asymptotic giant branch
Thermal pulsations are accompanied by unstable cyclical nuclear burning at the base of the He and H shells. No steady state is available because of the significantly different reaction rates of H and He burning. Powered by the intense helium flash, a convective zone forms in
Figure 5: Orbital decay and ablation of planetesimals with different initial semi-major axis (\(a_{0}\)) and drag coefficient (\(C_{d}\)) inside the stellar envelope. \(M_{*}=2\,M_{\odot}\), \(R_{p}=100\,\)km, \(e=0\).
the He intershell, affecting stellar pulsations and dredge-ups (Hansen and Kawaler, 1994; Cristallo et al., 2015). As a result, thermal pulsations are extremely sensitive to specific assumptions of convection, for instance, convective overshooting. In reality, convection is a 3-D process, which cannot be fully simulated by MESA, a 1-D stellar evolution code (Paxton, 2021; Jermyn et al., 2023).
On the other hand, with the rapid mass loss of the host star during thermal pulsations, the gravitational attraction from the host star weakens, while the interactions of planetesimals with the planetary system become stronger. The orbital evolution of planetesimals may be dominated by resonances, and planetesimals may be scattered inwards/outwards. A massive planet can also unbind the loosely bound pulsating stellar envelope, altering the condition of engulfment and the stellar evolutionary track.
However, we have shown that the expansion of the stellar envelope during thermal pulsations, without any planetesimal-planet interaction, leads to the engulfment of the most thermally processed planetesimals. For instance, Figure 7 shows that almost all planetesimals with melting covering \(>15\%\) of their volume (\(f_{V}\gtrsim 15\%\)) at \(T_{crit}=1800\) K are engulfed during the TPAGB.
To account for the uncertainties in modelling thermal pulsations and the additional effect of the planetary system, we repeat the analysis in Section 3.4.3 while smoothing out pulsations (similar to the SSE code, Hurley et al., 2013) for two sample systems, I: \(1\,M_{\odot}\) and II: \(2\,M_{\odot}\) host star, shown in Figure 9. In both samples, there is a \(\sim 5\%\)-\(10\%\) increase in \(f_{Mm}\) and \(f_{Mp}\). This increase is especially significant at high \(T_{crit}\), e.g., 1600 K-1800 K, where differences of orders of magnitude are present.
Furthermore, the luminous AGB phase may lead to non-negligible Yarkovsky and YORP effects arising from the asymmetric thermal emission of planetesimals, altering their orbital (semi-major axis and eccentricity) and spin parameters (spin angular velocity and obliquity). We simulate the maximum Yarkovsky effect and the maximum YORP effect acting on a planetesimal (see Appendix D).
Figure 6: The maximum temperature at the centre (\(x=0\)) and half of the radius (\(x=\frac{R_{P}}{2}\)) of the planetesimal throughout its thermal history under various circumstances. The blue (red) dashed (dotted) line corresponds to the critical orbit after (before) smoothing pulsations. The positions of samples IA (black circle), IB (triangle) and IIA (square) in Figure 4 are added.
Our simulations illustrate that for our smallest planetesimals (\(R_{p}=10\,\)km), the outward/inward Yarkovsky drift can reach \(\sim 0.4\,\)AU by the end of the TPAGB life of the system. However, compared to the uncertainty in the planetesimal's critical semi-major axis at the end of the TPAGB phase (\(\sim 1\,\)AU) arising from the modelling of thermal pulsations, and accounting for the fact that the maximum Yarkovsky drift is only reached under extreme circumstances (e.g., tidal locking, maximum/minimum entries of the Yarkovsky matrix), the Yarkovsky effect may only play a minor role in the survival of close-in planetesimals. On the other hand, the maximum YORP spin-up can break up the smallest planetesimals whose angular velocity already exceeds \(\sim 0.5\,\omega_{break}(=\sqrt{\frac{4\pi G\rho}{3}})\). Therefore, when accounting for the YORP effect, there may be an extra loss of the smallest, and hence the most thermally processed, planetesimals, resulting in fewer planetesimals accreted by the white dwarf that underwent large-scale melting induced by giant branch heating.
Another uncertainty of the TPAGB is the mass loss pattern. Although the adiabatic approximation remains valid in our parameter space, mass loss could be axial instead of isotropic, such that only a small proportion of the stellar luminosity contributes to the momentum of the ejected mass, leading to a much slower stellar wind and a denser circumstellar envelope. When exposed to the dense axial stellar wind, ablation of the planetesimal becomes faster, accompanied by faster inward spiralling. As a result, although the orbital evolution of planetesimals unaffected by the axial stellar wind remains dominated by stellar mass loss, the engulfment regions, and thus the critical orbits, extend outwards for planetesimals inside the overdense stellar wind. More close-in (the most thermally processed) planetesimals are engulfed during the TPAGB, reducing the degree of thermal processing of the planetesimal population that ends up accreted by the white dwarf.
In this study we assume a constant metallicity of 0.014 for all sample
Figure 7: Volume fraction that is melted at some point (\(f_{V}\)) for planetesimals of different sizes and initial positions. The blue (red) dashed (dotted) lines correspond to the critical orbit after (before) smoothing stellar thermal pulsations. The black dashed vertical line corresponds to the semi-major axis beyond which the maximum surface temperature of planetesimals drops below the given melting threshold. The positions of samples IA (black circle), IB (triangle) and IIA (square) in Figure 4 are added.
host stars, which cannot represent the whole population of white dwarfs. For stars undergoing identical evolutionary phases, a higher metallicity corresponds to a larger opacity, and hence a less compact star with more energy dissipated in the interior, resulting in stronger thermal expansion during pulsations. Meanwhile, stars with considerably distinct metallicities may undergo distinct evolutionary phases. For instance, when the metallicity increases from 0.014 to \(\sim 0.05\) for a 1 solar mass star, the star no longer undergoes an AGB phase, corresponding to a scenario in which more close-in planetesimals avoid engulfment while the degree of thermal processing (dominated by the RGB phase) in individual bodies remains unchanged. The degree of thermal processing averaged over all planetesimals accreted by the white dwarf hence increases.
#### 4.1.2 Critical point of melting
In this work we consider a simple model in which any portion of a planetesimal heated above a critical temperature is considered to have melted. The key question is whether a single critical point is valid and, if so, what the appropriate critical temperature would be. Planetesimals have mixed compositions and hence melting ranges, whose boundaries correspond to the highest temperature with no melting (\(T_{solidus}\)) and the lowest temperature with complete melting (\(T_{liquidus}\)), instead of a single melting point marking the critical transition from solid to liquid. However, we can define a critical point of melting as the transition from solid-like to liquid-like behaviour, which typically occurs around a liquid fraction of 40% (Costa, 2005; Caricchi et al., 2007). This critical liquid fraction may be reached at different temperatures (within the melting range) in different regions depending on the exact composition, local gravity, etc. However, given the other uncertainties in this model, using a single temperature provides reasonable qualitative behaviour.
The uncertainty in the critical point (\(T_{crit}\)) lies in both the poorly constrained melting range (\(T_{solidus}\)-\(T_{liquidus}\)) and the position of \(T_{crit}\) within this melting range: the degree of melting required for efficient melt migration.
According to the model in Johansen et al. 2023, the pressure-dependent melting range boundaries, \(T_{solidus}\) and \(T_{liquidus}\), for planetary silicates can be estimated as:
\[T_{melt}=T_{0}\left(1+\frac{P}{P_{0}}\right)^{q}, \tag{16}\]
where \(T_{0}\sim 1661\) K, \(P_{0}\sim 1.336\) GPa, \(q\sim 0.134\) for \(T_{solidus}\) and \(T_{0}\sim 1982\) K, \(P_{0}\sim 6.594\) GPa, \(q\sim 0.186\) for \(T_{liquidus}\). For the largest primitive planetesimal in our parameter space (\(R_{p}\)=100 km), there is only an increase of \(\approx 3\) (1) K in \(T_{solidus}\) (\(T_{liquidus}\)) from the surface to the centre of the body because of the pressure variation (\(P(r)\sim\frac{2\pi G\rho^{2}}{3}(R_{p}^{2}-r^{2})\)). On the other hand, the melting range covers \(\sim 300\) K (1661 K-1982 K at zero pressure), which contributes the main uncertainty in \(T_{crit}\). Furthermore, Scheinberg et al. 2015 suggest a slightly different melting range of \(\sim 1400\) K-1800 K, and McCoy et al. 1999 conclude that \(T_{solidus}\sim 1600\) K with \(T_{crit}\sim 1700\) K based on experiments. Based on these studies, \(T_{crit}\) may vary at most from 1400 K to 2000 K, corresponding to around a 10% (5%) difference in \(f_{Mm}\) (\(f_{Mp}\)), as shown in Figure 8.
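The quoted \(\approx 3\) (1) K pressure corrections can be reproduced directly from Equation 16 and the central pressure of a uniform 100 km body, as in the short sketch below (SI units, Table 1 density):

```python
import math

def t_melt(P_gpa, T0, P0, q):
    """Pressure-dependent melting boundary of Equation 16 (K)."""
    return T0 * (1 + P_gpa / P0) ** q

G, rho, Rp = 6.674e-11, 3411.6, 1.0e5               # SI units; R_p = 100 km
P_c = 2 * math.pi * G * rho**2 / 3 * Rp**2 / 1e9    # central pressure in GPa

dT_sol = t_melt(P_c, 1661.0, 1.336, 0.134) - 1661.0
dT_liq = t_melt(P_c, 1982.0, 6.594, 0.186) - 1982.0
print(f"P_c = {P_c * 1e3:.1f} MPa, dT_solidus = {dT_sol:.1f} K, "
      f"dT_liquidus = {dT_liq:.1f} K")
```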
#### 4.1.3 Thermal penetration
Equally important to the critical temperature is the thermal diffusivity \(\alpha_{d}\), which has a strong influence on the ability of heat to penetrate into the planetesimal (\(d_{diff}\propto\sqrt{\alpha_{d}}\)), and which we approximate as a constant. In reality, \(\alpha_{d}\) is a potentially strong function of both the composition of the planetesimal and the local temperature. As planetesimals heat up, \(\alpha_{d}\) likely decreases, which could reduce the ability of heat to penetrate during the AGB/RGB. A quick calculation is presented based on the model in Miao et al. 2014:
\[\alpha_{d}=a+\frac{b}{T}, \tag{17}\]
where \(\alpha_{d}\) has units of mm\({}^{2}\)/s, \(a\) ranges from \(\sim 0.13\)-0.20 and
Figure 8: Mass fraction of melted regions in single planetesimals (\(f_{Mm}\), solid lines), and mass fraction of planetesimals with over 95% of their volume undergoing melting (\(f_{Mp}\), dashed lines). We choose \(\alpha=4,\beta=\frac{1}{2}\) in Equations 13 and 14.
Figure 9: Comparison between \(f_{Mm}\), the mass fraction of melted regions in single planetesimals, and \(f_{Mp}\), the mass fraction of planetesimals melted over 95% of their volume, relative to the total mass of the planetesimal population, in 1 (blue, upper panel) and 2 (red, lower panel) solar mass star systems, before and after smoothing out pulsations (green), with the same assumptions as in Figure 8.
\(b\) ranges from \(\sim 210\)-\(330\) for different types of rock. There is a factor of \(\sim 5\) decrease in \(\alpha_{d}\) from our initial condition, \(T=150\) K, to \(T_{solidus}\sim 1400\) K, equivalent to a factor of 2 decrease in the penetration depth during the AGB/RGB. This means that the radius of the largest planetesimal that undergoes large-scale melting may be reduced from 30 km to as little as 15 km. Meanwhile, a planetesimal's instantaneous surface temperature is a function of latitude rather than a constant, resulting in asymmetric heating and a loss of spherical symmetry. This temperature contrast can lead to 3-D convection in melted regions and possibly faster thermal penetration. In reality, a thermal evolution model involving not only conduction, but also convection, phase transitions, and variations in the local thermal and physical properties may be required to model the evolving interior of the planetesimal, at the cost of longer computational time and additional free parameters, for instance, the initial composition of the body (Malamud & Perets, 2016).
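The factor-of-\(\sim\)5 drop in \(\alpha_{d}\) (and hence factor-of-\(\sim\)2 drop in penetration depth) follows from Equation 17 with mid-range coefficients, here taken as \(a=0.165\) and \(b=270\) for illustration:

```python
def alpha_d(T, a=0.165, b=270.0):
    """Temperature-dependent thermal diffusivity of Equation 17 (mm^2/s)."""
    return a + b / T

ratio = alpha_d(150.0) / alpha_d(1400.0)   # initial condition vs. ~solidus
print(f"alpha_d drops by {ratio:.1f}x, penetration depth by {ratio**0.5:.1f}x")
```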
#### 4.1.4 Initial distributions of planetesimals accreted by WDs
Whilst the limits on the maximum size of a planetesimal where melting occurs as a function of distance to the star are robust (given the above assumptions), the fraction of planetesimals accreted by white dwarfs that are core-mantle differentiated is much more uncertain, owing to the planetesimal distributions and the position/size of the accretion zone.
We apply \(\alpha=4\), \(\beta=\frac{1}{2}\) (\(\frac{\partial^{2}M_{total}}{\partial R_{p}\partial a_{0}}\propto R_{p}^{-1}a_{0}^{-\frac{1}{2}}\)) in the simulations, assuming \(R_{p}\) and \(a_{0}\) are independent variables and the distributions are unaltered by scattering. In terms of the spatial distribution (\(\beta\)), we utilize the result of the Minimum Mass Solar Nebula model (MMSN, Hayashi, 1981; Crida, 2009), which is not necessarily applicable to other systems. In terms of the size distribution (\(\alpha\)), we apply the assumption of collisional evolution (Dohnanyi, 1969; Gaspar et al., 2012; Pan & Schlichting, 2012), which is steeper than that of planetesimals formed in the streaming instability (Simon et al., 2016, 2017), with \(\alpha\lesssim 3\). There is no guarantee that these values apply to all planetary systems and to all size ranges. Furthermore, the planetary system can alter the distributions via resonances. In Figure 10, we show how \(\alpha\) and \(\beta\) may change our results, \(f_{Mm}(T_{crit})\) and \(f_{Mp}(T_{crit})\), in a system with a 1 solar mass host star. With the increase in \(\alpha\) (\(\beta\)), more of the mass of the planetesimal population is occupied by small-sized bodies (concentrates towards the host star), leading to a higher degree of differentiation of the whole planetesimal population.
Scattering and accretion of planetesimals towards the WD rely on interactions with the planetary system, which can be non-uniform in space and size. For instance, Li et al. 2022 simulate the accretion of solar system planetesimals onto the Sun when it becomes a white dwarf and conclude that the geometry of the solar system leads to inside-out accretion (the inner bodies are accreted earlier than their outer counterparts). In Figure 11, we present \(f_{Mm}(T_{crit})\) and \(f_{Mp}(T_{crit})\) for three subsets of the planetesimal population described in Section 3.4.3, initially lying between a: 1.5-3 AU, b: 1.5-5 AU, and c: 3-5 AU around a 1 solar mass host star. In general, \(f_{Mm}\) and \(f_{Mp}\) decrease rapidly with the outward migration or expansion of the scattering zone.
In this study, we only consider spherically symmetric planetesimals and neglect their shape distribution, as well as the possible alterations to their shapes after melting. In reality, the shape of a planetesimal may affect its orbital and thermal evolution, as well as the scattering and disruption processes, for instance, disruption outside the Roche limit (Katz, 2018; Veras et al., 2020; McDonald & Veras, 2021), which are beyond the scope of this study.
In the solar system, it is common for asteroids to reside on elliptical orbits (Malhotra & Wang, 2017). In this case, Equation 11 only represents the instantaneous equilibrium temperature, which varies with the true anomaly. We introduce the time-averaged equilibrium temperature (\(<T_{eq}>\)) over one orbital period (\(\tau\)) (Méndez & Rivera-Valentín, 2017):
\[\begin{split}&<T_{eq}>=\frac{1}{\tau}\int_{0}^{\tau}T_{eq}(r)dt\\ &=\frac{1}{2\pi}\int_{0}^{2\pi}\left[\frac{L_{*}(1-A_{B})}{16\pi\epsilon\sigma\beta_{r}a^{2}(1-e\cos E)^{2}}\right]^{\frac{1}{4}}(1-e\cos E)dE\\ &=T_{eq}(r=a)\frac{2\sqrt{1+e}}{\pi}E_{2}\left(\sqrt{\frac{2e}{1+e}}\right),\end{split} \tag{18}\]
Figure 11: Mass fraction of differentiated samples under different integration range in a 1 solar mass star system (\(\alpha=4\), \(\beta=\frac{1}{2}\)).
Figure 10: Mass fraction of differentiated samples under different planetesimal size (\(\alpha\)) and spatial distributions (\(\beta\)) relative to \(\alpha=4\), \(\beta=\frac{1}{2}\).
where \(r=a(1-e\cos E)\), with \(E\) the eccentric anomaly, \(E_{2}\) is the complete elliptic integral of the second kind, and \(T_{eq}(r)=\left(\frac{L_{*}(1-A_{B})}{16\pi\epsilon\sigma\beta_{r}r^{2}}\right)^{\frac{1}{4}}\) (Equation 11). Compared to \(T_{eq}(r=a)\), \(<T_{eq}>\) is always lower, although by less than 10% even for high eccentricities. Meanwhile, as the pericentre distance of an elliptical orbit (\(r_{peri}\)) satisfies \(r_{peri}=a(1-e)<a\), the critical \(a_{0}\) increases compared to that for circular orbits. For an identical initial semi-major axis distribution, a population of planetesimals on circular orbits therefore attains the highest degree of differentiation, with other conditions kept constant.
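Equation 18 can be evaluated with `scipy.special.ellipe` (which takes the parameter \(m=k^{2}\), hence the argument \(2e/(1+e)\) below), showing how weakly \(<T_{eq}>\) depends on eccentricity at fixed \(a\):

```python
import numpy as np
from scipy.special import ellipe   # complete elliptic integral E(m), m = k^2

def teq_ratio(e):
    """<T_eq>/T_eq(r=a) for eccentricity e (Equation 18)."""
    return 2 * np.sqrt(1 + e) / np.pi * ellipe(2 * e / (1 + e))

for e in (0.0, 0.1, 0.5, 0.9):
    print(f"e = {e:.1f}: <T_eq>/T_eq(a) = {teq_ratio(e):.4f}")
```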
#### 4.1.5 Observability of planetesimals differentiated under stellar irradiation
The smallest planetesimals, which the model predicts to undergo the largest degree of melting, also produce the weakest signal when accreted, and are not necessarily observable throughout the cooling age of the white dwarf. Our ability to detect calcium (Ca) in a white dwarf atmosphere depends crucially on the signal-to-noise ratio (S/N) achieved in the observed spectra, alongside the spectral resolution. In order to investigate whether planetesimals differentiated under stellar irradiation are detectable, we compute the minimum detectable atmospheric Ca/H and Ca/He of the polluted white dwarfs in Zuckerman et al. 2003, 2010 at an equivalent width of 14 mÅ (the limiting equivalent width required to resolve the Ca lines for a S/N of 30 and a spectrograph resolution of 40000), and find a linear relation between the minimum detectable relative abundance of calcium (\(\log_{10}\left[\frac{n(\mathrm{Ca})}{n(\mathrm{Hx})}\right]_{lim}\)) and the white dwarf's effective temperature (\(T_{eff}\)):
\[\log_{10}\left[\frac{n(\mathrm{Ca})}{n(\mathrm{Hx})}\right]_{lim}=m\frac{T_{ eff}}{K}+b, \tag{19}\]
where \(n(\mathrm{Ca})\) is the number density of calcium and \(n(\mathrm{Hx})\) is the number density of hydrogen/helium in the atmosphere of the white dwarf, with \(m\approx 3.8\times 10^{-4}\) (\(4.2\times 10^{-4}\)) and \(b\approx-14.2\) (\(-16.6\)) for hydrogen (helium) atmospheres. The minimum mass of calcium in the atmosphere of the white dwarf (\((M_{\star,\mathrm{Ca}})_{lim}\)) that is observable is of the form:
\[(M_{\star,\mathrm{Ca}})_{lim}=\left[\frac{n(\mathrm{Ca})}{n(\mathrm{Hx})}\right]_{lim}\frac{A_{\mathrm{Ca}}}{A_{\mathrm{Hx}}}M_{\star}q, \tag{20}\]
where \(q\) is the ratio of the white dwarf's atmosphere mass to the white dwarf mass, obtained from the Montreal White Dwarf Database (MWDD, Dufour et al., 2017; Bedard et al., 2020), and \(A_{\mathrm{Ca}}\) and \(A_{\mathrm{Hx}}\) are the atomic masses of calcium and hydrogen/helium, respectively. We can estimate the minimum size of an individually accreted body (\(R_{lim}\)) that can explain the current atmospheric Ca abundance of the white dwarf by:
\[R_{lim}=\left(\frac{3M_{\star,\mathrm{Ca}}}{4\pi\rho f_{m,\mathrm{Ca}}}\right)^{\frac{1}{3}}, \tag{21}\]
where \(M_{*,\mathrm{Ca}}\) is the mass of Ca in the atmosphere of the white dwarf and \(\rho\) is the bulk density of the accreted body. We assume that sample planetesimals have a bulk-Earth-like calcium mass fraction \(f_{m,\mathrm{Ca}}\approx 1.5\%\).
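A sketch of the instantaneous-accretion detection limit of Equations 19-21 (not from the original text); the fit coefficients are those quoted above, while the atmosphere-mass ratio \(q\), the bulk density \(\rho\), and the example temperature are illustrative placeholders (in practice \(q\) is interpolated from the MWDD).

```python
import numpy as np

M_SUN = 1.989e30                        # kg
A_CA, A_H, A_HE = 40.08, 1.008, 4.003   # atomic masses

def log_ca_limit(T_eff, atmosphere="H"):
    """Minimum detectable log10[n(Ca)/n(Hx)] at a given T_eff (Equation 19)."""
    m, b = (3.8e-4, -14.2) if atmosphere == "H" else (4.2e-4, -16.6)
    return m * T_eff + b

def r_lim_instantaneous(T_eff, M_wd, q, rho=3000.0, f_ca=0.015, atmosphere="H"):
    """Minimum planetesimal radius [m] giving detectable Ca (Equations 20-21)."""
    A_hx = A_H if atmosphere == "H" else A_HE
    m_ca_lim = 10.0 ** log_ca_limit(T_eff, atmosphere) * (A_CA / A_hx) * M_wd * q
    return (3.0 * m_ca_lim / (4.0 * np.pi * rho * f_ca)) ** (1.0 / 3.0)

# Illustrative: 0.6 M_sun H-atmosphere WD at T_eff = 10000 K, assumed q = 1e-16
print(r_lim_instantaneous(1.0e4, 0.6 * M_SUN, q=1e-16) / 1e3, "km")
```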
The observational limit (the minimum size of the accreted body, \(R_{lim,current}\), and the minimum calcium mass in the atmosphere of the white dwarf, \((M_{*,\mathrm{Ca}})_{lim}\), that can lead to observable signals) as a function of the WD's effective temperature, calculated from Equations 19, 20 and 21, is shown in Figure 12 for WDs with \(M_{*}=0.5\), 0.6, 0.7 \(M_{\odot}\) and H- and He-dominated atmospheres, respectively. Observations of instantaneous accretion are more sensitive for WDs with H-dominated atmospheres until \(T_{eff}\lesssim 8000\) K (\(R_{lim}\sim 1\) km), below which the \(R_{lim}\) of WDs with He-dominated atmospheres is lower.
In reality, planetesimals are unlikely to accrete instantaneously, instead spreading their accretion over a longer time period that we call \(t_{acc}\) (we define the start of accretion to be \(t=0\), such that \(t_{acc}\) also marks the time when the accretion stops). This is important to consider, as the abundance of Ca in the atmosphere of the white dwarf (\(M_{*,\mathrm{Ca}}\)) has a different relationship to the accretion rate (\(\dot{M}_{P}\), assumed to be constant) before and after \(t=t_{acc}\) (Koester, 2009; Jura & Young, 2014):
Figure 12: The minimum size of the planetesimal (\(R_{lim,current}\)) and the corresponding mass of calcium in the atmosphere of the white dwarf (\((M_{*,\mathrm{Ca}})_{lim}\)) that lead to observable photospheric calcium. The mass fraction of calcium is assumed to be 1.5% in the sample planetesimals. We fit a linear relationship between the white dwarf's effective temperature (\(T_{eff}\)) and the detection limit of \(\log_{10}\left[\frac{n(\mathrm{Ca})}{n(\mathrm{Hx})}\right]\), using the transformed Ca/H and Ca/He (original data in Zuckerman et al., 2003, 2010) at a limiting equivalent width of 14 mÅ, a S/N of 30, and a spectrograph resolution of 40000 at the given \(T_{eff}\). The kink at \(T_{eff}\sim 12000\) K and 22000 K for white dwarfs with H- and He-dominated atmospheres marks the transition from radiative to convective behaviour (Bergeron et al., 2011; Caron et al., 2023).
Figure 13: The minimum size of the planetesimal resulting in detected photospheric calcium assuming the same detection limits as Figure 12 and that the observed metal pollution originates from a single body that accreted onto the white dwarf over 1 Myr (upper panel) and 1 Kyr (lower panel) assuming \(t=t_{acc}\) in Equation 22, respectively. The mass fraction of calcium is assumed to be 1.5% in the sample planetesimals.
\[M_{*,\mathrm{Ca}}(t)=\begin{cases}\dot{M}_{P}f_{m,\mathrm{Ca}}t_{\mathrm{Ca}}\left(1-e^{-\frac{t}{t_{\mathrm{Ca}}}}\right)&t\leq t_{\mathrm{acc}}\\ \dot{M}_{P}f_{m,\mathrm{Ca}}t_{\mathrm{Ca}}\left(1-e^{-\frac{t_{\mathrm{acc}}}{t_{\mathrm{Ca}}}}\right)e^{-\frac{t-t_{\mathrm{acc}}}{t_{\mathrm{Ca}}}}&t>t_{\mathrm{acc}}\end{cases}, \tag{22}\]
where \(t_{\mathrm{Ca}}\) is the sinking timescale of Ca obtained from MWDD (Paquette et al., 1986). We consider a special case where \(t=t_{acc}\) (the maximum of \(M_{*,\mathrm{Ca}}(t)\)) in this study and estimate the minimum size of a planetesimal (\(R_{lim}\)) that produces observable Ca lines:
\[R_{lim}=\left(\frac{3\dot{M}_{P,lim}t_{acc}}{4\pi\rho}\right)^{\frac{1}{3}}, \tag{23}\]
where \(\dot{M}_{P,lim}=\frac{(M_{*,\mathrm{Ca}})_{lim}}{f_{m,\mathrm{Ca}}\,t_{\mathrm{Ca}}\left(1-e^{-\frac{t_{acc}}{t_{\mathrm{Ca}}}}\right)}\).
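The finite-accretion limit of Equations 22-23 only changes the limiting accretion rate; a sketch evaluated at \(t=t_{acc}\), reusing the placeholder density and Ca mass fraction of the previous snippet:

```python
import numpy as np

def r_lim_finite(m_ca_lim, t_acc, t_ca, rho=3000.0, f_ca=0.015):
    """Minimum radius [m] for detectable Ca when accretion lasts t_acc.

    Evaluated at t = t_acc, the peak of M_*,Ca(t) (Equation 22), so that
    Mdot_P,lim = (M_*,Ca)_lim / (f_Ca * t_Ca * (1 - exp(-t_acc / t_Ca))).
    """
    mdot_lim = m_ca_lim / (f_ca * t_ca * (1.0 - np.exp(-t_acc / t_ca)))
    return (3.0 * mdot_lim * t_acc / (4.0 * np.pi * rho)) ** (1.0 / 3.0)  # Eq. 23
```

In the limit \(t_{\mathrm{Ca}}\gg t_{acc}\) this reduces to the instantaneous-accretion limit, consistent with the behaviour described below.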
By including the sinking timescale of Ca and the accretion timescale of planetesimal debris, we show the corresponding observational limit (\(R_{lim}\)) in Figure 13 for 2 distinct \(t_{\mathrm{acc}}\): (i) 1 Myr (upper panel) and (ii) 1 Kyr (lower panel), in the same parameter space as Figure 12.
In white dwarfs where the pollutants accrete over 1 Myr, the minimum size of a planetesimal that leads to observable Ca lines, \(R_{lim}\), decreases by \(\approx\) 2 orders of magnitude from \(T_{eff}=25000\) K to 7000 K for white dwarfs with both H- and He-dominated atmospheres. If the pollutants come from a single body, the accretion of a body with \(R_{P}\lesssim 10\) km is only detectable in a white dwarf with a He-dominated atmosphere and \(T_{eff}\lesssim 10000\) K.
If, instead, pollutants accrete over 1000 yr, the predicted \(R_{lim}\) approaches that of instantaneous accretion (Figure 12) at \(T_{eff}\lesssim 7000\) K (20000 K) for white dwarfs with a H (He)-dominated atmosphere. These transitions occur where \(t_{\mathrm{Ca}}\gg t_{acc}\), such that the mass of white dwarf pollutants at the end of accretion is representative of the total mass of the accreted body. The dependence of \(R_{lim}\) on \(t_{acc}\) is stronger for white dwarfs with H-dominated atmospheres because of their much shorter \(t_{\mathrm{Ca}}\) (by about 2-3 orders of magnitude) compared to their counterparts with He-dominated atmospheres at identical \(T_{eff}\).
Large-scale melting (\(\gtrsim 95\%\) of the volume) triggered by stellar evolution, as shown in Figure 7, can only occur in planetesimals with \(R_{P}\lesssim 30\) km, even when \(T_{crit}\) is lowered to 1400 K. Figures 12 and 13 illustrate that those small (\(R_{P}\lesssim 30\) km) planetesimals that potentially undergo large-scale melting induced by giant branch evolution are only detectable (assuming single-body accretion) in the scenario that the body accretes rapidly (\(t_{acc}\lesssim 1\) Kyr) or the white dwarf cools down to \(T_{eff}\lesssim 16000\) K.
For He-white dwarfs cooler than \(\sim 16000\) K, it is possible to detect planetesimals as small as \(\sim 30\) km, with this limit decreasing to \(\sim 1\) km when the white dwarf cools to \(T_{eff}=7000\) K. For these white dwarfs, smaller bodies, which are potentially more thermally processed, become observable when accreted individually. However, the coolest He-white dwarfs also have extremely long Ca sinking timescales, which adds the possibility that the pollutants consist of several accreted bodies.
Compared to the preferred parameter space where planetesimals differentiate under radiogenic heating (\(R_{P}\gtrsim 50\) km; Curry et al., 2022), stellar irradiation preferentially differentiates smaller bodies. Therefore, irradiation-differentiated bodies become observable later in the cooling age of the white dwarf, with a definitive upper bound on the absolute pollutant abundances equal to that of the largest body that may differentiate under stellar irradiation (30 km according to our model).
### Implications
#### 4.2.1 Do we observe post-main sequence differentiation?
Our results show that over-abundances of siderophile/lithophile elements in the atmosphere of a cool white dwarf can result from a parent body differentiated by stellar irradiation, but only in the case that the absolute abundances of WD pollutants drop below those of a 30 km planetesimal.
In this section we further discuss if any observed white dwarfs, whose pollutants are identified as core/mantle-rich, could result from accretion of small-sized planetesimals differentiated during stellar evolution. In order to investigate this we make the broad assumption that the observed atmospheric abundances are dominated by a single body, noting that this may not be the case (Turner and Wyatt, 2019).
The majority of white dwarfs with helium-dominated atmospheres identified in the literature as accreting core- or mantle-rich (or even crust-rich) material are highly polluted (e.g., SDSSJ0736+4118, SDSSJ0744+4649; Hollands et al., 2017; Harrison et al., 2021). Such white dwarfs have likely accreted a body of \(\gtrsim 50\) km in radius to match the observed Ca abundances. Optimistically, a 50 km planetesimal can melt \(\sim 40\%\) of its volume under stellar irradiation (Figure 7, right panels). This low degree of differentiation is unlikely to explain the visible siderophile/lithophile character of these white dwarf pollutants. In this case, radiogenic heating, which preferentially differentiates planetesimals with \(R_{P}\gtrsim 50\) km (Curry et al., 2022), may be a better explanation. A notable exception among these He-white dwarfs with core/mantle-rich pollutants is WD0122-227, whose abundances were interpreted as core-rich by Swan et al. (2019) and Buchan et al. (2022). Its \(\log_{10}\left[\frac{n(\mathrm{Ca})}{n(\mathrm{He})}\right]\) of -10.1 corresponds to the accretion of a \(\sim 30\) km planetesimal, which, if it resided interior to \(\sim 1.5\) AU of a 1-3 \(M_{\odot}\) star and melted at 1400 K, would have undergone large-scale melting on the giant branches.
Among the white dwarfs with H-dominated atmospheres, WD2105-820 and PG0843+516 (Swan et al., 2019; Kilic et al., 2020) have been identified as core/mantle-rich and have low metal abundances in their atmospheres, equivalent to the accretion of a \(\lesssim 1\) km planetesimal. These weakly polluted systems are consistent with the instantaneous accretion of planetesimals sufficiently small to have been differentiated by giant branch heating. However, the uncertainty lies in the short sinking timescale of Ca (ranging from \(\sim 10^{-2}\) yr to 10 yr) in the H-dominated atmospheres of white dwarfs, adding the possibility that the current Ca mass is well below that contained in a parent body whose accretion continues over many sinking timescales.
In general, we do not expect the core/mantle-rich pollutants of white dwarfs with He-dominated atmospheres presented in the literature to date to result from planetesimals differentiated during the giant branches. Considering the much longer sinking timescales compared to H-white dwarfs, future observations targeting pollutants in the He-dominated atmospheres of white dwarfs with over-abundances of siderophile/lithophile elements but low absolute abundances of Ca may provide more evidence for stellar-induced differentiation.
#### 4.2.2 Depletion of moderately volatile elements
The degree of depletion in moderately volatile elements, such as Mn and Na, is considered a powerful tool to probe the early thermal history of a planetesimal (Jura and Young, 2014). For instance, bulk Earth is depleted in these species relative to chondritic meteorites (Harrison et al., 2018), most likely explained by the incomplete condensation of Na-bearing minerals out of the nebula gas early in planet formation (O'Neill and Palme, 2008; Pringle et al., 2014; Siebert et al., 2018). Planetary materials accreted by white dwarfs are often Na-poor compared to solar compositions. Doyle et al. (2020) and Harrison et al. (2021) argue that depletion in moderately volatile elements could occur early in planet formation, due to incomplete condensation of the nebula gas, or after the nebula gas has dissipated due to large-scale melting in magma oceans following impacts or radiogenic heating.
For those planetesimals that undergo large-scale melting to form a magma ocean induced by stellar irradiation on the giant branches (Section 3.4.2), additional loss of moderately volatile elements such as Na and Mn is anticipated, as well as segregation of the iron from the silicates. For those planetesimals heated insufficiently for the silicates to melt and iron to mobilise, the depletion of moderately volatile elements such as Na largely depends on the thermal properties of the host minerals. Hence, it is unclear whether and how much of the moderately volatile elements will be lost. If Na is hosted in refractory silicates, whose melting is necessary for Na to become mobile, the critical temperature of Na depletion may coincide with or exceed the melting temperature of silicates. In this case, observed white dwarf pollutants depleted in moderately volatile elements are a signature of heating during the planetary formation stage (Harrison et al., 2021). Meanwhile, if Na (partially) resides in minerals of lower thermal stability, as proposed by Masiero et al. (2021), who suggest Na hosted in sodalite or nepheline could degas above \(\sim 1000\) K, then although a high level of moderately volatile-element depletion in massive white dwarf pollutants almost certainly originates from heating around planetary formation, it is unclear whether stellar irradiation could be the cause of moderately volatile-element depletion in the less massive (\(\lesssim\) mass of a 50 km planetesimal) white dwarf pollutants.
## 5 Conclusion
Many white dwarfs have accreted planetary materials that are rich in either core or mantle material. The formation of iron cores requires a period of large-scale melting. This work shows that very few planetesimals orbiting white dwarfs underwent large-scale melting triggered by stellar irradiation on the giant branches. For a solar-like host star, a planetesimal must initially reside between \(\sim 1.3\) AU and 1.7 AU and be smaller than \(\sim 30\) km in radius in order to melt over 95% of its volume and survive the asymptotic giant branch of the host star, even when the critical temperature of melting is lowered to 1400 K.
This work highlights planetesimal size as a key differentiator between large-scale melting that occurs due to heating on the giant branches and that due to the decay of \({}^{26}\)Al soon after planet formation. For Solar System abundances of \({}^{26}\)Al, the latter prefers bodies \(\gtrsim 50\) km in radius, whilst the former occurs only in bodies with \(R_{p}\lesssim 30\) km. Thus, the accretion of these small planetesimals that underwent large-scale melting due to giant branch heating is most likely to be seen in cool white dwarfs, especially those with relatively low absolute abundances of pollutants. They do not represent the currently observed population of core/mantle-rich white dwarf pollutants.
## Acknowledgements
We would like to thank Andrew Buchan and Laura Rogers for providing relation 19 and summarizing the white dwarf data used in Section 4.2. AB acknowledges the support of a Royal Society University Research Fellowship, URFR1211421. XL acknowledges the support of a STFC studentship.
## Data Availability
Stellar equivalent evolutionary tracks from MIST is available at [https://waps.cfa.harvard.edu/MIST/](https://waps.cfa.harvard.edu/MIST/). MESA stellar evolution code is available at [https://docs.mesastar.org/en/release-r23.05.1/](https://docs.mesastar.org/en/release-r23.05.1/). Other codes and data used in this work are available upon reasonable request to the author, Yuqi Li.
|
2310.16349 | DiffRef3D: A Diffusion-based Proposal Refinement Framework for 3D Object
Detection | Denoising diffusion models show remarkable performances in generative tasks,
and their potential applications in perception tasks are gaining interest. In
this paper, we introduce a novel framework named DiffRef3D which adopts the
diffusion process on 3D object detection with point clouds for the first time.
Specifically, we formulate the proposal refinement stage of two-stage 3D object
detectors as a conditional diffusion process. During training, DiffRef3D
gradually adds noise to the residuals between proposals and target objects,
then applies the noisy residuals to proposals to generate hypotheses. The
refinement module utilizes these hypotheses to denoise the noisy residuals and
generate accurate box predictions. In the inference phase, DiffRef3D generates
initial hypotheses by sampling noise from a Gaussian distribution as residuals
and refines the hypotheses through iterative steps. DiffRef3D is a versatile
proposal refinement framework that consistently improves the performance of
existing 3D object detection models. We demonstrate the significance of
DiffRef3D through extensive experiments on the KITTI benchmark. Code will be
available. | Se-Ho Kim, Inyong Koo, Inyoung Lee, Byeongjun Park, Changick Kim | 2023-10-25T04:17:13Z | http://arxiv.org/abs/2310.16349v1 | # DiffRef3D: A Diffusion-based Proposal Refinement Framework for 3D Object Detection
###### Abstract
Denoising diffusion models show remarkable performances in generative tasks, and their potential applications in perception tasks are gaining interest. In this paper, we introduce a novel framework named DiffRef3D which adopts the diffusion process on 3D object detection with point clouds for the first time. Specifically, we formulate the proposal refinement stage of two-stage 3D object detectors as a conditional diffusion process. During training, DiffRef3D gradually adds noise to the residuals between proposals and target objects, then applies the noisy residuals to proposals to generate hypotheses. The refinement module utilizes these hypotheses to denoise the noisy residuals and generate accurate box predictions. In the inference phase, DiffRef3D generates initial hypotheses by sampling noise from a Gaussian distribution as residuals and refines the hypotheses through iterative steps. DiffRef3D is a versatile proposal refinement framework that consistently improves the performance of existing 3D object detection models. We demonstrate the significance of DiffRef3D through extensive experiments on the KITTI benchmark. Code will be available.
## 1 Introduction
Denoising diffusion models have demonstrated remarkable performance in the field of image generation [14, 23, 24]. Recently, the attempts to extend the use of the diffusion model to visual perception tasks [22, 23, 24, 25, 26] have exhibited notable success. However, these successes are limited to perception tasks in image data, while the application to perception tasks involving other modalities (e.g., LiDAR point clouds) remains unsolved. In this paper, we introduce DiffRef3D, a novel framework that utilizes the diffusion process for the task of LiDAR-based 3D object detection.
DiffRef3D is mostly inspired by DiffusionDet [1] which aims to solve 2D object detection by adopting the diffusion process to Sparse R-CNN [24]. The key concept of DiffusionDet is to scatter random noisy boxes along the image scene and gradually refine the noisy boxes to the object bounding boxes by the reverse diffusion process. However, a straightforward extension of the DiffusionDet pipeline to the 3D domain encounters challenges attributed to the inherent dissimilarity between image and point cloud data. Firstly, since an image is consecutive grid-structured data, meaningful features can be extracted from a random location. In contrast, the random placement of boxes in 3D scenes often falls short of providing any information due to the sparsity of point cloud data caused by occlusion and the inherent limitation of LiDAR sensors. Secondly, while 2D objects have various sizes in an image frame due to perspective, objects in point cloud data are relatively small compared to the 3D scene. Given these characteristics of 3D scenes, scattering noisy boxes for object detection without any prior is akin to finding a needle in a haystack. Inevitably, the diffusion model on 3D point cloud space requires extended prerequisites such as coarse information about object sizes and displacements. Therefore, we design DiffRef3D to be a proposal refinement framework that enhances two-stage detectors, leveraging guidance of the region proposals generated from the first stage.
In DiffRef3D, we formulate the proposal refinement stage of two-stage detection models as a conditional diffusion process. For each proposal, our framework applies a noisy
Figure 1: **Diffusion model for 2D and 3D object detection**: (**a**) A diffusion model where \(q\) and \(p_{\theta}\) are the forward and reverse diffusion process respectively. (**b**) Diffusion model for 2D object detection (DiffusionDet). (**c**) We formulate 3D object detection as a conditional diffusion process based on the region proposal. Noisy boxes (or hypotheses) are colored in yellow, and predictions are colored in green.
box residual and generates a hypothesis. The model is trained to denoise the residuals by the reverse diffusion process and predict object bounding boxes, using the knowledge of both the proposal and hypothesis regions. Figure 1 highlights conceptual differences between DiffusionDet and DiffRef3D, which are proposed for 2D and 3D object detection tasks, respectively. Specifically, DiffRef3D employs region proposals as the condition for both its forward and reverse diffusion process, as opposed to DiffusionDet, which is built upon an unconditional diffusion process. To this end, we introduce a hypothesis attention module (HAM), which treats features extracted from hypotheses to be conditioned on region proposals. During training, the noisy residuals are generated through the forward diffusion process from the true residuals between the proposals and target object bounding boxes. For inference, the residuals are sampled from a Gaussian distribution, and the model iteratively refines the prediction by denoising the hypotheses. We adopt the iterative sampling method from DDIM [22] to predict intermediate hypotheses during inference. DiffRef3D can be implemented with existing two-stage 3D object detectors and consistently improve performances compared to the baseline. We demonstrate the performance and scalability of DiffRef3D through extensive experiments held on the KITTI benchmark [1].
In summary, our main contributions are as follows:
* DiffRef3D serves as a generalizable proposal refinement framework that can be applied to existing two-stage 3D object detectors, resulting in consistent performance improvements.
* To the best of our knowledge, DiffRef3D represents the pioneering effort in employing the diffusion process for 3D object detection, interpreting the proposal refinement stage as a conditional diffusion process.
## 2 Related Works
### LiDAR-based 3D Detection
3D object detectors can be broadly categorized into two methods: point-based methods and voxel-based methods. Point-based methods [13, 12] take the raw point clouds as inputs and employ permutation invariant operations to aggregate the point features. Voxel-based methods [15, 16] divide point clouds into a regularly structured grid and apply convolutional operations to extract features. Two-stage models employ the above methods to serve as a region proposal network (RPN), introducing the proposal refinement stage that further improves the prediction quality.
PointRCNN [17] performs foreground segmentation and generates initial proposals using a PointNet++ [13] backbone, and then extracts RoI features from the point clouds inside the proposals for refinement. PV-RCNN [17] proposes a Voxel Set Abstraction module to encode the voxel-wise feature volumes into a small set of keypoints, then aggregates to the RoI-grid points to take advantage of both point-based and voxel-based methods to refine proposals. Voxel R-CNN [14] follows a typical pipeline of anchor-based two-stage models, using SECOND [16] as RPN and refine proposals with the voxel features pooled for RoI. CenterPoint [16] predicts proposals from object centers computed from class-specific heatmap then point features are extracted from 3D centers of each face of the proposals for refinement. CT3D [18] embeds and decodes proposals into effective proposal features based on neighboring point features for box prediction by channel-wise transformer.
There are special cases of two-stage models which utilize additional proposal refinement modules in order to adopt the cascade detection paradigm. 3D Cascade RCNN [15] iteratively refines proposals through cascade detection heads while considering a point completeness score. CasA [14] progressively refines proposals with a cascade refinement network while aggregating features from each stage through a cascade attention module. The cascade detection paradigm boosts performance by leveraging the advantages of multiple predictions, such as ensembling; however, it lacks flexibility in the number of refinement steps and increases memory usage through additional detection heads. In contrast, iterative sampling in diffusion-based models exploits these advantages without increasing memory usage per sampling step, and the number of sampling steps is adjustable without additional training.
### Diffusion Framework on Perception Task
The diffusion framework [10, 13] has shown remarkable performance in the field of image generation. Notably, DDIM [22] expedites the sampling process through the utilization of a more efficient class of iterative implicit probabilistic models. Drawing attention with its denoising capabilities and performance, there has been a surge of recent efforts to apply it to perception tasks [16, 17] beyond generation tasks. DiffusionDet [15] first adopted the diffusion model for the 2D object detection problem by denoising random boxes into object bounding boxes via the diffusion process. DiffusionInst [13] utilizes additional mask branches and instance-aware filters upon DiffusionDet to address the instance segmentation task, and the following works [16, 17] show competitive performance on semantic segmentation. However, diffusion-based perception models have mostly been explored in the image domain. Through DiffRef3D, we demonstrate the first application of the diffusion framework to a 3D perception task that involves point cloud input.
## 3 Preliminaries
### Proposal Refinement Module
A proposal refinement module in two-stage 3D object detectors takes the initial proposals of the RPN as input and exploits the information of the RoIs to refine them. Formally, the refinement module takes a proposal \(X^{P}\) and outputs the
box residual \(\hat{x}\) and the confidence score \(\hat{c}\):
\[f^{P} =\text{RoIPool}(X^{P}), \tag{1}\] \[\hat{x},\hat{c} =\text{Det}(f^{P}). \tag{2}\]
The RoI feature pooling operation \(\text{RoIPool}(\cdot)\) extracts the proposal feature \(f^{P}\) from \(X^{P}\). \(\text{Det}(\cdot)\) represents the detection head, which consists of a regression branch producing \(\hat{x}\) and a classification branch producing \(\hat{c}\). \(\hat{x}\) is supervised with the target residual \(x^{gt}\) between the proposal and its target object box \(X^{T}\), i.e., \(x^{gt}=X^{T}\ominus X^{P}\), where \(\ominus\) represents the box encoding function [20], and the target for \(\hat{c}\) is calculated with the intersection over union (IoU) between \(X^{P}\) and \(X^{T}\).
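The paper does not spell out the encoding \(\ominus\); the sketch below assumes the common SECOND-style residual encoding for boxes \((x,y,z,d_x,d_y,d_z,\theta)\), which is one standard choice for [20].

```python
import torch

def encode_residual(gt, proposal):
    """x^gt = X^T (-) X^P with a SECOND-style encoding (an assumption here)."""
    xg, yg, zg, dxg, dyg, dzg, rg = gt.unbind(-1)
    xa, ya, za, dxa, dya, dza, ra = proposal.unbind(-1)
    diag = torch.sqrt(dxa ** 2 + dya ** 2)  # normalise planar offsets
    return torch.stack([
        (xg - xa) / diag, (yg - ya) / diag, (zg - za) / dza,
        torch.log(dxg / dxa), torch.log(dyg / dya), torch.log(dzg / dza),
        rg - ra,
    ], dim=-1)
```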
### Conditional Diffusion Model
During training, a small amount of Gaussian noise is gradually added to the data sample \(x_{0}\) through a Markovian chain of the forward diffusion process. The forward diffusion process is formulated as:
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{ I}), \tag{3}\]
where \(\beta_{t}\) represents a noise variance schedule [14, 15, 16]. For brevity, let \(\alpha_{t}=1-\beta_{t}\), \(\bar{\alpha_{t}}=\prod_{i=1}^{t}\alpha_{i}\), and a noisy sample \(x_{t}\) at timestep \(t,0\leq t\leq T\), is derived as follows:
\[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})\mathbf{I}), \tag{4}\]
\[x_{t}=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\epsilon,\ \text{where }\epsilon\sim\mathcal{N}(0,\mathbf{I}). \tag{5}\]
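As a minimal sketch (not from the paper itself), the forward process of Eq. 5 can be written directly; the cosine schedule follows Nichol and Dhariwal (2021), which the paper adopts later, and the SNR scaling mirrors Algorithm 1.

```python
import math
import torch

T = 1000  # maximum timestep

def alpha_bar(t, s=0.008):
    """Cumulative product alpha_bar(t) under the cosine noise schedule."""
    f = lambda u: math.cos((u / T + s) / (1 + s) * math.pi / 2) ** 2
    return f(t) / f(0)

def q_sample(x0, t, snr=2.0):
    """Draw x_t ~ q(x_t | x_0) (Eqs. 4-5), with the noise scaled by the SNR."""
    ab = alpha_bar(t)
    eps = torch.randn_like(x0)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps / snr
```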
The conditional diffusion model learns a reverse diffusion process, aiming to progressively refine \(x_{t}\) back to \(x_{0}\) with the guidance of the conditional input \(y\). The reverse diffusion process is formulated as follows:
\[p_{\theta}(x_{0:T}|y)=p(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t},y). \tag{6}\]
## 4 DiffRef3D
In this section, we introduce DiffRef3D, a diffusion-based proposal refinement framework for 3D object detection. We formulate the refinement module of two-stage detectors as a conditional diffusion model that takes proposal \(X^{P}\) as the conditional input and denoises the noisy residual \(x_{t}\) sampled from timestep \(t\). DiffRef3D reconstructs the actual residual towards the object \(\hat{x}_{0}^{(t)}\) by the reverse diffusion process and also outputs the confidence score \(\hat{c}^{(t)}\):
\[\hat{x}_{0}^{(t)},\hat{c}^{(t)}=\text{DiffRef3D}(x_{t},t|X^{P}). \tag{7}\]
As shown in Fig. 2, DiffRef3D can be implemented on any existing refinement module that follows the general formulation of Eq. 1, 2 by adding a simple parallel module named the hypothesis attention module (HAM). Details of the HAM architecture, training, and inference procedures are described in the following subsections.
### Hypothesis Attention Module
The hypothesis \(X_{t}^{H}\) is a secondary inspection region derived by applying the given box residual to the proposal, i.e., \(X_{t}^{H}=X^{P}\oplus x_{t}\). As the baseline refinement module extracts the proposal feature \(f^{P}\) (Eq. 1), HAM also extracts the hypothesis feature \(f_{t}^{H}\) using the same RoI feature pooling operation:
\[f_{t}^{H}=\text{RoIPool}(X_{t}^{H}). \tag{8}\]
The hypothesis feature is further processed to incorporate the information of \(x_{t}\) and \(t\) by applying a self-attention block \(\text{Attn}(Q,K,V)\) and a temporal transformation block \(\text{TT}(\cdot,t)\).
The self-attention block is a common multi-head attention module [20] that takes \(f_{t}^{H}\) as input for the query, key, and value. Except, we embed \(x_{t}\) as a positional encoding to the query. We utilize a shallow MLP \(g_{\phi}(\cdot)\) to convert \(x_{t}\) as the same dimensional vector with \(f_{t}^{H}\). Formally, the attention feature of the hypothesis \(a_{t}^{H}\) is computed as:
\[a_{t}^{H}=\text{Attn}(f_{t}^{H}+g_{\phi}(x_{t}),f_{t}^{H},f_{t}^{H}). \tag{9}\]
\(a_{t}^{H}\) is a feature vector that represents the information of the hypothesis region with consideration of its relative displacement from the proposal.
The temporal transformation block reweights \(a_{t}^{H}\) with respect to \(t\) to adjust the effect of the noise signal which gets dominant as timestep increases. Specifically, the output of HAM \(h_{t}\) is formulated as follows:
\[h_{t}=\text{TT}(a_{t}^{H},t)=W_{t}\cdot a_{t}^{H}+b_{t} \tag{10}\]
where \(W_{t},b_{t}\) are the scaling and shifting factors derived from an MLP \(s_{\psi}(\cdot)\):
\[[W_{t};b_{t}]=s_{\psi}(t). \tag{11}\]
Finally, \(h_{t}\) is aggregated with the proposal feature for prediction:
\[\hat{x}_{0}^{(t)},\hat{c}^{(t)}=\text{Det}(f^{P}+h_{t}). \tag{12}\]
Figure 2: **Diffusion-based Refinement Module. Based on an existing proposal refinement module, DiffRef3D further explores a secondary region (hypothesis) feature by processing through the hypothesis attention module.**
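A sketch of HAM (Eqs. 8-12) as a PyTorch module, assuming the baseline's RoI-pooled hypothesis features are already computed; the layer widths for \(g_{\phi}\) and \(s_{\psi}\) follow the architecture details in Section 5, but the rest is an illustrative reading of the equations rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class HypothesisAttentionModule(nn.Module):
    """Sketch of HAM: residual-aware self-attention (Eq. 9) followed by a
    timestep-conditioned scale/shift of the hypothesis feature (Eqs. 10-11)."""

    def __init__(self, d, num_heads=8):
        super().__init__()
        self.g_phi = nn.Sequential(nn.Linear(7, d), nn.ReLU(), nn.Linear(d, d))
        self.attn = nn.MultiheadAttention(d, num_heads, batch_first=True)
        # s_psi(t) -> [W_t; b_t], a 2d-dimensional scale/shift vector (Eq. 11)
        self.s_psi = nn.Sequential(nn.Linear(1, 4 * d), nn.ReLU(),
                                   nn.Linear(4 * d, 2 * d))

    def forward(self, f_h, x_t, t):
        # f_h: hypothesis RoI features [B, N, d]; x_t: noisy residuals [B, N, 7]
        # t:   integer timesteps [B]
        q = f_h + self.g_phi(x_t)                 # x_t as positional encoding
        a_h, _ = self.attn(q, f_h, f_h)           # attention feature (Eq. 9)
        wb = self.s_psi(t.float().unsqueeze(-1))  # [B, 2d]
        w, b = wb.unsqueeze(1).chunk(2, dim=-1)   # scale/shift, [B, 1, d] each
        return w * a_h + b                        # temporal transform (Eq. 10)
```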
```
def train(pr_box, gt_box):
    # pr_box: proposal boxes [B, N, 7]
    # gt_box: ground-truth boxes [B, *, 7]

    # Compute target residuals and class labels
    x_0, c_gt = assign_target(pr_box, gt_box)

    # Forward diffusion: corrupt the target residual (Eq. 5),
    # with the noise scaled by the signal-to-noise ratio
    t = randint(0, T)
    eps = normal(mean=0, std=1)  # [B, N, 7]
    SNR = 2
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps / SNR

    # Predict the denoised residuals and confidence scores
    x_hat, c_hat = DiffRef3D(pr_box, x_t, t)

    # Compute task-specific losses
    reg_loss = get_reg_loss(x_0, x_hat)
    cls_loss = get_cls_loss(c_gt, c_hat)
    return reg_loss + cls_loss
```
**Algorithm 1** DiffRef3D training procedure
### Training
During the training stage, we set the reconstruction target \(x_{0}=x^{gt}\) and sample the noisy residual \(x_{t}\) by the forward diffusion process. The model is trained to perform the reverse diffusion process of reconstructing the true residual, which is identical to refining a hypothesis to the target object bounding box. We elucidate the methodology for generating hypotheses from proposals and present the overall training process of DiffRef3D in pseudo-code in Algorithm 1.
**Hypothesis generation.** From our attempts, spreading random boxes across the point cloud scene to detect objects fails to train 3D object detection by the diffusion process. Therefore, we propose hypothesis generation to map random signals into 3D boxes rather than mapping normalized values directly onto a point cloud scene. Utilizing proposals as a reference frame to generate a hypothesis guarantees that random boxes start in close proximity to potential object locations. Before we apply the forward diffusion process to the target residual \(x^{gt}\), normalization is performed, since the range of the residual differs according to the size of each proposal. Details of the normalization process are provided in the supplementary materials. Noise scaled by the signal-to-noise ratio (SNR) is applied to \(x_{0}\) according to Eq. 5, resulting in the noisy residual \(x_{t}\). Finally, the hypothesis \(X_{t}^{H}\) is obtained by applying the noisy residual \(x_{t}\) to the proposal \(X^{P}\).
**Training objectives.** Recent diffusion models [10] are trained with an L2 loss between the predicted noise and the actual noise added to the data sample. However, noise prediction can be replaced by direct prediction of the data sample. A number of diffusion-based approaches [1, 13, 12, 11] on perception tasks report that it is better to predict data samples and train on task-specific losses. Following this consensus from previous works, we also train our model with task-specific losses, e.g., binary cross-entropy loss for classification and smooth-L1 loss for regression.
### Inference
During the inference stage, DiffRef3D refines the hypothesis generated from random Gaussian noise and follows iterative sampling steps from DDIM [13] for progressive refinement. Starting from a random Gaussian noise \(x_{T}\sim\mathcal{N}(0,1)\), DDIM progressively denoises Gaussian noise into data samples with step size \(\Delta t\), i.e., \(x_{T}\to x_{T-\Delta t}\to...\to x_{0}\). The progressive refinement is formulated as follows:
\[x_{t-\Delta t}=\hat{x}_{0}^{(t)}\sqrt{\alpha_{t-\Delta t}}+\epsilon^{(t)}\sqrt {1-\alpha_{t-\Delta t}-\sigma_{t}^{2}}+\sigma_{t}\epsilon \tag{13}\]
\[\epsilon^{(t)}=\frac{x_{t}-\sqrt{\alpha_{t}}\hat{x}_{0}^{(t)}}{\sqrt{1-\alpha_{t}}} \tag{14}\] \[\sigma_{t}=\sqrt{\frac{1-\alpha_{t-\Delta t}}{1-\alpha_{t}}}\sqrt{1-\frac{\alpha_{t}}{\alpha_{t-\Delta t}}} \tag{15}\]
where \(\epsilon^{(t)}\) represents the noise term added to the data sample and \(\sigma_{t}\) represents the stochastic parameter, respectively. Algorithm 2 describes the inference procedure of DiffRef3D in pseudo-code.
**Iterative sampling.** DiffRef3D generates hypotheses in the first sampling step using residuals sampled from a Gaussian distribution. These hypotheses perform bounding box predictions in conjunction with proposal features, and the predicted outcomes serve two purposes: firstly, as the subsequent proposals in the next sampling step, and secondly, they contribute to inferring the next-step hypotheses according to Eq. 13. The iterative sampling steps share similarities with the cascade detection paradigm [13, 12];
however, in the case of cascade detection, distinct weights for each iteration's detection head are required, limiting the number of iterations. In contrast, iterative sampling employs a single-weight detection head, minimizing memory consumption and allowing customizable sampling repetitions.
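Algorithm 2 is referenced above but not reproduced in this text; the following sketch, written in the same pseudo-code style as Algorithm 1, illustrates the DDIM-style iterative sampling of Eqs. 13-15. The helpers `alpha_bar`, `apply_residual`, and `average` are assumed names, mirroring the conventions of Algorithm 1.

```
def inference(pr_box, steps=3, eta=1.0):
    # pr_box: proposal boxes [B, N, 7]; one hypothesis per proposal
    x_t = normal(mean=0, std=1)            # initial residuals ~ N(0, I)
    times = linspace(T, 0, steps + 1)      # e.g. [1000, 667, 333, 0]
    preds = []
    for t, t_prev in zip(times[:-1], times[1:]):
        x0_hat, c_hat = DiffRef3D(pr_box, x_t, t)
        ab, ab_prev = alpha_bar(t), alpha_bar(t_prev)
        eps = (x_t - sqrt(ab) * x0_hat) / sqrt(1 - ab)            # Eq. 14
        sigma = eta * sqrt((1 - ab_prev) / (1 - ab)) \
                    * sqrt(1 - ab / ab_prev)                      # Eq. 15
        x_t = sqrt(ab_prev) * x0_hat \
            + sqrt(1 - ab_prev - sigma ** 2) * eps \
            + sigma * normal(mean=0, std=1)                       # Eq. 13
        pr_box = apply_residual(pr_box, x0_hat)  # prediction -> next proposal
        preds.append((pr_box, c_hat))
    return average(preds)  # box ensemble over intermediate predictions
```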
## 5 Experiments
In this section, we delve into the intricate implementation details and outline the evaluation setup adopted to assess the performance of DiffRef3D. Moreover, we conduct ablation studies to comprehensively examine each element of DiffRef3D and affirm the validity of our design choices.
### Implementation Details
#### Architecture
We implement DiffRef3D on three popular two-stage 3D object detectors: Voxel R-CNN (Deng et al., 2021), PV-RCNN (Shi et al., 2020), and CT3D (Sheng et al., 2021). The models enhanced with the DiffRef3D framework are denoted as DiffRef3D-V, DiffRef3D-PV, and DiffRef3D-T, respectively. The model architectures mostly follow the original baselines, and only the HAM module is newly introduced. Given the \(\text{RoIPool}(\cdot)\) output feature dimension \(d\) (\(d=96\) for Voxel R-CNN, 128 for PV-RCNN and CT3D), \(a_{t}^{H}\) and \(h_{t}\) are also \(d\)-dimensional vectors. \(\text{Attn}(\cdot)\) is a multi-head attention module with 8 heads, and \(g_{\phi}(\cdot)\) is a two-layer MLP with a hidden dimension of \(d\). \(s_{\psi}(\cdot)\) is also a two-layer MLP, with a hidden dimension of \(4d\).
#### Training
For consistency, training configurations for the 3D object detectors mirror those reported in the respective baseline works. In the context of the diffusion process, we set the maximum timestep \(T\) to 1000 and adopt a cosine noise schedule (Nichol and Dhariwal, 2021) for the forward diffusion process. From our observations, an SNR of 2 for signal scaling results in optimal performance. During the training stage, only one hypothesis was generated for each proposal. All models were trained using 4 NVIDIA RTX 3090 GPUs, and the training duration was largely consistent with the baseline models due to the limited computational overhead introduced by the HAM.
#### Inference
In the inference phase, DiffRef3D can control the number of sampling steps and the number of hypotheses generated for each proposal. However, we observed that increasing the number of hypotheses yields only minimal performance improvement. Therefore, we fixed the number of hypotheses generated for each proposal at one. We also employ an ensemble method to make use of the outputs of intermediate steps by averaging the predictions.
### Results on KITTI Benchmark
The KITTI benchmark (Geiger et al., 2012) is one of the most popular benchmarks for 3D object detection evaluation. The dataset consists of 7,481 LiDAR samples for training and 7,518 LiDAR samples for testing. The training samples are divided into the _train_ set with 3,712 samples and the _val_ set with the remaining 3,769 samples. We present two performance evaluations for comparative analysis: for evaluation on the KITTI _val_ set, we compared DiffRef3D-V, DiffRef3D-PV, and DiffRef3D-T trained on the _train_ set with their respective baselines. For the KITTI _test_ set, we trained a model with our framework using 5,981 randomly selected samples, validated it with the remaining 1,500 samples, and evaluated the performance on the online test leaderboard. The performance was assessed across three classes (_car_, _pedestrian_, and _cyclist_) at three difficulty levels (easy, moderate, and hard). The results on both the _val_ and _test_ sets were evaluated using the mean average precision calculated at 40 recall positions.
#### Comparison with baseline models
Table 1 outlines the efficacy of the DiffRef3D implementation across three baseline models on the KITTI _val_ set. We present the performance of the baseline models by incorporating the officially released model weights from each study, except for Voxel R-CNN, which is reported without _pedestrian_ and _cyclist_ class results; we therefore reproduced the Voxel R-CNN detection results on the three classes based on its open-source code. DiffRef3D improves detection performance for the _pedestrian_ and _cyclist_ classes for all baseline models and difficulties, while performance remains consistent or decreases slightly in the _car_ class. We attribute this to _car_-class proposal boxes being sufficiently large to exploit local features, which results in DiffRef3D performing similarly to the baseline models on _car_ class
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Car 3D AP\({}_{R40}\)} & \multicolumn{3}{c|}{Ped. 3D AP\({}_{R40}\)} & \multicolumn{3}{c}{Cyc. 3D AP\({}_{R40}\)} \\ & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline Voxel R-CNN (Deng et al., 2021) & 92.24 & 84.57 & 82.30 & 64.97 & 58.07 & 52.66 & 91.83 & 73.13 & 68.40 \\ DiffRef3D-V (1 step) & 92.53 & 84.98 & 82.55 & 67.32 & 60.45 & 55.28 & 89.13 & 71.80 & 67.60 \\ DiffRef3D-V (3 step) & 92.89 & 84.77 & 82.64 & 69.70 & 63.07 & 57.70 & 93.33 & 74.20 & 69.63 \\ \hline PV-RCNN (Shi et al., 2020) & 92.10 & 84.36 & 82.48 & 64.26 & 56.67 & 51.91 & 88.88 & 71.95 & 66.78 \\ DiffRef3D-PV (1 step) & 91.67 & 84.21 & 81.93 & 64.56 & 58.79 & 54.44 & 92.37 & 72.44 & 68.42 \\ DiffRef3D-PV (3 step) & 91.84 & 84.33 & 82.15 & 65.71 & 59.46 & 55.13 & 93.76 & 73.94 & 69.60 \\ \hline CT3D (Sheng et al., 2021) & 92.34 & 84.97 & 82.91 & 61.05 & 55.57 & 51.10 & 89.01 & 71.88 & 67.91 \\ DiffRef3D-T (1 step) & 92.99 & 85.13 & 82.83 & 65.09 & 59.13 & 54.36 & 88.20 & 71.27 & 67.13 \\ DiffRef3D-T (3 step) & 93.28 & 85.13 & 82.97 & 67.49 & 61.29 & 56.17 & 90.92 & 72.61 & 68.52 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with baseline models on the KITTI _val_ set
detection. DiffRef3D with a sampling step of 3 improves the baseline Voxel R-CNN, PV-RCNN, and CT3D by 5.00%, 2.79%, and 5.72% AP (R40), respectively, in the moderate _pedestrian_ class. Moreover, DiffRef3D improves the _cyclist_ detection performance by 1.07%, 1.99%, and 0.73%, respectively. The performance improvements primarily stem from the hypothesis generation, which contributes additional local features from individual proposals, consequently yielding greater enhancements in the _pedestrian_ and _cyclist_ classes. Notably, as the sampling step increases in DiffRef3D, detection performance increases at the cost of additional computation. Nevertheless, the performance improvement becomes less pronounced relative to the escalating computational demands as the sampling step is raised. Therefore, we present the results for sampling steps of 1 and 3 in comparison with the baseline models.
**Comparison with state-of-the-art methods.** For further comparison with state-of-the-art methods, we report results of DiffRef3D-V on the KITTI _test_ set, since it shows the best overall performance among the three enhanced baseline models. We trained DiffRef3D-V on a random split of the training samples and report results on the KITTI _test_ set with a sampling step of 3. Table 2 summarizes the comparison with state-of-the-art methods. DiffRef3D-V shows performance comparable to other models in the _car_ and _cyclist_ classes, while performance lags slightly on _pedestrian_. Nevertheless, as DiffRef3D is applicable to other two-stage 3D detectors published later, it can consistently show competitive performance with state-of-the-art models.
### Ablation study
We conduct extensive ablation studies on the KITTI _val_ set to verify our design choices of DiffRef3D.
**Effectiveness of HAM and diffusion process.** To investigate the effects of HAM and the diffusion process (denoted as D.P. in Table 3), we assessed performance by implementing HAM and D.P. with Voxel R-CNN as our baseline model. DiffRef3D-V refers to the configuration with both HAM and D.P. enabled. During the ablation study, the box ensemble is applied by default for multiple sampling steps. Moreover, we also applied the iterative update of box predictions to the baseline for a fair comparison. Table 3 summarizes the results of the experiment. In the absence of D.P., which implies that hypotheses are sampled from random Gaussian noise during both training and testing, AP (R40) decreased by 0.4% and 0.1% at sampling steps of 1 and 2, respectively. This suggests that hypotheses generated by noise that does not follow the diffusion process distract the baseline refinement module. Integrating all of our proposed components results in AP (R40) improvements over the baseline of 0.83%, 1.76%, and 2.69% for the respective sampling steps. Note that the performance improvement introduced by iterative sampling steps is higher with D.P.: the baseline AP (R40) increased by 0.39%, whereas DiffRef3D-V increased by 2.32%, at sampling steps of 2 compared to sampling steps of 1.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Car 3D AP\({}_{R40}\)} & \multicolumn{3}{c|}{Ped. 3D AP\({}_{R40}\)} & \multicolumn{3}{c}{Cyc. 3D AP\({}_{R40}\)} \\ & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline SECOND [23] & 83.34 & 72.55 & 65.82 & 48.73 & 40.57 & 37.77 & 71.33 & 52.08 & 45.83 \\ PointPillars [14] & 82.58 & 74.31 & 68.99 & 51.45 & 41.92 & 38.89 & 77.10 & 58.65 & 51.92 \\ PointRCNN [20] & 86.96 & 76.50 & 71.39 & 47.98 & 39.37 & 36.01 & 74.96 & 58.82 & 52.53 \\ STD [22] & 86.61 & 77.63 & 76.06 & 53.08 & 44.24 & 41.97 & 78.89 & 62.53 & 55.87 \\ Point-GNN [23] & 88.33 & 79.47 & 72.29 & 51.92 & 43.77 & 40.14 & 78.60 & 63.48 & 57.08 \\
3DSSD [22] & 88.36 & 79.57 & 74.55 & 50.64 & 43.09 & 39.65 & 82.48 & 64.10 & 56.90 \\ SA-SSD [17] & 88.75 & 79.79 & 74.16 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ PV-RCNN [20] & 90.25 & 81.43 & 76.82 & 52.17 & 43.29 & 40.29 & 78.60 & 63.71 & 57.65 \\ Part-A2 [24] & 87.81 & 78.49 & 73.51 & 53.10 & 43.35 & 40.06 & 79.17 & 63.52 & 56.93 \\ Voxel R-CNN [1] & 90.90 & 81.62 & 77.06 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ VoTr [14] & 89.90 & 82.09 & 79.14 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ SPG [22] & 90.64 & 82.66 & 77.91 & \(-\) & \(-\) & \(-\) & \(80.21\) & 66.96 & 63.61 \\ CT3D [1] & 87.83 & 81.77 & 77.16 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ BTCDet [22] & 90.64 & 82.86 & 78.09 & \(-\) & \(-\) & \(-\) & \(82.81\) & 68.68 & 61.81 \\ PDV [13] & 81.86 & 77.36 & 47.80 & 40.56 & 38.46 & 83.04 & 67.81 & 60.46 \\
3D Cascade RCNN [22] & 90.46 & 82.16 & 77.06 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ PG-RCNN [22] & 89.38 & 82.13 & 77.33 & 47.99 & 41.04 & 38.71 & 82.77 & 67.82 & 61.25 \\ \hline DiffRef3D-V & 90.45 & 81.29 & 76.66 & 46.59 & 40.55 & 38.27 & 80.16 & 66.61 & 59.98 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison with state-of-the-art methods on the KITTI _test_ set
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline \multirow{2}{*}{HAM} & \multirow{2}{*}{D.P.} & \multicolumn{3}{c}{Sampling Steps} \\ & & 1 & 2 & 3 \\ \hline & & 59.62 & 61.01 & 61.38 \\ ✓ & & 59.22 & 60.91 & 61.50 \\ ✓ & ✓ & 60.45 & 62.77 & 63.07 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation studies on the KITTI _val_ set with AP (R40) for pedestrian in moderate difficulty with different configurations.
**Sampling steps.** We also conduct experiments to investigate the impact of increasing the number of sampling steps on performance and inference time. As the baseline model, DiffRef3D-V trained on the KITTI _train_ set is employed. The results of the ablation study are summarized in Table 4. Increasing the sampling steps from 1 to 5 provides an AP (R40) gain of 0.49%, 0.85%, and 2.24% in _car_, _cyclist_, and _pedestrian_, respectively. However, the AP gain introduced by increasing the number of sampling steps becomes insignificant compared to the increase in latency beyond 3 sampling steps. Therefore, we select 3 as the optimal number of sampling steps throughout the other experiments.
### Qualitative Analysis
We visualize the hypothesis-driven box prediction process facilitated by the proposal refinement of DiffRef3D. Figure 3 illustrates the iterative sampling steps of DiffRef3D at each timestep. The visualization implies the necessity of proposals as conditional inputs, due to point cloud sparsity and the relatively small object size compared to the point cloud scene. Figure 3 (a) shows that hypotheses generated around proposal centers tend to prioritize regions with higher object presence, while the hypotheses exhibit less realistic box forms since they are generated from random Gaussian noise. During the iterative sampling, DiffRef3D progressively refines proposals and hypotheses toward the object boxes, as demonstrated in Fig. 3 (b) and (c). This result highlights that DiffRef3D successfully employs the diffusion process to iteratively denoise hypotheses and refine proposals.
To verify that the temporal transformation block accounts for the effect of the noise signal, we visualize the norm of the scaling factor along every timestep. The results are illustrated in Fig. 4. We affirm that the norm of the scaling factor is larger at smaller timesteps, implying that the refinement module pays more attention to the hypothesis feature in the later refinement process. Therefore, the temporal transformation block controls the importance of the hypothesis feature before aggregating it with the proposal feature.
## 6 Conclusion
In this article, we propose a novel framework, DiffRef3D, that utilizes the diffusion process for the task of 3D object detection with point clouds. The HAM and hypothesis generation scheme we proposed demonstrated consistent performance improvements across different models by integrating the proposal refinement of a two-stage 3D object detector into the diffusion process. Experiments on the KITTI benchmark show that DiffRef3D can serve as a generalizable proposal refinement framework that can be applied to other two-stage 3D object detectors with simple implementation but effective performance improvement.
\begin{table}
\begin{tabular}{c|c c c|c} \hline \hline Sampling Steps & Car & Ped. & Cyc. & Latency (ms) \\ \hline
1 & 92.53 & 69.15 & 89.19 & 72 \\
2 & 92.86 & 69.50 & 93.05 & 113 \\
3 & 92.89 & 69.70 & 93.33 & 152 \\
4 & 92.91 & 70.04 & 94.18 & 185 \\
5 & 93.02 & 70.00 & 92.43 & 232 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation studies on the KITTI _val_ set with AP (R40) for easy difficulty with varying sample steps.
Figure 4: **Norm of scaling factor from temporal transformation**. The impact of the hypothesis feature increases as the timestep decreases after temporal transformation.
Figure 3: **Visualization of iterative proposal refinement on the KITTI benchmark**. DiffRef3D refines the proposals using features from the previous predictions and the hypotheses sampled at the current timestep. Each image shows hypotheses and predictions at (a) t=1000, (b) t=500, (c) t=200, and (d) the final prediction. Hypotheses are colored in yellow, predictions are colored in green, and ground truths are colored in blue. |
2302.01424 | Monolithic Six-DOF Parallel Positioning System for High-precision and
Large-range Applications | A compact large-range six-degrees-of-freedom (six-DOF) parallel positioning
system with high resolution, high resonant frequency, and high repeatability
was proposed. It mainly consists of three identical kinematic sections. Each
kinematic section consists of two identical displacement amplification and
guiding mechanisms, which are finally connected to a limb. Each limb was
designed with a universal joint at each end and connected to a moving stage. A
computational model of the positioner was built in the ANSYS software package,
hence, the input stiffness, output compliance, range, and modal analysis of the
system were found. Furthermore, a monolithic prototype made of Acrylonitrile
Butadiene Styrene (ABS) was successfully manufactured by the 3D-printing
process. It was actuated and sensed by piezoelectric actuators (PEAs) and
capacitive displacement sensors, respectively. Finally, the performances of
this proposed positioner were experimentally investigated. The positioning
resolution was achieved as 10.5nm {\times} 10.5nm {\times} 15nm {\times}
1.8{\mu}rad {\times} 1.3{\mu}rad {\times} 0.5{\mu}rad. The experimental results
validate the behavior and capabilities of the proposed positioning system, and
also verify the nanometer-scale spatial positioning accuracy within the overall
stroke range. Practical applications of the proposed system can be expanded to
pick-and-place manipulation, vibration-canceling in
microsurgery/micro-assembly, and collaborative manipulators systems. | Mohammadali Ghafarian, Bijan Shirinzadeh, Ammar Al-Jodah | 2023-02-02T21:30:23Z | http://arxiv.org/abs/2302.01424v1 | # Monolithic Six-DOF Parallel Positioning System for High-precision and Large-range Applications
###### Abstract
A compact large-range six-degrees-of-freedom (six-DOF) parallel positioning system with high resolution, high resonant frequency, and high repeatability was proposed. It mainly consists of three identical kinematic sections. Each kinematic section consists of two identical displacement amplification and guiding mechanisms, which are finally connected to a limb. Each limb was designed with a universal joint at each end and connected to a moving stage. A computational model of the positioner was built in the ANSYS software package, hence, the input stiffness, output compliance, range, and modal analysis of the system were found. Furthermore, a monolithic prototype made of Acrylonitrile Butadiene Styrene (ABS) was successfully manufactured by the 3D-printing process. It was actuated and sensed by piezoelectric actuators (PEAs) and capacitive displacement sensors, respectively. Finally, the performances of this proposed positioner were experimentally investigated. The positioning resolution was achieved as \(10.5\text{nm}\times 10.5\text{nm}\times 15\text{nm}\times 1.8\mu\text{rad}\times 1.3\mu\text{rad}\times 0.5\mu\text{rad}\). The experimental results validate the behavior and capabilities of the proposed positioning system, and also verify the nanometer-scale spatial positioning accuracy within the overall stroke range. Practical applications of the proposed system can be expanded to pick-and-place manipulation, vibration-canceling in microsurgery/micro-assembly, and collaborative manipulator systems.
range of 77.42\(\mu\)m, 67.45\(\mu\)m, 24.56\(\mu\)m, 930\(\mu\)rad, 950\(\mu\)rad, and 3100\(\mu\)rad. The design and modeling of a six-DOF compliant dual redundant serial-parallel stage were presented [26]. The mechanism utilized two different actuation principles (16 actuators in total): PEAs and ultrasonic motors (USMs). Accordingly, two working modes, micro-motion and macro-motion, were achieved, with reachable workspaces of 140\(\mu\)m \(\times\) 140\(\mu\)m \(\times\) 135\(\mu\)m and 9.7mm \(\times\) 9.7mm \(\times\) 9.5mm, respectively. Implementing topology optimization, the model of a six-DOF spatial compliant monolithic system with a sub-micron workspace was constructed [27], which had the same differential kinematic characteristics as the Gough-Stewart prototype platform. A small-range non-monolithic six-DOF micropositioner based on a compliant mechanism, with an overall dimension of 241 \(\times\) 241 \(\times\) 67mm\({}^{3}\) and driven by 8 PEAs, was introduced [28]. Using more PEAs than the number of working axes complicated the control design of the positioner. The design and screw-based Jacobian analysis of a six-DOF compliant parallel positioning system were investigated [29], which featured small rotational and out-of-plane translational motions. The design and test of a six-DOF compliant piezo-driven micropositioning system, which consisted of two parallel three-DOF compliant positioners assembled serially together, were presented [30]. The proposed positioning system exhibited small translational and rotational workspaces of 8.2\(\mu\)m, 10.5\(\mu\)m, 13.0\(\mu\)m, and 105\(\mu\)rad, 97\(\mu\)rad, 224\(\mu\)rad, respectively. A six-DOF magnetic levitation stage actuated by 8 voice coil motors (VCMs) was presented [31]. The system consisted of upper and lower plates manufactured from aluminum and had an overall dimension of 450 \(\times\) 450mm\({}^{2}\). The bandwidth frequency, and consequently the dynamic characteristics, of such a system would be low due to the levitation characteristic.
According to the above literature review, difficulties associated with the design and experimentation of six-DOF micropositioning systems, including overall size, over-actuation, coarse resolution, low dynamic performance, small working range, measurement strategy, and control technique, explain the lack of in-depth experimental studies on this subject. To address this need, in-depth computational and experimental investigations of the performance of a compact monolithic large-range six-DOF parallel positioner with high resolution, high resonant frequency, and high repeatability were proposed. Due to the well-established static and dynamic advantages of monolithic structures, the design was manufactured as one piece using a 3D-printing technique. Six PEAs and six high-resolution capacitive displacement sensors were used to generate fine motion and to enable nanometer-scale resolution sensing, respectively. Static and dynamic characteristics of the six-DOF positioner were investigated utilizing the FEA software ANSYS, and the lowest bandwidth frequency of 137.41Hz and a large working range of 403.7\(\mu\)m \(\times\) 398.5\(\mu\)m \(\times\) 390.94\(\mu\)m and 8864.4\(\mu\)rad \(\times\) 8297.8\(\mu\)rad \(\times\) 15278.2\(\mu\)rad were reported, respectively. In a feedback control strategy, an extensive experimental study of the six-DOF positioner was conducted to evaluate different aspects of the characteristics of the proposed system. Based on the captured results, the six-DOF positioner proved to have nanometer-scale resolution, high precision, high bandwidth frequency, and a very low hysteresis characteristic. Based on the findings, the proposed system is a suitable option for applications such as pick-and-place manipulation, carbon nanotube (CNT) harvesting, automated microscope focus, precision beam steering modules, tremor compensation in microsurgery and assembly of micro-machines, and collaborative manipulator systems.
## II Mechanical Design
Figs. 1(a-c) show the isometric, top, and side views of the six-DOF positioner, and its spatial dimensions with respect to the X, Y, and Z axes, which are 172.78mm, 160.20mm, and 24.27mm, respectively.
As shown in Figs. 1(a-c), six bridge mechanisms and six leaf parallelogram mechanisms are used, and they are connected to the stage with three arms. Each arm contains two universal joints: one at the very beginning (connected to the leaf parallelogram mechanism) and another at the end (connected to the stage). The arms are positioned at an equal angle of 30\({}^{\circ}\) with respect to the horizontal surface. The bridge mechanism acts as an amplifier for the input PEA, and the leaf parallelogram mechanism allows translational displacement in only one direction. Each universal joint has two DOF and allows two different rotations in two perpendicular planes. The universal joints allow the arm to bend at the center point of each joint and prevent the arm from being subjected to undesired motions such as torsion. Six PEAs can be placed in the middle of the six bridge mechanisms and apply the necessary input forces to the mechanism to generate the desired motions.
In the proposed positioner, the dimension of the motion space is \(\lambda\) = 6; the number of links is \(n\) = 8; the number of joints is \(j\) = 9. Using the Kutzbach-Grubler criterion, the DOF of the positioner is calculated as follows,
\[\begin{split} M&=\lambda(n-j-1)+\sum_{i=1}^{j}m_{i }=\\ & 6\times(8-9-1)+(6\times 2+3\times 2)=6\end{split} \tag{1}\]
## III Computational Analysis
The behavior of the proposed positioner was investigated using the simulation software ANSYS. ABS was considered as the manufacturing material, and its mechanical and physical properties are shown in Table 1. During the simulations, 175000 tetrahedral elements (Tet10) were placed in the model. Additionally, the meshing process was performed adaptively, meaning that sensitive areas such as hinges were filled with a smaller element size (i.e., a larger number of elements). The average aspect ratio (AR) of the elements was determined to be 2.24, which satisfies the finite element method (FEM) guideline [32] that ARs should not exceed 3 in order to obtain good-quality meshing and accurate results and to avoid element distortion.
### _Modal Analysis_
The dynamic behavior of the positioner was examined to verify the high bandwidth characteristic and the desired output
motions. The first six natural frequencies and their corresponding mode shapes of the system are shown in Figs. 2(a-f). It can be seen that the first natural frequency is 137.41Hz. The proposed polymer-made six-DOF parallel positioner possesses a high bandwidth frequency due to its monolithic characteristic. Thus, this result supports the repeatability and stability of the system.
It is evident that adding a mass to the stage of the positioner, such as a microgripper or a sensor, will cause the system's natural frequencies to drop. Thus, it is necessary that the designed system possesses a high first natural frequency to maintain its best performance while a tool or an end effector is mounted on the stage. The results reported in Fig. 2 were obtained with the positioner carrying a 100g apparatus. With the apparatus removed, the proposed positioner possesses a high first natural frequency of 633.25Hz.
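As a rough consistency check, not made in the paper itself, one can treat the first mode as a single spring-mass resonator and estimate the effective modal mass of the stage from the two reported frequencies:

```python
# Crude single-mode estimate (an assumption for illustration): for a
# spring-mass resonator, f = sqrt(k/m)/(2*pi), so that
# f_free / f_loaded = sqrt((m_eff + m_add) / m_eff).
f_free, f_loaded = 633.25, 137.41  # Hz, without / with the apparatus
m_add = 0.100                      # kg, mass of the mounted apparatus

ratio2 = (f_free / f_loaded) ** 2
m_eff = m_add / (ratio2 - 1.0)
print(f"effective modal mass ~ {m_eff * 1e3:.1f} g")  # ~4.9 g
```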
### _Kinematics and Workspace Analysis_
The Jacobian matrix provides the relationship between inputs and outputs of the proposed six-DOF positioning system. This relation is described by Eq. 2,
\[\mathbf{O}_{6\times 1}=\mathbf{J}_{6\times 6}\mathbf{I}_{6\times 1} \tag{2}\]
where \(\mathbf{O}\) and \(\mathbf{I}\) are the matrices of output and input displacements of the positioner, respectively. The input displacements are generated by six PEAs, and the output displacements are the measured stage's motions along the six axes using high-resolution displacement sensors.
The kinematics of the positioner was calculated by evaluating the stage's motions across 100 inputs, spanning the full input range, and performing a regression on the resulting outputs. The best-fit Jacobian is given in Eq. 3.
\[\mathbf{J}_{\mathbf{FEA}}=\begin{pmatrix}-0.65214&-0.936&-0.20136&0.20204&0.9 1721&0.65117\\ 0.65838&0.22332&-0.90423&-0.90848&0.2718&0.60197\\ 0.59421&0.59351&0.59273&-0.85932&0.5908&0.59297\\ 6.8706&-19.849&18.301&13.74&-19.949&6.7292\\ -18.912&-28.375&14.909&-14.936&3.057&1.883\\ 23.248&-23.131&23.0035&-23.009&23.098&-21.199\\ \end{pmatrix} \tag{3}\]
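The regression step behind Eq. 3 can be sketched as follows. The input-output samples below are synthetic placeholders (an assumption made for illustration); in the paper, the 100 pairs are the FEA-computed stage motions over the full PEA input range.

```python
import numpy as np

# Least-squares identification of the Jacobian in O = J * I (Eq. 2),
# from stacked input/output samples.
rng = np.random.default_rng(0)
J_true = rng.normal(size=(6, 6))            # stand-in for the mechanism
I = rng.uniform(0.0, 110.0, size=(100, 6))  # 100 inputs over 0-110 um
O = I @ J_true.T                            # corresponding outputs

# Solve I @ J.T = O in the least-squares sense.
J_fit = np.linalg.lstsq(I, O, rcond=None)[0].T
print(np.allclose(J_fit, J_true))  # True for noise-free samples
```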
It can be noted from Eq. 3 that the motions of the proposed six-DOF parallel positioner are coupled, as the Jacobian matrix is not diagonal. Using this Jacobian, the motion ranges of the positioner along the three translational and three rotational axes can be calculated. The system's workspace depends on the maximum input displacement that the positioner can endure before it deforms permanently. Therefore, a safety-factor analysis was conducted to obtain this limit. Fig. 3 shows the resultant minimum safety factor of the system. This result was obtained by applying six equal input displacements to the positioner. It can be noted from Fig. 3 that the minimum safety factor was 1.5 and occurred at input displacements of 110\(\upmu\)m, corresponding to a stress concentration of 20.688MPa (Von-Mises). This stress concentration occurred at one of the universal joints of the positioner (see Fig. 3), which agrees with expectation, as the universal joints are the most vulnerable part of the designed structure.
The translational and rotational workspaces of the positioner can be calculated using the Jacobian matrix and the information obtained from the safety-factor analysis. Thus, by mapping the input and output of the positioner using Eq. 2, the range of motions was found to be 403.7\(\upmu\)m \(\times\) 398.5\(\upmu\)m \(\times\) 390.94\(\upmu\)m (translational) and 8864.4\(\upmu\)rad \(\times\) 8297.8\(\upmu\)rad \(\times\) 15278.2\(\upmu\)rad (rotational). Figs. 4(a) and 4(b) illustrate the 3D/2D spaces covered by the workspace. In 3D space, the volumes occupied by the translational and rotational motions of the positioner's end-effector are about 2.0339e+07\(\upmu\)m\({}^{3}\) and 3.7015e+11\(\upmu\)rad\({}^{3}\), respectively. According to these results, the amplification ratios of the positioner along the X, Y, and Z directions are 3.67, 3.62, and 3.55, respectively.
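Assuming, for illustration only, that the six inputs vary independently over the 0-110\(\mu\)m stroke allowed by the safety-factor analysis, the reachable range along each output axis is the absolute row sum of \(\mathbf{J}_{\mathbf{FEA}}\) multiplied by 110\(\mu\)m. This simple bound reproduces the reported ranges to within roughly 10%; residual differences are expected, since the FEA workspace need not be exactly the image of an independent input box.

```python
import numpy as np

# Range per output axis under the independent-input assumption:
# range_k = sum_j |J_kj| * 110 um.
J = np.array([
    [-0.65214, -0.936,   -0.20136,  0.20204,  0.91721,  0.65117],
    [ 0.65838,  0.22332, -0.90423, -0.90848,  0.2718,   0.60197],
    [ 0.59421,  0.59351,  0.59273, -0.85932,  0.5908,   0.59297],
    [ 6.8706, -19.849,   18.301,   13.74,   -19.949,    6.7292],
    [-18.912, -28.375,   14.909,  -14.936,    3.057,    1.883  ],
    [23.248,  -23.131,   23.0035, -23.009,   23.098,  -21.199  ],
])
u_max = 110.0  # um, from the safety-factor analysis
ranges = np.abs(J).sum(axis=1) * u_max
print(ranges[:3])  # ~[392, 392, 421] um, cf. 403.7/398.5/390.94 um
print(ranges[3:])  # ~[9398, 9028, 15036] urad, cf. 8864.4/8297.8/15278.2
```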
Fig. 1: 3D solid model of the six-DOF positioner; (a) Isometric view (b) Top view (c) Side view
### _Input Stiffness and Output Compliance_
In order to calculate the input stiffness, a certain force \(F_{in}\) was applied to the input section of the bridge mechanism. Then, the corresponding input displacement was extracted from the FEA to obtain the input stiffness. In addition, by applying a certain force \(F_{out}\) and moment \(M_{out}\) to the stage, the output compliance can be evaluated by extracting the stage's displacement and rotation. Based on the simulation results, the input stiffness was found to be 3.0378\(\times\)10\({}^{6}\)N/m, and the diagonal linear and angular output compliances (m/N & rad/N.m) (Eq. 4, \(C_{ij}\)=\(C_{ji}\) and \(C_{ii}\) >0) were 8.1702e-06, 8.1760e-06, 10.1760e-06 and 3.0963e-02, 3.0831e-02, 1.8005e-02, respectively. Thus, to reach the full range of the six-DOF positioner, six PEAs able to generate an input displacement of 110\(\mu\)m and an input force of 334.2N are needed. Moreover, the linear relationship between the input force and input displacement, shown in Fig. 5, indicates that stress stiffening does not occur in the proposed positioner.
\[\mathbf{C}=\begin{pmatrix}C_{11}&\cdots&C_{16}\\ \vdots&\ddots&\vdots\\ C_{61}&\cdots&C_{66}\end{pmatrix},\qquad C_{ij}=C_{ji},\quad C_{ii}>0 \tag{4}\]
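A one-line consistency check of the actuator requirement quoted above, using force = input stiffness \(\times\) full input stroke:

```python
# Required PEA force for the full 110 um input stroke.
k_in = 3.0378e6  # N/m, FEA input stiffness
x_in = 110e-6    # m, maximum input displacement
print(f"required PEA force ~ {k_in * x_in:.1f} N")  # -> 334.2 N
```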
## IV Experimental Studies

Each of the six capacitive displacement sensors provided its measurement as an analog voltage, which was recorded at the control computer with a 16-bit analog-to-digital converter (ADC). To align the capacitive position sensors for accurate and precise measurement readings, three Elliot Scientific MDE263 XYZ Micropositioners and three THORLABS MS1 single-axis translation stages were used. In order to reduce external vibration disturbances, all experimental tests were performed on a Newport pneumatic vibration-isolated optical platform.
In order to improve the tracking performance and controllability of the positioner, a feedback controller was utilized. A Proportional-Integral-Derivative (PID) controller, Eq. 5, was employed to establish feedback on the input signals for precise and controlled positioning tasks. After tuning, the best PID constants were found to be 0.1, 300, and 0 for \(k_{p}\) (proportional constant), \(k_{i}\) (integral constant), and \(k_{d}\) (derivative constant), respectively.
\[u(t)=k_{p}e(t)+k_{i}\int_{0}^{t}e(\tau)d\tau+k_{d}\frac{de(t)}{dt} \tag{5}\]
where \(u\) is the control input to the piezo-amplifier modules. Fig. 7 illustrates the schematic diagram of the feedback control strategy implemented in real-time experiments.
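A minimal discrete-time form of Eq. 5 is sketched below. The class is illustrative only, the actual controller runs on the control computer described above, and the 10kHz sample rate is an assumption based on the control frequency reported later for the frequency-response test.

```python
# Discrete PID of Eq. 5 with the reported gains kp = 0.1, ki = 300, kd = 0.
class PID:
    def __init__(self, kp=0.1, ki=300.0, kd=0.0, dt=1e-4):  # dt: 10 kHz (assumed)
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        """Return the control input u(t) for one sample period."""
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```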
### _Resolution Characterization of the System_
The minimum resolution of the positioner was tested by utilizing stairstep signals with a period of \(2s\). As can be seen in Fig. 8, the minimum resolution was determined for each working axis and found to be 10.5nm along the X-axis, 10.5nm along the Y-axis, 15nm along the Z-axis, 1.8\(\mu\)rad along the \(\theta_{x}\)-axis, 1.3\(\mu\)rad along the \(\theta_{y}\)-axis, and 0.5\(\mu\)rad along the \(\theta_{z}\)-axis. It is worth mentioning that the resolution of the positioner was influenced by scattered radiation, geometrical effects, and electronic measurement noise. Therefore, the minimum resolution can be further improved in the future.
### _Motion Tracking and Hysteresis Reduction_
To verify the capability of the developed six-DOF positioner for dynamic motion tracking, a series of 1D, 2D, and 3D complex trajectories were designed and the system was programmed to follow those trajectories. The obtained results
Fig. 4: Six-DOF positioner workspace; (a) 3D-View, (b) Projection of 3D workspace on different working planes
Fig. 5: Relationship between the input force and input displacement of the positioner
Fig. 6: Experimental setup of the system (six-DOF positioner and equipment)
Fig. 7: The schematic diagram of the feedback PID control strategy
are shown in Figs. 9-11.
In Figs. 9(a,b) and 9(c,d), the positioning paths were two circular and two rose-path trajectories, respectively. Each circular or rose-path trajectory was designed so that the positioner performs a positioning task in the designated working axes. The corresponding trajectory tracking errors are also shown in Fig. 9.
In another experimental test, the effect of the manipulation range of the positioner on tracking error was examined. Thus, two series of six signals with a constant frequency of 0.5Hz and two different amplitudes (one twice the other) were sent to the system. According to the results captured in Fig. 10, doubling the manipulation range in all working axes increases the manipulation inaccuracy. Furthermore, some specific applications require scanning a trajectory at different frequencies. As a result, the effect of manipulation frequency becomes important and provides useful information regarding the dependency of manipulation accuracy on this parameter. Fig. 11 shows the results of experimental trajectory-tracking tests with input signals of identical amplitude and phase but different frequencies of 0.1Hz, 0.5Hz, and 1Hz. As expected, the results demonstrate the importance of scanning frequency for the accuracy of the manipulation task: increasing the speed of manipulation leads to larger tracking errors. Therefore, a trade-off is always to be expected.
The hysteresis effect inherent in piezoelectric ceramic actuators is an undesired phenomenon that degrades the performance of precise positioning systems. To investigate the capability of the implemented feedback control scheme for hysteresis reduction, a series of sinusoidal inputs was generated and applied to the proposed piezo-actuated positioner. The results are shown in Fig. 12. Considering the obtained results, it can be observed that the relationship between the input and output of the system in each translational and rotational direction is linear, which validates that the hysteresis phenomenon was reduced significantly.
### _Frequency Response Experiment_
A frequency analysis test was conducted to investigate the dynamic characteristics of the proposed six-DOF positioner. Fig. 13 shows the experimental results of the first six frequency-mode test. To capture these results, a series of sinusoidal sweep signals with linearly increasing frequency was applied to the PEAs as the inputs of the system. Data for this experiment were gathered at a control frequency of 10kHz. The results demonstrate reasonable agreement with the computationally predicted resonant modes. The inherent resonant frequencies are higher than the calculated ones due to the carrying mass. Moreover, using a stiffer material in the manufacturing process will result in a positioner with higher bandwidth frequencies for high-speed real-time applications.
## V Discussion
A compliant parallel monolithic large-range six-DOF precise positioning system was presented in this work. The system was manufactured as a single monolithic structure using a 3D-printing technique. Extensive computational and experimental evaluations of the performance of the positioner were presented to provide an in-depth understanding of its capabilities and characteristics. Due to the large range, fine resolution, and compactness of the positioner, it can be adapted for use in many applications, including pick-and-place manipulation, tremor compensation in microsurgery and micro-assembly, and collaborative manipulator systems. In order to improve the accuracy of positioning tasks performed by the proposed positioner, a robust adaptive [33] disturbance-observer-based controller can be established [34, 35, 36]. In addition, uncertainty analysis can be conducted to improve the accuracy of rotational measurements.
|
2306.12811 | Strong coupling of plasmonic bright and dark modes with two eigenmodes
of a photonic crystal cavity | Dark modes represent a class of forbidden transitions or transitions with
weak dipole moments between energy states. Due to their low transition
probability, it is difficult to realize their interaction with light, let alone
achieve the strong interaction of the modes with the photons in a cavity.
However, by mutual coupling with a bright mode, the strong interaction of dark
modes with photons is possible. This type of mediated interaction is widely
investigated in the metamaterials community and is known under the term
electromagnetically induced transparency (EIT). Here, we report strong coupling
between a plasmonic dark mode of an EIT-like metamaterial with the photons of a
1D photonic crystal cavity in the terahertz frequency range. The coupling
between the dark mode and the cavity photons is mediated by a plasmonic bright
mode, which is proven by the observation of a frequency splitting which depends
on the strength of the inductive interaction between the plasmon bright and
dark modes of the EIT-like metamaterial. In addition, since the plasmonic dark
mode strongly couples with the cavity dark mode, we observe four polariton
modes. The frequency splitting by interaction of the four modes (plasmonic
bright and dark mode and the two eigenmodes of the photonic cavity) can be
reproduced in the framework of a model of four coupled harmonic oscillators. | Fanqi Meng, Lei Cao, Aristeidis Karalis, Hantian Gu, Mark D. Thomson, Hartmut G. Roskos | 2023-06-22T11:14:42Z | http://arxiv.org/abs/2306.12811v1 | # Strong coupling of plasmonic bright and dark modes with two eigenmodes of a photonic crystal cavity
###### Abstract
Dark modes represent a class of forbidden transitions or transitions with weak dipole moments between energy states. Due to their low transition probability, it is difficult to realize their interaction with light, let alone achieve the strong interaction of the modes with the photons in a cavity. However, by mutual coupling with a bright mode, the strong interaction of dark modes with photons is possible. This type of mediated interaction is widely investigated in the metamaterials community and is known under the term _electromagnetically induced transparency_ (EIT). Here, we report strong coupling between a plasmonic dark mode of an EIT-like metamaterial with the photons of a 1D photonic crystal cavity in the terahertz frequency range. The coupling between the dark mode and the cavity photons is mediated by a plasmonic bright mode, which is proven by the observation of a frequency splitting which depends on the strength of the inductive interaction between the plasmon bright and dark modes of the EIT-like metamaterial. In addition, since the plasmonic dark mode strongly couples with the cavity dark mode, we observes four polariton modes. The frequency splitting by interaction of the four modes (plasmonic bright and dark mode and the two eigenmodes of the photonic cavity) can be reproduced in the framework of a model of four coupled harmonic oscillators.
## I Introduction
Dark states are either excitations of (quasi-)particles coupled with the ground state by electromagnetic transitions with weak or vanishing dipole moments, or electromagnetic modes of resonators which do not or only weakly couple to external radiation. As dark states are not easily accessible by light, it is difficult to achieve their strong interaction with the photons of a cavity [1]. In recent years, there has been significant interest in investigating ways to achieve strong coupling of dark states [2; 3; 4]. Such investigations are not only of interest for a fundamental understanding of light-matter and many-particle interactions, but also have the potential to provide alternative routes for designing new optoelectronic and memory devices [5].
EIT-like planar plasmonic metamaterials (MMs) are an intensively studied research topic [6; 7]. They are a classical analogue of EIT in three-level quantum systems and manifest as a sharp transparency band in a broad absorption spectrum, originating from the destructive interference between a bright and a dark plasmonic mode of the MM. Owing to their highly dispersive refractive index, EIT-like MMs are ideal candidates for realizing slow-light devices [8; 9] and ultrasensitive sensors [10; 11]. Strong coupling between plasmonic bright-bright and bright-dark modes was observed in various MM structures [7; 12; 13; 14; 15].
A rotationally symmetric 1D photonic crystal cavity exhibits two degenerate fundamental eigenmodes with perpendicular polarization directions of the electric field. Frequency-resonant, linearly polarized incident photons excite only one of these eigenmodes, which is called the _bright_ photonic eigenmode. The unexcited mode with perpendicular polarization can be regarded as the _dark_ photonic mode. In previous publications, we reported the strong coupling between bright plasmonic and bright photonic modes [16; 17; 18]. In this contribution, we investigate the strong coupling of the bright and dark plasmonic modes of an EIT-like MM with the bright and dark cavity modes of a terahertz photonic crystal cavity. The interactions among the plasmonic bright mode, the dark mode, and the two photonic eigenmodes can be simulated in a conventional four-particle coupling picture.
## II Experimental results and electromagnetic simulations
In the experiments, we employed a 1D photonic crystal (PC) cavity made from Si slabs separated by air gaps [19; 16]. The slabs, with lateral dimensions of 10\(\times\)10 mm\({}^{2}\), were cut from commercially available wafers of highly resistive silicon (specific resistance \(>\)20 k\(\Omega\)cm, two-side polished, thickness as purchased). For their positioning at a fixed distance, we employed metallic shim rings purchased from MISUMI Europa GmbH, see [16; 17]. The cavity consisted of five Si slabs. It was composed of a central 100-\(\mu\)m-thick Si slab as a 'defect' layer between two Bragg mirrors. The Bragg mirrors each consisted of two pairs of thin Si dielectric layers and air layers, with respective thicknesses of 50 \(\mu\)m and 96 \(\mu\)m, resulting in a fundamental cavity mode at 490 GHz as confirmed by terahertz transmission measurements. The EIT-like MM consisted of a 2D periodic array of pairs of square-shaped split-ring resonators (SRRs) which were fabricated by photolithography on the surface of the defect layer, as shown in Fig. 1a (top). The SRRs were made by evaporating Ti/Au with
the thicknesses of 10/200 nm. The geometrical parameters for the SRRs are given in the caption of the figure. A schematic view of the cavity, loaded with the EIT-like MM structure, is shown in Fig. 1a (bottom). A description of the experimental setup for the terahertz measurements can be found in Refs. [16; 17]. The electric radiation field was polarized in the x-direction (for the definition of directions, see the coordinate system in Fig. 1a (top)).
For comparative measurements, we used a MM with a single square-shaped SRR in the unit cell. It had the same geometrical period as the SRRs of the EIT-like MM unit cell, and the side with the split was oriented along the x-direction. The resonance frequency predicted by simulations using the CST electromagnetic solver (CST: _Computer Simulation Technology_ from Dassault Systemes Simulia SE) is 445 GHz. The measured resonance frequency of the bare cavity is 480 GHz, which is slightly lower than the simulated frequency due to fabrication tolerances. With the bright SRR MM in the PC cavity, we measured the transmission spectrum shown by the blue solid curve in Fig. 1b. One observes two transmittance resonances which represent the lower polariton (LP) and the upper polariton (UP) of the coupled system. When the dark SRR MM is positioned in the PC cavity, the measured transmission spectrum shows only one cavity mode, with a reduced Q-factor of 23. The measured cavity frequency decreases to 466 GHz due to the additional metal added to the cavity. Taking the perturbed cavity mode frequency (466 GHz) and the simulated resonant frequency of the SRR, we determined a Rabi splitting of 87 GHz, which corresponds to about 19% of the cavity resonance frequency and hence indicates mode interaction in the strong-coupling regime [16]. The measured transmittance peaks and the resonance frequencies obtained by CST simulations (dashed lines) are in good agreement with each other, but the measured spectral linewidths are broader than those of the simulations.
When instead the EIT-like MM is used in the cavity, one observes four polariton modes. This is shown in the lower panel of Fig. 1b (red solid curve). The four polariton modes are at 387 GHz (LP1), 423 GHz (LP2), 483 GHz (UP1) and 520 GHz (UP2), in good agreement with CST simulations (dashed line in Fig. 1b). The appearance of four polariton modes can be explained as follows.
The mutual interaction of the modes of the cavity and the EIT-like MM is schematically shown in Fig. 2. One can understand the interaction if one applies a sequential-excitation picture: For an incoming terahertz wave with an electric field polarized along the x-axis, the photons excite the bright cavity eigenmode, which couples directly with the plasmonic bright mode of the left SRR in Fig. 1a. The right SRR (exhibiting the plasmonic dark mode, frequency-degenerate with the bright mode of the left SRR) is then excited mainly due to magnetic near-field interaction - albeit also with an electric contribution [13] - with the left SRR, with a coupling strength dependent on the lateral distance between them. The right SRR becomes radiative, with its electric dipole moment oriented along y-direction contributing emission with an electric field in y-direction which couples to the dark cavity eigenmode. This interaction chain hence couples all four quasi-particles (plasmonic bright and dark MM modes, photonic bright and dark cavity modes) with each other, which produces four new polariton modes as well as cross-polarization conversion.
As stated above, the Rabi splitting between polariton modes LP1 (UP1) and LP2 (UP2) depends on the distance of the two SRRs. This is confirmed both experimentally and theoretically with the data shown in Fig. 3. The red solid curve in Fig. 3a displays the four polariton modes measured when the distance between the two SRRs is zero and the two SRRs are physically connected with each other. The LP1 mode lies in the shoulder of the PC's forbidden band, but is still recognizable. For comparison, the blue curve in Fig. 3a reproduces the transmittance curve of Fig. 1a measured with the EIT-like MM in the cavity, but here with the SRRs separated by 2 \(\mu\)m. In the case of zero distance, the mode splitting between LP1 and LP2, respectively UP1 and UP2, is larger than if the distance of the two SRRs is 2 \(\mu\)m. This observation is confirmed by the results of CST simulations shown by the dashed curves in Fig. 3a. When the two SRRs are not physically connected, the coupling between them is mediated by the light-induced magnetic field, and thus is _inductive_. When the two SRRs are physically connected, the coupling between them is both _inductive_ and _conductive_, which leads to a stronger interaction and larger Rabi splitting [20].
Additional insight is provided by CST simulations. Fig. 3b shows calculated amplitude transmission spectra for various distances between the two SRRs, ranging from 0 \(\mu\)m to 11 \(\mu\)m. In order to suppress interaction between them for all separation distances, the period of the MM was increased from 90 \(\mu\)m in the measurements to 110 \(\mu\)m in the simulations, and the size of the SRRs was reduced from 40 \(\mu\)m to 38 \(\mu\)m. The polariton splitting is then always dominated by interactions within each unit cell. The simulation confirms that the splitting between LP1 and LP2, and UP1 and UP2, increases as the distance between the SRRs is reduced. Interestingly, the frequency positions of LP2 and UP1 do not change much with distance. Except for zero distance, LP2 remains at 420 GHz and UP1 at 500 GHz. As the distance between the two SRRs increases, LP1 moves upwards in frequency towards LP2, and UP2 downwards towards UP1. One anticipates that ultimately the two LP modes would merge into one, and the same would occur with the two UP modes, hence the four polariton modes would converge into the two polariton modes which arise by the coupling of the bright SRR mode with the bright cavity mode.
## III Discussions
### Model of four coupled oscillators
The frequency dependence of the interaction among the four modes can be captured with a model of four coupled harmonic oscillators, described by four differential equations of motion with two coupling constants, \(V_{1}\) and \(V_{2}\). \(V_{1}\) represents the coupling constant between plasmon and photon, and \(V_{2}\) indicates the coupling constant between bright plasmon and
dark plasmon. The set of differential equations together with the derivation of the eigenfrequencies of the mixed modes is given in Appendix A, extending the analysis of [17].
Each mode is represented by its time-dependent generalized coordinate, \(x_{1}(t)\) to \(x_{4}(t)\). \(x_{1}\) and \(x_{4}\) describe the bright and dark cavity modes, respectively, assumed to be frequency-degenerate, with the common resonance frequency \(\omega_{\rm c}=2\pi f_{c}\). \(x_{2}\) and \(x_{3}\) are the coordinates of the bright and the dark plasmonic mode of the MM - also assumed to be frequency-degenerate at \(\omega_{p}=2\pi f_{p}\). The two bright modes couple directly with each other, which is expressed by coupling terms in the equations for \(x_{1}\) and \(x_{2}\) with the coupling strength \(V_{1}\). Correspondingly, the dark modes couple with each other. The interaction is via the y-oriented electric dipole moment of the dark plasmonic mode, which has the same dipole strength as the x-oriented dipole moment of the bright plasmonic mode. In the harmonic-oscillator model, this interaction finds its expression in the equations of motion for \(x_{3}\) and \(x_{4}\) by coupling terms, which also have the strength \(V_{1}\). The third and final interaction is between the two plasmonic modes. It is represented by coupling terms in the equations of \(x_{2}\) and \(x_{3}\), with the strength of this interaction given by \(V_{2}\). A higher value of \(V_{2}\) corresponds physically to a smaller distance between the two SRRs of the unit cell.
The full lines in Fig. 4 show the evolution of the resonance frequencies of the mixed modes (representing the polaritons), obtained from the set of differential equations, as a function of the coupling parameter \(V_{2}\). The curves were derived by a fit to the resonance frequencies extracted from the five CST-determined transmission spectra displayed in Fig. 3b. The coupling parameter \(V_{1}\) was used as a global parameter for all values of the distance s of the SRRs, with a resultant value of \(V_{1}=0.4950\times 10^{12}\) rad/s. The other parameters in the model were obtained as \(f_{p}=464.8\) GHz and \(f_{c}=474.2\) GHz. The mixed-mode branches 1 and 4 are found to exhibit strong gradients as a function of \(V_{2}\), while the branches 2 and 3 are less sensitive to the variation of \(V_{2}\). For \(V_{2}\to 0\), branch 1 merges with branch 2, as does branch 3 with branch 4.
We also performed independent fits for each value of the distance s between the SRRs, where only the coupling parameter \(V_{2}\) was varied. The black symbols in Fig. 4 represent the resonance frequencies as a function of \(V_{2}\). Notably, the fit with a fixed value of \(V_{1}\) predicts a change of the resonance frequencies of the mixed modes 2 and 3 when the value of s is changed, while one observes in Fig. 3b (and for the black circles and upwards-pointing triangles in Fig. 4) that the mixed-mode resonance frequencies remain nearly constant, except for \(s\to 0\)\(\mu\)m. One also notes that for \(s\to 0\)\(\mu\)m, a fit with a constant \(V_{1}\) overestimates the resonance frequency of mixed mode 1, and also slightly that of mixed mode 2. This is caused by the conductive coupling which arises in addition to the inductive coupling of the two SRRs and impacts also \(V_{1}\).
The plasmonic dark mode plays a key role in bringing the cavity dark mode into the coupling scenario. We now study how this mediating effect is influenced when the plasmonic dark mode is detuned from the resonance frequencies of the cavity modes and the plasmonic bright mode. We simulated the transmission of the EIT-SRR-loaded cavity with the CST Maxwell solver, changing the geometrical dimensions of the SRR located at the right side in the sketch of Fig. 1a (top). For the MM, we otherwise used the same geometrical parameter
Figure 1: (a) Top: Unit cell of the EIT-like MM; bottom: Schematic presentation of the 1D PC cavity loaded with the EIT-like MM. Geometrical parameters of the EIT-like MM: m = 90 \(\mu\)m, n = p = 40 \(\mu\)m, u = 5 \(\mu\)m, g = 5 \(\mu\)m, s = 2 \(\mu\)m. (b) Power transmittance spectra of the metamaterial-loaded cavity, both measurements (full lines) and results of CST simulations (dahed lines). Top panel: MM with a single SRR in the unit cell; bottom pannel: EIT-like MM.
Figure 2: Schematic illustration of the four-mode coupling. ”E” represents electric coupling and ”M” represents magnetic coupling.
values as for the plots in Fig. 3b, namely m = 110 \(\mu\)m, n = 38 \(\mu\)m, and s = 2 \(\mu\)m. Only the value of p was changed from 32 \(\mu\)m to 42 \(\mu\)m. The color graph in Fig. 5 presents the transmission spectra for x-polarized radiation as a function of the changing resonance frequency of that SRR. When the frequency is strongly detuned (both at the low-frequency limit and the high-frequency one), both the plasmonic dark mode and the cavity dark mode are no longer involved in the strong coupling, and only the bright modes of the cavity and the MM participate in the strong interaction. One thus expects only two polariton modes to remain, which lie around the frequencies marked as "LP" and "UP" (written in white lettering) in Fig. 5. In fact, for detuning to large frequencies, the UP2 and the LP2 mixed-mode branches die out, leaving oscillator strength only in UP1 and LP1. For detuning to low frequencies, the opposite occurs: oscillator strength survives in UP2 and LP2, while UP1 and LP1 die out. In between these extremes, one observes anti-crossing behavior of the UP1 and UP2 branches, and of the LP1 and LP2 branches; the respective mode splittings amount to about 8 GHz, are larger than the spectral linewidths of the polariton modes, and correspond to 6% and 5% of the respective LP and UP mode frequencies. This indicates that the interactions are in the strong-coupling regime.
We also calculated the mixed-mode frequencies with the equations of motion of the four coupled harmonic oscillators,
Figure 4: Resonance frequencies of the four mixed modes (representing the polaritons) as obtained by the model of four coupled harmonic oscillators. Black symbols: Resonance frequencies are extracted from Fig. 3b, and plotted versus the value of the coupling parameter \(V_{2}\) obtained by a fit. The full lines are the results of a fit with a fixed global coupling parameter \(V_{1}\).
Figure 5: Color plots of the simulated transmission spectra of an EIT-MM-loaded cavity as a function of the resonance frequency of the dark SRR. The inset displays the loaded cavity, with the EIT-like MM represented by a two-SRR unit cell. The white dashed lines represent the results calculated with the equations of motion of the four-coupled-oscillators model.
Figure 3: (a) Power transmission spectra of radiation polarized in x-direction, for a cavity loaded with an EIT-like MM where the SRRs are physically connected with each other (experiment: red line, CST simulation: green dashed line), respectively have a distance s of 2 \(\mu\)m (experiment: blue line, CST simulation: magenta dashed line). The 2-\(\mu\)m data are a reproduction of those shown already in Fig. 1b. (b) Amplitude transmission spectra of radiation polarized in x-direction: CST simulations for several distances between the two SRRs. Neighboring curves are shifted vertically by 0.1 for better comparison. In this set of simulations, m is taken as 110 \(\mu\)m, and n and p are assumed to be 38 \(\mu\)m.
using the values \(f_{c}=462\) GHz, \(f_{p}=436\) GHz for the resonance of the unmodified SRR, \(V_{1}=0.7024\times 10^{12}\) rad/s and \(V_{2}=0.2903\times 10^{12}\) rad/s. The results are plotted as white dashed curves in Fig. 5. They reproduce the resonances obtained with the full-wave simulations reasonably well. The deviations are probably due to the fact that we used fixed values for the coupling constants \(V_{1}\) and \(V_{2}\).
We end this section with two remarks and refer the reader for further information to the Appendix. First, we point out the equivalent-circuit model addressed in Appendix B. The model is based on lumped elements (capacitors, inductors and resistors) from which the various coupled resonators are built. The model can provide a qualitative and even quantitative analysis of the system of coupled resonators if the coupling mechanisms (capacitive, inductive or both) are clear [21]. In this regard, it is of a more microscopic nature than the phenomenological model of four coupled equations of motion.
Second, we point out that if both SRRs are bright (i.e., have parallel electric dipole moments), there will be three polariton modes instead of four, when coupled to the cavity (see Appendix C with Fig. 10). In this case, the MM modes only interact with the bright cavity mode and not with the dark one. The mixed-mode frequencies are well reproduced by the equations-of-motion model of three coupled oscillators [17].
### Coupling in a metallic grating-type Fabry-Perot cavity
A way to suppress the coupling to the cavity dark mode is to employ a cavity that does not support modes with a polarization orthogonal to that of the bright mode. Such a type of cavity is a Fabry-Perot resonator made with metallic gratings as reflectors. Ohmic losses strongly suppress standing waves with a polarization parallel to the metal stripes of the grating, but the losses are much lower for radiation polarized perpendicular to the stripes. (We note in passing that a fundamental-mode cavity made with reflectors consisting of unpatterned sheets of gold, or silver or any other good metallic conductor would have a poor Q-factor for all polarization directions. In a fundamental mode cavity, the radiation field penetrates into the metal where it experiences substantial losses even in the case of good conductors. For that reason, we employ cavities made from dielectric materials for studies like those of this publication.)
We investigated mode coupling of SRR MMs in a grating-type cavity by CST simulations. The gratings were assumed to be made from rectangular gold wires (conductivity of \(4.56\times 10^{7}\) S/m) with a width of 1.4 \(\mu\)m and a thickness of 0.2 \(\mu\)m, and separated from each other by 0.6 \(\mu\)m. The wires were assumed to be free-standing in air. The cavity consisted simply of two grating reflectors, with a distance of 180 \(\mu\)m between them. The red line in Fig. 6 displays the transmission spectrum of the empty cavity for the amplitude of radiation polarized perpendicular to the grating wires. The transmittance exhibits a peak at 784 GHz, the resonance frequency of the fundamental cavity mode.
Into the cavity, centered between the reflectors, we placed an EIT-like MM structure (see schematic drawings in Fig. 6). It consisted of two substrate-free metallic SRRs, each having a fundamental resonance at 754 GHz (see blue line in Fig. 6), split into two polariton modes at 701 GHz and 794 GHz (see green line in Fig. 6). The parameters of the MM in the simulations (using the nomenclature of Fig. 1a (bottom)) were as follows. Material: gold stripes with a thickness of 0.2 \(\mu\)m, u = 5 \(\mu\)m, g = 5 \(\mu\)m, s = 2 \(\mu\)m, n = p = 40 \(\mu\)m, m = 90 \(\mu\)m. With the EIT-like MM structure in the cavity, one observes three transmission peaks at 618 GHz, 751 GHz and 862 GHz, the middle one lying close to the resonance frequency of the individual SRRs and being considerably weaker than the transmission peaks above and below. The appearance of three resonances is consistent with the strong coupling among three modes (cavity bright mode, plasmonic bright mode and plasmonic dark mode). The suppressed cavity dark mode does not contribute noticeably in the interaction. This result supports the notion that it is indeed the cavity dark mode which is responsible for the appearance of the fourth resonance in the experiments and simulations with the EIT-like MM in the dielectric cavity discussed in the previous sections of this publication.
### Cross-polarization conversion and polarization tuning
The coupling of the bright and dark modes of the EIT-like MM can be employed for cross-polarization conversion. Linearly polarized incident radiation is converted to elliptically polarized radiation [13]. We briefly consider here how the orientation of the incoming electric field influences the polarization of the outgoing field. Fig. 7 displays the co-polarized and cross-polarized transmission spectra of the radiation passing
Figure 6: Amplitude transmission spectrum of a metallic-grating-type Fabry-Perot cavity loaded with an EIT-like MM (dark-grey line). Amplitude transmission spectrum of empty Fabry-Perot cavity (red line), of bare MM with single bright SRR in each unit cell (blue line), of bare MM with EIT SRRs in each units cell (green line). The sketches indicate the four structures of which the transmission spectra are displayed, which were calculated with CST for radiation polarized perpendicular to the metal wires of the grating.
through the EIT-like-MM-loaded cavity, as the polarization angle \(\phi\) of the incident electric field is varied from \(0^{\circ}\) to \(150^{\circ}\). Co-polarization (cross-polarization) means that the field orientation of the outgoing wave parallel (perpendicular) to the linear field orientation of the incoming wave is displayed. We do not analyze the phases of the two outgoing waves relative to the incoming one. The structural parameters of the EIT-like MM and the cavity are the same as those used for Fig. 1b, and an orientation of \(\phi=0^{\circ}\) corresponds to the situation shown there.
In Fig. 7, for most values of \(\phi\) both the co- and cross-polarized transmission spectra exhibit four resonance peaks, and their frequencies do not change with the variation of \(\phi\), only the amplitudes. The co-polarized transmission exhibits a \(180^{\circ}\) rotational symmetry, the cross-polarized transmission, however, a \(90^{\circ}\) symmetry. In co-polarized transmission, one observes at specific angles only three polariton modes. If we number the modes with rising frequency as \(\#1\) to \(\#4\), then within the range \(0^{\circ}\leq\phi<180^{\circ}\), mode \(\#2\) disappears at about \(30^{\circ}\), mode \(\#4\) at about \(60^{\circ}\), mode \(\#1\) at about \(100^{\circ}\), and mode \(\#3\) at about \(150^{\circ}\). A similar reduction of the number of observed modes occurs at specific angles in cross-polarized transmission. The details of these phenomenological findings are intriguing and invite future studies.
## IV Conclusion
In conclusion, we have studied experimentally and theoretically the transmission properties of a metamaterial, consisting of coupled split-ring resonators and exhibiting electromagnetically-induced-transparency-like behavior, in a rotationally symmetric photonic crystal cavity at sub-terahertz frequencies. We have observed the appearance of four transmission peaks. They have been identified as the signatures of polariton modes arising from strong mutual coupling of four modes, the bright and dark fundamental modes of the metamaterial with the two orthogonally polarized fundamental cavity modes. In a quasi-particle picture, the mutual coupling occurs between bright and dark plasmons of the metamaterial and the orthogonally polarized photons of the cavity. Incoming linearly polarized radiation is converted to elliptically polarized radiation. The coupling to both cavity modes can be reduced to coupling to only one (and the appearance of only three instead of four polariton modes), if either the rotational symmetry of the cavity is destroyed (e.g. by using a cavity made from metallic-grating reflectors) or if the metamaterial exhibits a total dipole moment which is parallel to the polarization of the incoming wave.
## Acknowledgements
This research work was funded by DFG projects RO 770/46-1 and RO 770/50-1 (the latter being part of the DFG-Schwerpunkt "Integrierte Terahertz-Systeme mit neuartiger Funktionalität (INTEREST)"). Lei Cao acknowledges support from the HUST Overseas Training Program for Outstanding Young Teachers.
## Author Declarations
### Conflict of interest
The authors have no conflicts to disclose.
### Author contributions
**Fanqi Meng:** Conceptualization (lead); Investigation (lead); Writing - original draft (equal); Formal analysis (equal). **Lei Cao:** Writing - original draft (equal); Formal analysis (equal); Validation (equal). **Aristeidis Karalis:** Validation (equal). **Hantian Gu:** Investigation (equal). **Mark D. Thomson:** Conceptualization (equal). **Hartmut G. Roskos:** Conceptualization (equal); Writing - review & editing (equal); Funding acquisition (lead); Supervision (lead).
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Appendix A Coupled differential equations of the four-oscillators model
We assume four harmonic oscillators, two having a resonance frequency \(\omega_{1}=\omega_{4}=\omega_{c}\) (cavity mode frequency) and the other two being resonant at \(\omega_{2}=\omega_{3}=\omega_{p}\) (MM plasmonic mode frequency). Oscillators 1 and 2, respectively 3 and 4, are coupled, with the coupling strength being determined by the parameter \(V_{1}\). Also, oscillators 2 and 3 are coupled, with a strength given by \(V_{2}\). For an explanation of this choice of coupling, see Sec. III.1. The differential equations of motion for the four generalized coordinates \(x_{1}(t)\) to \(x_{4}(t)\) of the oscillators read as follows:
\[\begin{split}&\ddot{x}_{1}+\omega_{c}^{2}x_{1}+V_{1}\dot{x}_{2}=0\,,\\ &\ddot{x}_{2}+\omega_{p}^{2}x_{2}-V_{1}\dot{x}_{1}-V_{2}\dot{x}_{3}=0\,,\\ &\ddot{x}_{3}+\omega_{p}^{2}x_{3}+V_{2}\dot{x}_{2}+V_{1}\dot{x}_{4}=0\,,\\ &\ddot{x}_{4}+\omega_{c}^{2}x_{4}-V_{1}\dot{x}_{3}=0\,.\end{split} \tag{A1}\]
If the time-dependent factor is taken as \(x_{i}(t)=x_{i}^{0}e^{-j\omega t}\), with \(i=1,2,3,4\), the frequency-domain expressions of the differential equations can be written in matrix form as
\[\begin{bmatrix}\omega^{2}-\omega_{c}^{2}&j\omega V_{1}&0&0\\ -j\omega V_{1}&\omega^{2}-\omega_{p}^{2}&-j\omega V_{2}&0\\ 0&j\omega V_{2}&\omega^{2}-\omega_{p}^{2}&j\omega V_{1}\\ 0&0&-j\omega V_{1}&\omega^{2}-\omega_{c}^{2}\end{bmatrix}\begin{bmatrix}x_{1}^{0}\\ x_{2}^{0}\\ x_{3}^{0}\\ x_{4}^{0}\end{bmatrix}=0\,. \tag{A2}\]
Letting the determinant be zero, we obtain the function for the eigenfrequencies in general polynomial form:
\[ax^{4}+bx^{3}+cx^{2}+dx+e=0\,, \tag{A3}\]
with \(x=\omega^{2}\). The coefficients are given explicitly by
\[\begin{cases}a=1\,,\\ b=-(2\omega_{c}^{2}+2\omega_{p}^{2}+2V_{1}^{2}+V_{2}^{2})\,,\\ c=2\omega_{c}^{2}\omega_{p}^{2}+(\omega_{c}^{2}+\omega_{p}^{2})(\omega_{c}^{2}+\omega_{p}^{2}+2V_{1}^{2})+2\omega_{c}^{2}V_{2}^{2}+V_{1}^{4}\,,\\ d=-[2\omega_{p}^{2}\omega_{c}^{2}(\omega_{p}^{2}+\omega_{c}^{2})+2\omega_{p}^{2}\omega_{c}^{2}V_{1}^{2}+\omega_{c}^{4}V_{2}^{2}]\,,\\ e=\omega_{c}^{4}\omega_{p}^{4}\,.\end{cases} \tag{A4}\]
Numerical calculation is used to solve the equation and obtain the four eigenfrequencies of the coupled harmonic oscillators.
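A possible implementation of this step is sketched below, using the parameter values fitted in Sec. III.1 (\(f_{p}=464.8\) GHz, \(f_{c}=474.2\) GHz, \(V_{1}=0.4950\times 10^{12}\) rad/s); the chosen \(V_{2}\) is a representative value within the fitted range, assumed here for illustration.

```python
import numpy as np

# Solve the quartic (A3) with coefficients (A4); the roots x = omega^2
# give the four mixed-mode (polariton) frequencies.
def mixed_mode_frequencies(f_c, f_p, V1, V2):
    wc2 = (2 * np.pi * f_c) ** 2
    wp2 = (2 * np.pi * f_p) ** 2
    b = -(2 * wc2 + 2 * wp2 + 2 * V1**2 + V2**2)
    c = (2 * wc2 * wp2 + (wc2 + wp2) * (wc2 + wp2 + 2 * V1**2)
         + 2 * wc2 * V2**2 + V1**4)
    d = -(2 * wp2 * wc2 * (wp2 + wc2) + 2 * wp2 * wc2 * V1**2
          + wc2**2 * V2**2)
    e = wc2**2 * wp2**2
    x = np.roots([1.0, b, c, d, e]).real  # four real, positive roots
    return np.sort(np.sqrt(x)) / (2 * np.pi)

f_GHz = mixed_mode_frequencies(474.2e9, 464.8e9, 0.4950e12, 0.3e12) / 1e9
print(f_GHz)  # four polariton frequencies in GHz, cf. Fig. 4
```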
## Appendix B Equivalent-circuit model
An alternative way to physically simulate the interaction of the photonic and plasmonic resonators is to represent them by LC oscillators of an electronic circuit whose wiring expresses the interaction between the oscillators [21]. If damping is to be included, one could employ LCR oscillators, but we only consider the undamped limit here. Fig. 8 shows an equivalent circuit which implements the model of Fig. 2 for the description of the four-mode coupling phenomenon in the PC cavity. Four LC sub-circuits are used to model the four resonators. The coupling between the cavity photonic modes and the plasmonic MM modes is assumed to be capacitive (mediated by the electric radiation field), while the coupling between the plasmonic bright and dark modes is considered inductive (mediated by the magnetic field of the MM).
Using the formulas for approximate inductance calculations of Refs. [22] and [23], we obtain for each SRR, with the dimensions as given in Fig. 1, an equivalent inductance of Lpb = Lpd = 0.14 nH. The equivalent inductance of the photonic cavity is chosen arbitrarily as Lcb = Lcd = 1 nH. The equivalent capacitances of the SRRs are determined as Cpb = Cpd = 0.91 fF to yield a resonance frequency of the SRRs of \(f_{p}=445\) GHz; similarly, the equivalent capacitances of the cavity are chosen to be Ccb = Ccd = 0.11 fF to yield a cavity resonance at \(f_{c}=480\) GHz. For the coupling capacitances, we obtain Cbb1 = Cbb2 = Cdd1 = Cdd2 = 0.09 fF, so that the resonance frequencies obtained from the circuit model match closely with those obtained from the CST simulation, while the dimensionless inductive coupling coefficient k between Lpb and Lpd can be varied from 0 to 1, depending on the lateral distance of the two SRRs. The losses in the system are neglected.
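As a quick sanity check, the quoted element values indeed reproduce the intended resonances through \(f=1/(2\pi\sqrt{LC})\):

```python
import math

def f_lc(L, C):
    """Resonance frequency of an undamped LC oscillator."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

print(f_lc(0.14e-9, 0.91e-15) / 1e9)  # SRR:    ~445 GHz
print(f_lc(1.0e-9, 0.11e-15) / 1e9)   # cavity: ~480 GHz
```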
Fig. 9a shows the calculated scattering parameter S\({}_{21}\) for the value k = 0.3 of the inductive coupling coefficient. One observes four distinct resonances corresponding to the four eigenmodes. If the coupling coefficient is varied between 0.25 and 0.35, one obtains the evolution of the four resonance frequencies shown in Fig. 9b. With the increase of k, corresponding to a decrease of the lateral distance between the two SRRs, the highest branch (mixed mode 4) goes up in frequency, while the lowest branch (mixed mode 1) moves down. The middle two branches remain almost unchanged. These results reproduce the general trends of Figs. 3b and 4, which suggests that the simple equivalent circuit can serve as a tool to model the four-mode coupling process. The predictive power of the model is, however, limited to semi-quantitative statements, because the exact values of the eigenfrequencies cannot be predicted due to the uncertainty in the values of many circuit parameters and the omission of additional capacitive coupling between the two SRRs [21]. The precise determination of element parameters in the circuit model is beyond the scope of this publication.
## Appendix C The case of two bright SRRs in the 1D PC cavity
If an MM with two SRRs, which have the same orientation in the unit cell, is placed into the PC cavity, and the MM is excited with a polarization along the gapped sides of the SRRs, then the four-mode coupling phenomenon disappears. This is shown in Fig. 10, which displays simulated transmittance spectra for the case of two bright SRRs per unit cell, as indicated in the inset of Fig. 10a. Both SRRs have their sides with the gap oriented in x-direction, which is also the
Figure 7: Amplitude transmission spectra of linearly polarized radiation impinging onto the EIT-like-MM-loaded cavity as a function of the polarization direction of the incident electric field (CST simulations). (a) Co-polarized transmitted field component, the spectra for different angles are displaced vertically by 0.2. (b) cross-polarized transmitted field. For clarity, the spectra for different angles are displaced vertically by 0.1.
direction of polarization of the incoming terahertz wave and the direction of the detected electric field. The red and blue curves in Fig. 10a display spectra for SRRs of different size (red curve), respectively the same size (blue curve). The separation distance of the SRRS is in both cases the same, with a value of s = 2 \(\mu\)m. The red curve was obtained for resonance frequencies of the two bright SRRs of 468 and 537 GHz. One observes only three polariton modes. If the resonance frequencies of the SRRs are degenerate (both at 468 GHz), then one even observes only two polariton modes, as shown by the blue curve.
For more insight into the coupling between the MM with two bright SRRs in the unit cell and the cavity, we show in Fig. 10b a color plot of transmittance spectra as a function of the resonance frequency of one of the SRRs. The resonance frequencies of the other one and of the cavity were kept fixed at 468 and 480 GHz, respectively. Also the distance between the two bright SRRs was kept fixed at 2 \(\mu\)m. The graph shows an anti-crossing behavior of two modes, an upper (UP) and a lower (LP) polariton mode, with an additional third polariton mode (MP) between them. Approaching the waist region of the anti-crossing, where the resonances of the two SRRs become degenerate, from either above or below the waist, the third mode looses oscillator strength and vanishes at the degeneracy point. A fourth polariton mode does not appear, which is evidence that the second cavity mode is not excited and hence does not take part in the coupling process. The interaction is limited to the two MM eigenmodes and a single cavity mode. We can fit the UP, middle polariton (MP) and LP with the coupled three harmonic oscillators model [17]. The dash lines derived from the coupled three harmonic oscillators model nicely fit the three polariton branches of the simulation. The open circles represent the varying resonance frequency of the bare SRR, derived from CST simulation. It is interesting to note that the MP diminishes as the resonances of SRRs superimpose with each other, which also represents a polariton dark mode [5].
|
2301.10079 | Reformulation Techniques for Automated Planning: A Systematic Review | Automated planning is a prominent area of Artificial Intelligence, and an
important component for intelligent autonomous agents. A cornerstone of
domain-independent planning is the separation between planning logic, i.e. the
automated reasoning side, and the knowledge model, that encodes a formal
representation of domain knowledge needed to reason upon a given problem to
synthesise a solution plan. Such a separation enables the use of reformulation
techniques, which transform how a model is represented in order to improve the
efficiency of plan generation. Over the past decades, significant research
effort has been devoted to the design of reformulation techniques. In this
paper, we present a systematic review of the large body of work on
reformulation techniques for classical planning, aiming to provide a holistic
view of the field and to foster future research in the area. As a tangible
outcome, we provide a qualitative comparison of the existing classes of
techniques, that can help researchers gain an overview of their strengths and
weaknesses. | Diaeddin Alarnaouti, George Baryannis, Mauro Vallati | 2023-01-24T15:33:37Z | http://arxiv.org/abs/2301.10079v2 | # Reformulation Techniques for Automated Planning: A Systematic Review
###### Abstract
Automated planning is a prominent area of Artificial Intelligence, and an important component for intelligent autonomous agents. A cornerstone of domain-independent planning is the separation between planning logic, i.e. the automated reasoning side, and the knowledge model, that encodes a formal representation of domain knowledge needed to reason upon a given problem to synthesise a solution plan. Such a separation enables the use of reformulation techniques, which transform how a model is represented in order to improve the efficiency of plan generation. Over the past decades, significant research effort has been devoted to the design of reformulation techniques. In this paper, we present a systematic review of the large body of work on reformulation techniques for classical planning, aiming to provide a holistic view of the field and to foster future research in the area. As a tangible outcome, we provide a qualitative comparison of the existing classes of techniques, that can help researchers gain an overview of their strengths and weaknesses.
## 1 Introduction
Automated planning is a research discipline that addresses the problem of generating a totally- or partially-ordered sequence of actions that transform the environment from some initial state to a desired goal state. Within this discipline, domain-independent planning refers to those approaches that keep the knowledge model, the domain knowledge related to the problem at hand, separate from planning logic, that enables automated reasoning to generate plans. The development of domain-independent planners within the AI Planning community facilitates the use of this "off-the-shelf" technology for a wide range of applications, including UAV manoeuvring (Ramirez et al., 2018), space exploration (Ai-Chang et al., 2004), and train dispatching (Cardellini et al., 2021). This is despite the complexity issues inherent in plan generation, which are exacerbated by the separation of planner logic from domain knowledge. On the other hand, this separation has the advantage that planning engines can be interchanged in a modular way, provided that they accept the same language for describing planning problems and deliver the same type of plans.
This modular approach has fostered the development of planning engines, as well as the design and exploitation of reformulation techniques. These refer to the ability to automatically re-formulate, re-represent or tune the domain model and/or a problem description, while keeping to the same input language, in order to increase the efficiency of a planning engine and expand the scope of problems solved (Riddle et al., 2011). The aim is to make these techniques independent of domain and planner to some degree (that is, applicable to a range of domains and planning engine technology), and use them to form a wrapper around a planner, improving its overall performance for the particular domain to which it is applied.
In the past few decades, a number of reformulation techniques have been introduced. This is particularly true for classical planning which, despite its intrinsic
simplicity, provides an ideal ground to study and investigate planning techniques that can then be extended beyond a specific framework. For this reason, classical planning has been studied for several decades, and is still a focal point of research within the automated planning community, as evidenced by the papers devoted to it in the flagship conference, the International Conference on Automated Planning and Scheduling (ICAPS)1 and by the number of papers focusing on knowledge engineering for classical planning that are accepted at the Workshop on Knowledge Engineering for Planning and Scheduling (KEPS).
Footnote 1: The interested reader is referred to the ICAPS website: [https://www.icaps-conference.org/](https://www.icaps-conference.org/)
In this paper, we systematically review the state of the art of reformulation techniques for classical planning, with the aim of providing a holistic view of the field and answering the following research questions:
* What reformulation techniques have been proposed in literature?
* How can they be applied on an indicative planning problem?
* What are the particular strengths and weaknesses of each individual reformulation technique?
The main purpose of this review is to provide a qualitative comparison of the existing reformulation approaches, with the aim of helping experts and practitioners in identifying the most promising technique to be used, and to highlight research gaps that can foster further research in the area. Note that empirical analysis of the reviewed reformulation techniques is beyond the scope of this work due to the high variability, among different papers, of considered benchmarks, planning engines, and software/hardware infrastructure (Bochese et al., 2018).
The remainder of this paper is organised as follows. Section 2 provides the necessary background on planning and reformulation, and introduces an example domain model to be used as a running example throughout the paper. Then, Section 3
describes the methodology used for this literature review. Section 4 is the main part of the review, presenting the existing reformulation techniques, and showing how they can be implemented on the running example. We then compare the reviewed techniques in Section 5, while in Section 6 we briefly discuss some reformulation techniques that target non-classical planning problems. Finally, we provide conclusions and suggested directions for future research in the area of reformulation techniques for automated planning.
## 2 Background
This section is devoted to providing the required background in terms of automated classical planning, the Gripper domain model used as a running example, and the notion of reformulation.
### Classical Planning
Classical planning is concerned with finding a (partially or totally ordered) sequence of actions transforming the static, deterministic and fully observable environment from the given initial state to a desired goal state [1].
In the classical representation, a _planning task_ consists of a _planning domain model_ and a _planning problem_, where the planning domain model describes the environment and defines planning operators while the planning problem defines concrete objects, an initial state and a set of goals. The environment is described by _predicates_ that are specified via a unique identifier and terms (variable symbols or constants).
Formally, a _planning task_ is a pair \(\Pi=(Dom_{\Pi},Prob_{\Pi})\) where a _planning domain model_\(Dom_{\Pi}=(P_{\Pi},Ops_{\Pi})\) is a pair consisting of a finite set of predicates \(P_{\Pi}\) and planning operators \(Ops_{\Pi}\), and a _planning problem_\(Prob_{\Pi}=(Obj_{\Pi},I_{\Pi},G_{\Pi})\) is a triple consisting of a finite set of objects \(Obj_{\Pi}\), initial state \(I_{\Pi}\) and goal \(G_{\Pi}\).
Let \(ats_{\Pi}\) be the set of all _atoms_ that are formed from the predicates \(P_{\Pi}\) by applying all possible substitution mappings from the predicate parameters (variable symbols) to the objects from \(Obj_{\Pi}\). In other words, an atom is an _instance_ of a predicate (in this article, when we use the term instance, we mean an instance that is fully _ground_). A _state_ is a subset of \(ats_{\Pi}\), and the _initial state_\(I_{\Pi}\) is a distinguished state. The _goal_\(G_{\Pi}\) is a non-empty subset of \(ats_{\Pi}\), and a _goal state_ is any state that contains the goal \(G_{\Pi}\). Note that the semantics of _state_ reflect the full observability of the environment; that is, for a state \(s\), atoms present in \(s\) are assumed to be true in \(s\), while atoms not present in \(s\) are assumed to be false in \(s\).
_Planning operators_ are "modifiers" of the environment. They consist of _preconditions_, i.e., what must hold prior to an operator's application, and _effects_, i.e., what is changed after its application. We distinguish between _negative effects_, i.e., what becomes false, and _positive effects_, i.e., what becomes true after an operator's application. _Actions_ are instances of planning operators, i.e., an operator's parameters, as well as corresponding variable symbols in its preconditions and effects, are substituted by objects (constants). Planning operators capture general types of activities that can be performed. While predicates can be instantiated to atoms to capture given relations between concrete objects, planning operators can be instantiated to actions to capture given activities between concrete objects.
A _planning operator_\(o=(\textit{name}(o),\textit{pre}(o),\textit{eff}(o))\) is specified such that \(\textit{name}(o)=\textit{op\_name}(x_{1},\ldots,x_{k})\), where _op_name_ is a unique identifier and \(x_{1},\ldots,x_{k}\) are all the variable symbols (parameters) appearing in the operator, \(\textit{pre}(o)\) is a set of predicates representing its precondition, and \(\textit{eff}(o)\) represents its effects, divided into \(\textit{eff}^{-}(o)\) and \(\textit{eff}^{+}(o)\) (i.e., \(\textit{eff}(o)=\textit{eff}^{-}(o)\cup\textit{eff}^{+}(o)\)) that are sets of predicates representing the operator's negative and positive effects, respectively. _Actions_ are instances of planning operators that are formed by substituting objects, which are defined in a planning problem, for operators' parameters as well as for the corresponding variable symbols in operators' preconditions and effects. An action
\(a=(\mathit{pre}(a),\mathit{eff}^{-}(a)\cup\mathit{eff}^{+}(a))\) is _applicable_ in a state \(s\) if and only if \(\mathit{pre}(a)\subseteq s\). Application of \(a\) in \(s\), if possible, results in a state \((s\setminus\mathit{eff}^{-}(a))\cup\mathit{eff}^{+}(a)\).
A solution of a planning task is a sequence of actions transforming the environment from the given initial state to a goal state. A _plan_ is a sequence of actions. A plan is a _solution_ of a planning task \(\Pi\) (a _solution plan_ of \(\Pi\), in other words) if and only if consecutive application of the actions from the plan, starting in the initial state of \(\Pi\), results in a goal state of \(\Pi\).
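To make these semantics concrete, the following is a minimal Python sketch (ours, not taken from the reviewed papers; the Gripper atoms are illustrative) of ground action applicability, application, and plan validation:

# Minimal sketch of ground classical-planning semantics: states are sets
# of atoms; an action is a (pre, eff_neg, eff_pos) triple of atom sets.
def applicable(state, action):
    pre, _, _ = action
    return pre <= state                       # pre(a) subset of s

def apply_action(state, action):
    pre, eff_neg, eff_pos = action
    assert pre <= state, "action not applicable"
    return (state - eff_neg) | eff_pos        # (s \ eff-(a)) union eff+(a)

def is_solution_plan(init, goal, plan):
    state = set(init)
    for action in plan:
        if not applicable(state, action):
            return False
        state = apply_action(state, action)
    return goal <= state                      # a goal state contains the goal

# Illustrative Gripper fragment: pick up ball1 in rooma with gripper left.
pick = ({"at ball1 rooma", "at-robby rooma", "free left"},
        {"at ball1 rooma", "free left"},
        {"carry ball1 left"})
s0 = {"at ball1 rooma", "at-robby rooma", "free left"}
print(is_solution_plan(s0, {"carry ball1 left"}, [pick]))   # True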
The standardised language for describing classical planning tasks is PDDL [10], that was introduced in 1998 by the organisers of the first International Planning Competition,2 building on top of STRIPS [11] and the Action Description Language (ADL) [12].
Footnote 2: [https://www.icaps-conference.org/competitions/](https://www.icaps-conference.org/competitions/)
### The Gripper Domain
As a running example in this paper, we consider the well-known Gripper domain model, initially introduced in the first International Planning Competition in 1998 by Jana Koehler [10]. This domain was selected because of its simplicity, and due to its suitability for being reformulated. In a nutshell, the Gripper domain consists of a robot that has two grippers, and is tasked to move a number of balls between two rooms. The domain model includes three operators:
**Move:**: to move the robot between rooms.
**Pick:**: to use a gripper to pick up a ball.
**Drop:**: to drop a ball that the robot is holding in one of its grippers.
Figure 1 shows the PDDL code of the domain model, including the relevant predicates and the three mentioned operators. Figure 2 shows an example Gripper planning problem, where the robot has 2 grippers, and has to move 4 balls between the considered rooms. The initial state defines the initial position of balls and
robot, and the initial status of the grippers. The goal section specifies the desired goal position of the 4 balls.

Figure 2: An example Gripper planning problem that consists of 4 balls to be carried from rooma to roomb.
### Reformulation in Domain-Independent planning
Taking a general perspective, reformulation means a change to the way in which one thinks about a problem, and it has been demonstrated to be a common practice that people employ to tackle challenging problems (Riddle et al., 2013). A theoretical framework for describing reformulation schemes in automated planning has been presented by Chrpa et al. (2012). Early research in the area of reformulation in AI started in the 1960s (Amarel, 1968), and reformulation has rapidly become a major field of research. In the field of automated planning
Figure 1: The Gripper PDDL domain model.
and, more generally, of state-space search, reformulation is intended as a change of representation. Different representations of the same problem can result in different search spaces, and the use of reformulation techniques can make a problem more amenable to a considered planning engine by providing a search space that is easier to explore to find a goal state.
Focusing on automated planning, the domain-independent paradigm decouples reasoning from knowledge representation. This supports the use of reformulation techniques which can re-formulate, re-represent or tune the domain model and/or problem description, while keeping to the same input language, in order to increase the efficiency of a planning engine and expand the scope of problems solved. The idea is that these techniques are independent of application domain and planning engine (that is, applicable to a range of domains and planning engine technologies), and can be used to form a wrapper around a planner, improving its overall performance for the domain to which it is applied.
Figure 3 gives an overview of the use of a reformulation wrapper to modify the representation of a planning task in a domain- and planner-independent way, allowing the same technique to be reused in any domain and with any engine that supports the considered formal language. In this review, we focus on classical planning reformulation approaches that aim at changing the representation of a planning task within the same formal language, with the goal of making the task more amenable to a given (class of) engines.

Figure 3: An overview of the use of a domain- and planner-independent reformulation technique.
## 3 Review Methodology
This section is devoted to presenting the methodology used for performing the literature review. To ensure deeper understanding of the reformulation approaches that are applicable in domain-independent classical planning, we applied a variation of the systematic literature review methodology as proposed by Kitchenham and Charters (2017) for software engineering and previously applied for reviews of AI-related literature (Baryannis et al., 2019).
### Search strategy
The collection of literature concerning reformulation techniques in classical planning was performed through automated search using Google Scholar, Scopus and the University of Huddersfield library search engine (Summon). Two levels of keyword terms were utilised. On the first level, we used a disjunction (OR) of the following three keyword phrases: "classical planning reformulation", "PDDL reformulation", and "domain-independent reformulation". On the basis of the techniques identified using the aforementioned keyword phrases, we provided a second level of keywords to obtain all the relevant works for each of them. This second-level search used the following keywords, linked to one another by OR: "macro-operators", "macro-actions", "entanglements", "actions elimination", "bagged representation", "action schema splitting", and "model configuration".
All of the previously mentioned search engines were used, as results vary between them. The covered time period spans from 1980 to January 2022. Notably, thanks to the introduction of the standardised PDDL language and the first edition of the International Planning Competition in 1998, most of the relevant work has been published after 2000.
To maximise coverage and completeness of the conducted search, two ancillary procedures were included: (1) checking reference lists of select primary studies (often referred to as backwards snowballing); and (2) identifying existing literature reviews on automated planning that may include reformulation (more details in Section 3.3).
### Search scope
A series of inclusion and exclusion criteria frames the scope of this review, to ensure quality and relevance of the selected works. First, studies must be peer-reviewed and written in English. Second, they should contain a formal description of at least one reformulation technique for automated planning models, accompanied by an empirical analysis of the usefulness of the proposed technique. Third, the proposed reformulation technique must be domain-independent and must focus on classical planning. Fourth, the reformulation must accept planning models as input, and provide as output models that can be processed by domain-independent classical planning engines. It should be noted that the last point excludes some well-known approaches such as DISCOPLAN [10] and TIM [22], which do not allow the extracted knowledge to be included in planning models. While the main focus of the review is on techniques for reformulating classical PDDL lifted models, here we also include works proposed before the
introduction of the language, and that propose techniques for reformulating STRIPS (Fikes and Nilsson, 1971) models; for the purposes of this review these can be considered to be similar in nature to PDDL ones.
A search based on the keywords and the engines described in Section 3.1 yields several thousand results. By carefully and thoroughly applying all the aforementioned criteria and ancillary search strategies, and by also considering the abstracts of the identified studies, 54 studies remain and are analysed in the rest of this paper. For the sake of readability, the selected studies are divided into different classes, presented in Section 4, according to the reformulation approach that they present.
### Related surveys
The main motivation behind this systematic review is the limited amount of works covering the field of domain-independent reformulation for automated planning. The most relevant published work is a survey of machine learning methods for automated planning (Jimenez et al., 2012) where a section is dedicated to macro-actions. This survey covers only 4 studies that are also covered in this paper, published between 1977 and 2007. The work of Long et al. (2002) provides a definition of reformulation for automated planning, and briefly discusses some ways in which it has been applied in the field, though it also includes cases not within the scope of this review. To the best of our knowledge, this review is the first attempt at systematically analysing reformulation approaches for automated planning.
## 4 Reformulation Techniques for Classical Planning
In this section we present the reviewed studies, classified according to the implemented reformulation approach. For each approach, we also provide an example of applying the reformulation approach to the Gripper domain.
### Entanglements
The entanglements approach was first introduced in [Chrpa and Bartak, 2009], and aims at removing useless actions from a ground planning task to reduce its branching factor. Such useless actions are identified by analysing the relationship between predicates of the problem and operators. In a nutshell, the idea is to change the representation of a planning task so that only potentially useful actions can be grounded and considered during search.
In literature, two main classes of entanglements have been proposed: outer and inner. Outer entanglements [Chrpa and McCluskey, 2012, Chrpa et al., 2018] identify useless actions on the basis of their relationships with predicates from the initial state or the goal state. Inner entanglements [Chrpa et al., 2019] focus instead on the relation between pairs of actions. There are two types of inner entanglements: _entanglements by preceding_, which defines the case where a certain predicate is required by an operator as a precondition, and _entanglements by succeeding_, which denotes the case where a particular operator makes a predicate available as one of its effects.
According to the results presented in [Chrpa et al., 2012], finding entanglements exactly is as hard as solving the planning task itself. For this reason, most of the work in the literature focuses on approximate approaches for identifying entanglement relations. To exemplify how entanglements can reformulate a given planning task, here we show how the Gripper domain can be modified by entanglements by init for the operator pick. The basis of the entanglement in this case, for the particular encoding, is that it is useful to perform a pick action only on balls that are in the initial room. On any other occasion, the pick action should not be considered. The relationship is therefore captured by the (at ?b ?r) predicate, which indicates the position of a ball and is a precondition of the considered operator. To implement the entanglement, a new predicate \(p^{\prime}\) (at-ent ?b ?r) should be created and added to the domain model, with the same arguments as (at ?b ?r). The pick operator should be modified by adding \(p^{\prime}\) as a precondition. In the initial state description of the problem, an additional \(p^{\prime}\) predicate should be created for each existing (at ?b ?r). This is similar in nature to the auxiliary predicate added to preconditions to account for unforeseen circumstances to address the qualification problem in service specifications (Baryannis et al., 2017; Baryannis and Plexousakis, 2013). An excerpt of the modified domain and problem models is presented in Figure 4.

Figure 4: An excerpt of the modified Gripper planning task when the pick operator is entangled by init on the basis of the at predicate.
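The transformation itself is mechanical. The following Python sketch (our simplified illustration using string atoms, not an implementation from the cited works) captures the essence of entangling an operator by init:

# Sketch of the entanglement-by-init transformation described above: add
# an at-ent copy of each initial-state "at" atom, and require at-ent as
# an extra precondition of pick, so that only balls starting in a given
# room can ever be picked up there.
def entangle_by_init(operator, init_atoms, predicate="at", ent="at-ent"):
    # operator: {"name": str, "pre": set of atom strings like "at ?b ?r"}
    new_op = dict(operator)
    new_op["pre"] = set(operator["pre"]) | {
        a.replace(predicate, ent, 1)
        for a in operator["pre"] if a.startswith(predicate + " ")
    }
    new_init = set(init_atoms) | {
        a.replace(predicate, ent, 1)
        for a in init_atoms if a.startswith(predicate + " ")
    }
    return new_op, new_init

pick = {"name": "pick", "pre": {"at ?b ?r", "at-robby ?r", "free ?g"}}
init = {"at ball1 rooma", "at ball2 rooma", "at-robby rooma"}
new_pick, new_init = entangle_by_init(pick, init)
print(sorted(new_pick["pre"]))   # now includes "at-ent ?b ?r"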
### Macro-Operators
Macro-operators (macros) represent a well-known and well-studied technique for enhancing the performance of planning engines. In a nutshell, macros encapsulate sequences of (primitive) planning operators. Macros are encoded as ordinary planning operators and, hence, they can be added into planning domain models. Macros, informally speaking, provide short-cuts in the state space and, consequently, planning engines can generate plans in a smaller number of steps. This comes at the cost of an increased branching factor, since macros often have many more instances than primitive operators and thus their use might introduce additional overheads as well as larger memory requirements.
The notion of macros can be traced back to 1970s and 1980s. REFLECT [Dawson and Siklossy, 1977] builds macros from pairs of primitive operators that can be applied in sequence and share at least one argument. Macro Problem Solver [Korf, 1985] is capable of learning macros for achieving particular non-serialisable subgoals (e.g. in Rubik's cube). MORRIS [Minton, 1985] learns macro-operators from parts of plans appearing frequently (S-macros) or being potentially useful despite having low priority (T-macros). FM [McCluskey, 1987] learns sequences of operators frequently used together, and combines them in potentially long sequences called chunks. MACLEARN [Iba, 1985, 1989] is a heuristic approach for learning re-usable macros for solving puzzle problems; the system includes three subcomponents that are in charge of proposing promising macros and testing their usefulness for solving problems.
The first International Planning Competition, held in 1998, introduced PDDL, which became the "de facto" standard language for planning models. The introduction of a standard language supported the design and testing of approaches for the generation and identification of macros, which started to thrive. MacroFF [Botea et al., 2005], based on the well-known FF planning engine [Hoffmann and Nebel, 2001], generates macros according to a number of pre-defined rules (e.g., the "locality rule") that apply to adjacent actions in training plans. MacroFF is capable of generating planner-independent macros, that can be added to domain models as standard operators, or planner-specific macros for FF, that can be provided as additional input to the planning engine. DHG [Armano et al., 2004] is able to learn macro-operators from static domain analysis by exploring a graph of dependencies between operators. WIZARD [Newton and Levine, 2007] is a framework that exploits genetic programming to create macros; starting from the primitive operators of
the domain model, WIZARD leverages genetic algorithms to combine them into useful macros for a given planning engine. Chrpa (2010) proposes an approach for identifying suitable macros by looking at action dependencies in generated plans. Dulac et al. (2013) propose to exploit an n-gram algorithm to analyse training plans and automatically learn macros.
DBMP/S (Hofmann et al., 2017) applies MapReduce for learning macros from large training plan databases. More recently, the same authors proposed an approach for generating macros for ADL domain models, which include PDDL features that are rarely supported by more traditional methods (Hofmann et al., 2020). CAP (Asai and Fukunaga, 2015) exploits component abstraction (introduced by MacroFF), which allows similar objects to be clustered together, for generating sub-goal specific macros. In other words, CAP divides complex planning problems into independent sub-problems by abstracting the components of the original problem. Then it finds sub-plans for each sub-problem, and connects the actions of every sub-plan into a single macro operator. BloMa (Chrpa and Siddiqui, 2015) leverages block deordering, which rearranges plans into "blocks" that can no longer be deordered (Siddiqui and Haslum, 2012), for generating longer macros. In particular, BloMa generates a large pool of macros from "macroblocks", which are derived from "blocks" by applying a set of rules. Chrpa and Vallati (2019) introduce the idea of critical section macros, inspired by critical sections in parallel computing; such macros aim at capturing a whole activity that deals with one or more limited resources. Finally, Castellanos-Paez et al. (2021, 2021) introduce ERA, an approach for extracting macros from plans that is based on pattern mining; an important feature of ERA is its ability to identify macros even if the included operators are not always adjacent in the considered plans.
A different line of work looks into exploiting the notion of entanglements to identify promising macros. Chrpa (2010) focuses on combining macro-operators and entanglements in order to get the benefits of both, as macros can reduce the size of the search space whereas entanglements are capable of reducing the branching factor that may arise from the generated instances of macros. [Chrpa et al., 2013] propose an automated approach that combines two primitive operators linked by inner entanglement relationships, and leverages such relationships to eliminate one or both of the primitive operators from the domain model. MUM [Chrpa et al., 2014] is a learning system that exploits outer entanglements as heuristics in the process of generating macros. Macros generated by MUM have a limited number of instances; specifically, the number of macro instances has to be in the same order of magnitude as the number of primitive operator instances. OMA [Chrpa et al., 2015b] is capable of generating macros online, i.e. without the need for offline training, by considering entanglement relations between operators of the domain model.
Notably, there is also a line of work that considers the problem of identifying the best (set of) macros to be used, given an initial pool of candidates. Alhossaini and Beck [2013] select problem-specific macros from a given pool of macros (hand-coded or generated by another technique) using a specifically trained predictor. ASAP [Vallati et al., 2013] uses a set of provided training plans to identify the best combination of planning engine and macro set (also considering entanglements) to be used on a given domain. PbP [Gerevini et al., 2014] uses statistical tests to identify the most promising portfolio of planning engines and macro actions to be used for solving challenging planning instances. Finally, MeVo [Vallati et al., 2020], given a large pool of macros, can evolve the best set of macros over time, to be used by a planning engine for solving a continuous stream of problems from a considered domain. This feature allows MeVo to overcome the issue of having training instances that are not representative of the testing ones.
Technically speaking, the way in which a macro-operator is generated by assembling two operators is straightforward, and quite similar to the way composition works in the case of services (Baryannis and Plexousakis, 2014). Considering two
operators \(a_{i}\) and \(a_{j}\), the resulting macro encapsulating their execution in sequence can be generated as follows:
\[\mathit{pre}\left(a_{i,j}\right)=\mathit{pre}\left(a_{i}\right)\cup\left(\mathit{pre}\left(a_{j}\right)\backslash\mathit{eff}^{+}\left(a_{i}\right)\right)\]

\[\mathit{eff}^{-}\left(a_{i,j}\right)=\left(\mathit{eff}^{-}\left(a_{i}\right)\cup\mathit{eff}^{-}\left(a_{j}\right)\right)\backslash\mathit{eff}^{+}\left(a_{j}\right)\]

\[\mathit{eff}^{+}\left(a_{i,j}\right)=\left(\mathit{eff}^{+}\left(a_{i}\right)\cup\mathit{eff}^{+}\left(a_{j}\right)\right)\backslash\mathit{eff}^{-}\left(a_{j}\right)\]
More than two operators can be encapsulated in a single macro by iteratively repeating the described process. Figure 5 shows the macro-operator pick-move-drop for the Gripper domain model. It has been generated by composing the primitive operators pick, move, and drop, with the idea of providing a single operator that can represent a whole movement of a ball from its initial position to its goal position. It can be added to the original domain model, and planning engines will take it into account when solving a given planning task. Notably, the resulting plans may include the macro operator and therefore, to be valid with regard to the original domain model, will need to be parsed, replacing each macro with the corresponding sequence of primitive operators.
Figure 5: An example macro-operator that encapsulates the sequence of operators pick(?obj ?from ?gripper), move(?from ?to), drop(?obj ?to ?gripper). For the sake of clarity, we use the same name for matched variables between operators.
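The composition equations above translate directly into code. The following Python sketch (ours, with illustrative ground Gripper actions encoded as (pre, eff-, eff+) triples of atom sets) builds the pick-move-drop macro by iterated composition:

# Sketch of the macro composition equations above, for ground actions
# given as (pre, eff_neg, eff_pos) triples of atom sets.
def compose(a_i, a_j):
    pre_i, neg_i, pos_i = a_i
    pre_j, neg_j, pos_j = a_j
    pre = pre_i | (pre_j - pos_i)     # pre(a_ij)
    neg = (neg_i | neg_j) - pos_j     # eff-(a_ij)
    pos = (pos_i | pos_j) - neg_j     # eff+(a_ij)
    return pre, neg, pos

pick = ({"at b1 ra", "at-robby ra", "free g"},
        {"at b1 ra", "free g"}, {"carry b1 g"})
move = ({"at-robby ra"}, {"at-robby ra"}, {"at-robby rb"})
drop = ({"carry b1 g", "at-robby rb"},
        {"carry b1 g"}, {"at b1 rb", "free g"})
# pick-move-drop built by composing iteratively, as described above:
macro = compose(compose(pick, move), drop)
print(macro[0])   # preconditions of the resulting macro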
### Operators Elimination
Operators elimination aims at reducing the branching factor by removing from the domain model those operators whose effects can be achieved by executing a sequence of other operators. In a nutshell, the main point is to minimise the size of the model by identifying operators that can be considered redundant. It should of course be noted that it may not always be possible to remove operators from a domain model. Haslum and Jonsson (2000) introduce a technique for performing operators elimination, by formally defining the notions of a redundant operator and of a minimal set of operators, also proposing a greedy algorithm to identify the minimal set of operators for a given domain model.
Operators elimination may be of limited impact when used on its own, particularly in the case of highly engineered domain models. However, it can be particularly helpful when performed after another reformulation approach. For instance, Chrpa and Bartak (2009) discuss operators elimination after the exploitation of entanglements, and there has been a large body of work investigating the elimination of primitive operators that are encapsulated into macros (e.g., (Chrpa and Vallati, 2019, 2010, 2013)).
Considering the Gripper domain, in the original model shown in Figure 1 there are no redundant operators that can be removed. However, if the domain model is extended by adding the macro shown in Figure 5, then both the pick and drop operators can be removed without compromising the solvability of problems sharing the structure of the one shown in Figure 2, i.e. where the goal is not for the robot to hold a ball in its gripper.
### Bagged Representation
In domain models encoded in PDDL, it is usually the case that each object is uniquely identified, even if it is not important to distinguish between objects. In the presence of large sets of objects, this can lead to an explosion of the combinatorial
problem, which needs to take into account the specific information of each individual object. The bagged representation reformulation addresses this issue: in cases where only the number of objects is relevant, and it is not important to have the ability to distinguish between objects, they can be represented as "bags" of identical objects. The main advantage is a reduced branching factor, obtained by essentially pruning states that are identical up to a renaming of objects.
Bagged representation was first introduced in 2013 (Riddle et al., 2013), and then further extended by providing an in-depth analysis of its impact on well-known benchmarks and automated techniques to perform the reformulation (Riddle et al., 2015, 2015, 2016).
When it comes to reformulating the Gripper domain model, objects of type ball are an excellent candidate to be represented using bagged representation. It is important to know their number in every room, but it is not important to know which specific ball is where. Figure 6 presents an excerpt of the reformulated domain and problem models. Objects of type ball are removed, and are substituted by counters. The predicate count is used to record the number of balls in a room. The predicate more is used to link together the different possible values of a counter. This is required because in classical planning there is no notion of numeric elements, so this construction allows Boolean predicates such as more to be used as counters. The reformulated operator pick is also shown in Figure 6, with the main change being that it now updates the counter of the room where it is applied. The drop operator (not shown in the figure) is modified in a similar way. Finally, the problem model is reformulated by removing the ball objects, appropriately setting the initial values of the counters for the considered rooms, and expressing the goal in terms of the number of balls that need to be in the final room.
(:predicates ... (count ?b ?r ?n) (more ?n1 ?n2))

(:action pick
 :parameters (?n1 ?n0 ?obj ?room ?gripper)
 :precondition (and (ball ?obj) (room ?room)
   (gripper ?gripper) (at-robby ?room) (free ?gripper)
   (more ?n1 ?n0) (count ?obj ?room ?n0))
 :effect (and (carry ?obj ?gripper)
   (not (count ?obj ?room ?n0))
   (count ?obj ?room ?n1) (not (free ?gripper))))
...

(:objects n4 n3 n2 n1 n0 rooma roomb ballX left right)
(:init ... (ball ballX) (more n0 n1) (more n1 n2)
  (count ballX rooma n4) (count ballX roomb n0) ...)
(:goal (count ballX roomb n4)))

Figure 6: An excerpt of the Gripper domain and problem models reformulated using bagged representation for objects of the type ball.
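The rewrite of the problem model is mechanical once the bag and the maximum count are known. A possible Python sketch (ours; the atom strings follow the encoding of Figure 6) generating the counter-based initial state and goal is:

# Sketch of the bagged-representation rewrite for the ball objects:
# k indistinguishable balls per room become a counter over n0..nk.
def bag_problem(balls_in_rooma, max_balls):
    nums = [f"n{i}" for i in range(max_balls + 1)]
    more = {f"more {nums[i]} {nums[i+1]}" for i in range(max_balls)}
    init = more | {f"count ballX rooma {nums[balls_in_rooma]}",
                   f"count ballX roomb {nums[0]}",
                   "at-robby rooma"}
    goal = {f"count ballX roomb {nums[balls_in_rooma]}"}
    return init, goal

init, goal = bag_problem(4, 4)
print(sorted(goal))   # ['count ballX roomb n4']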
### Action Schema Splitting
The presence of operators with a large number of parameters can be problematic for the grounding step of domain-independent planning engines, as they usually have to instantiate all possible actions before filtering out those that are irrelevant. The number of ground actions grows exponentially with the number of parameters and the number of objects of the problem to solve. In order to address this issue, the idea of action schema splitting [Areces et al., 2014] is to split large operators into smaller ones (with regard to the number of parameters). The main aim is to break the exponential growth by dividing parameters between different operators. While the idea is intuitively easy, its exploitation has a number of hidden issues to be taken into account. When an operator is broken into two smaller operators, the order in which these two operators are executed becomes important, as is whether their execution can be interleaved with different operators, which can change the state of the world in some unexpected ways.
Considering the example Gripper domain model, in the original version of the model there is no operator that is suitable to be split using the described methodology. However, action schema splitting can be understood also as the "opposite" of macro-operators and in fact, it can also be used to find reformulations of models by first encapsulating primitive operators into macros, and then splitting them in different ways. For this reason, to exemplify the use of this technique, we consider the macro encapsulating pick, move, drop operators shown in Figure 5 and we split it into two operators, pick-move and drop. Figure 7 shows the resulting operators. It is worth highlighting that the example also shows a drawback of the action schema splitting technique, i.e. the fact that the resulting operators may have the same number of parameters as the initial large operator.
Figure 7: Action Schema Splitting applied to the macro operator shown in Figure 5, and resulting in two operators.
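As a rough illustration of the mechanics (and of the parameter-count drawback just mentioned), the following Python sketch splits an operator into a "commit" half and an "apply" half linked by a fresh staged token. This is our simplified illustration, not the algorithm of Areces et al. (2014), which must additionally guard against harmful interleavings:

# Naive schema split: first half checks the preconditions and records a
# staged token; second half consumes the token and applies the effects.
# Note the token still carries all parameters, echoing the drawback that
# the resulting operators may keep the same parameter count.
def split_operator(name, params, pre, eff_neg, eff_pos):
    token = "staged-" + name + " " + " ".join(params)
    first = (name + "-1", set(pre), set(), {token})
    second = (name + "-2", {token}, set(eff_neg) | {token}, set(eff_pos))
    return first, second

first, second = split_operator("pick", ["?b", "?r", "?g"],
                               {"at ?b ?r", "free ?g"},
                               {"at ?b ?r", "free ?g"},
                               {"carry ?b ?g"})
print(first[3], second[1])   # the linking staged token

Because the second half only requires the token, other actions interleaved between the two halves could invalidate the original preconditions; handling this soundly is precisely one of the hidden issues discussed above.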
### Domain Model Configuration
It is well known that the way in which elements of planning models are ordered can have an impact on the performance of domain-independent planning engines [Howe and Dahlman, 2002]. In this context, the term elements refers to operators, pre- and post-conditions, predicate definitions, etc. in domain models, and to objects and the listing of initial and goal state predicates in problem definitions.
The idea behind domain model configuration is to identify an ordering of elements that can improve the performance of a considered domain-independent planning engine. Vallati et al. [2015] introduce an approach that leverages algorithm configuration techniques to identify a suitable configuration of a domain model to improve the performance of a planning engine. The work has been subsequently extended [Vallati et al., 2021] to also consider cases where macro-operators have to be added to the domain model. Vallati et al. [2017] describe a method for the online reordering of domain models by means of dedicated heuristics, based on aspects such as the number of preconditions, number of effects, etc. In a different line of work, Vallati and Serina [2018] explore the configuration of planning problem models, by considering a structure called Planning Encoding Graph (PEG) [Serina, 2010] to produce information that helps in creating the basis on which the reordering of elements should be done. In a nutshell, the PEG can provide information about how important some objects of the problem are, based on their involvement in predicates of both the initial and goal descriptions. This knowledge can then be exploited in the ordering of the predicates, according to the objects that they deal with.
Considering the guidelines in [Vallati et al., 2015], there are no unpromising operators that we may prefer to put last in the Gripper domain model. However, following the introduced notion of _directionality_, we may change the ordering of operators to pick, move, drop. In this way, the ordering of operators follows the expected typical ordering of corresponding actions in solution plans, which has been demonstrated to be useful for a range of planning engines.
## 5 Qualitative Comparison
Having completed an overview of the existing literature on domain- and planner-independent reformulation for classical planning, we are now able to qualitatively compare the considered techniques. We focus on the advantages and disadvantages of the reviewed techniques, with the aim of providing useful guidelines for planning experts and practitioners in the process of selecting a promising technique to improve planning performance on a domain of interest. Table 1 gives an overview of the reviewed reformulation techniques in terms of their main advantages and major potential drawbacks.
| Reformulation Approach | Benefits | Drawbacks |
| --- | --- | --- |
| Macro-Operators | Reduce depth (reduce the transitions needed to reach goal states). | Increase branching factor. Increase of the ground size. |
| Entanglements | Reduce the branching factor. | Potentially incomplete. |
| Actions Elimination | Reduce the branching factor. | Rarely applicable. Potentially incomplete. |
| Bagged Representation | Reduce the branching factor. | Only applicable when having indistinguishable numerable objects. |
| Action Schema Splitting | Reduce the ground size of the problem. | Potential increase of the branching factor. |
| Domain Model Configuration | Ease the exploration of the search space. | Can have limited impact if planners re-order elements internally. Potential reduction of performance if wrong ordering is used. |

Table 1: Qualitative comparison of the main strengths and weaknesses of the reviewed reformulation techniques.
As indicated in Table 1, most reformulation techniques aim at making the exploration of the search space easier by reducing the branching factor, reducing the number of steps needed to reach a goal state, or reducing the ground size of the problem to be solved. However, every technique has potential drawbacks to be weighed when deciding whether to exploit it. Drawbacks range from a serious potential loss of completeness to, at the milder end, a merely limited impact on performance. The use of macros can boost the performance of a planning engine by reducing the number of steps needed to reach a goal state, at the cost of a potentially significant increase in the branching factor and a larger ground problem. Entanglements aim at reducing the branching factor and the ground size of a problem by eliminating unpromising actions, but in doing so can remove some or all of the paths to goal states. In a similar fashion, action elimination directly reduces the ground size by eliminating operators deemed to be useless, but on its own it is rarely applicable to well-formed domain models. Further, if applied in an "aggressive" way, it may undermine completeness. Bagged representation provides an elegant way to reduce the branching factor by removing the differences between objects, but it is again a technique that is rarely applicable on its own. Action schema splitting aims at reducing the ground size of a problem by splitting complex operators, but this comes at the cost of increasing the branching factor. Finally, configuring a domain model can help improve the performance of planning engines by listing the most important planning elements first, but if used inappropriately it can also lead to adverse effects.
It should be noted, however, that in many cases it is possible to combine different techniques to maximise the performance boost and mitigate the drawbacks of individual techniques (Vallati and Kitchin, 2020). As mentioned in previous sections, macros are frequently combined with other reformulation techniques. For instance, they have been used with entanglements (Chrpa et al., 2014), with action elimination (Chrpa and Vallati, 2022), or with domain model configuration (Vallati et al., 2021). While domain model configuration can be straightforwardly combined with any other reformulation technique, the combination of macros and entanglements can lead to significant performance improvements, as entanglements tackle the main drawback associated with the use of macros, i.e. the increased branching factor.
The main take-home messages that can be distilled from the performed systematic review of the literature can be summarised as follows.
* The majority of existing literature is dedicated to techniques for generating macros, while comparatively limited effort has been devoted to investigating alternative or different reformulation techniques. Admittedly, the planning community should aim at expanding the spectrum of reformulation techniques as much as possible, to foster their combination and the possibility to use them fruitfully in large and challenging domain models, where they are needed the most.
* A substantial number of existing reformulation techniques aim at addressing issues that are typical of planning engines that reason on the basis of a ground representation. This focus is historically motivated, as domain-independent planning engines traditionally ground the lifted PDDL representation. However, as engines capable of reasoning with lifted or partially ground representations are gaining momentum (see for instance (Horcik and Fiser, 2021, Correa et al., 2020)), reformulations that are effective also on lifted models can and should be investigated more thoroughly.
* On a similar note to the previous point, there is a lack of reformulations that look into exploiting the potential synergies with planning approaches based on compiling the planning problem into an equivalent SAT or ASP problem. Given that reformulations for such languages have been proposed (see for instance (Dodaro et al., 2022)), there may be changes to PDDL models that can result in a performance boost in the resulting compiled instances.
* A potentially interesting area to explore is the reformulation of problem models. Existing techniques tend to have a strong focus on the domain model, to be widely applicable to problems from the same domain. However, as planning is more and more used in real-world applications, the complexity of problems is increasing as well. Further, in some domains like urban traffic control (McCluskey and Vallati, 2017) or energy network balancing (Piacentini et al., 2013), large parts of the problem models remain the same between problems (i.e. the network description); it would therefore be worth exploring whether problem models can benefit from a targeted reformulation.
* We recognise that further work is needed to improve the usability of planning techniques in real-world applications, as the main focus of existing reformulation techniques is still on classical planning. However, as demonstrated by a number of recent works, in many cases classical planning reformulation approaches can be extended to, or provide inspiration for, non-classical systems.
## 6 Beyond classical planning
In this review, we focused on reformulation techniques for classical planning. However, it is worth mentioning that there is also a (limited) body of work looking at reformulation for problems beyond classical planning. In this section we provide an overview, by no means complete, of some of the reformulation approaches introduced for planning problems beyond classical. On the one hand, non-classical planning models include additional language features that can complicate the reformulation process. Examples of languages that have the expressive power to represent non-classical problems include PDDL 2 (Fox and Long, 2003) and PDDL 3 (Gerevini et al., 2009), for numeric, temporal planning and for encoding preferences; PDDL + (Fox and Long, 2006) for mixed discrete-continuous problems, PPDDL (Younes and Littman, 2004) and RDDL (Sanner, 2010) for probabilistic planning, and MA-PDDL (Kovacs, 2012) for multi-agent problems. On the other hand, it can
be argued that non-classical planning is needed in most real-world applications, and it is therefore imperative to investigate techniques and approaches to boost planning performance in such circumstances.
A line of research that looks into extending reformulation techniques for classical planning to more expressive cases is that of Chrpa et al. (2015), which investigates the use of entanglements in numeric planning problems. Specifically, the authors extend the notion of outer entanglements to allow them to handle numeric variables. Similarly, Scala (2014) extended macros to be used in the presence of numeric fluents. Finally, Franco et al. (2019) present a technique to reduce the ground size of PDDL+ planning problems by reducing the arity of sparse predicates, drawing a parallel to bagged representation for classical planning.
A different line of research focuses instead on translating a planning model from an original input language to a different, less expressive one. The main advantage of this approach is increasing the number of planning engines that are able to reason upon the planning problem, and leveraging existing robust technologies devised for solving more restricted cases. Examples of this class of reformulation approaches include the work of Percassi et al. (2021), which translates PDDL+ problems into PDDL2.1 ones; approaches for translating conformant planning problems into classical problems (Grastien and Scala, 2020; Taig and Brafman, 2013); the representation of uncertainty in conformant planning problems (Palacios and Geffner, 2009); the translation of complex temporal aspects into PDDL2.1 (Cooper et al., 2010); and the removal from PDDL3 of soft trajectory constraints (Percassi and Gerevini, 2019).
## 7 Conclusion
Reformulation represents a well-known class of approaches for improving the performance of domain-independent planning systems. The main idea is to represent a given planning problem in a way that increases the efficiency of a selected planning engine to be used to solve the considered problem or class of problems. In this paper, we reviewed the state of the art of reformulation approaches for classical planning. We first presented in detail the different techniques, and then took the opportunity to provide a qualitative comparison of their expected benefits and drawbacks, with the aim of providing some useful guidelines to select the most appropriate reformulation to be used for a considered problem. Notably, the provided comparison also helps in highlighting potentially fruitful combinations of reformulation techniques. Finally, to emphasise the importance of reformulation for planning models beyond classical ones, we briefly presented well-known work in the area.
### Acknowledgement
Mauro Vallati is supported by the UKRI Future Leaders Fellowship [grant number MR/T041196/1].
|
2303.06062 | Jordan algebra in R | In this short article I introduce the "jordan" package which provides
functionality for working with different types of Jordan algebra. I give some
numerical verification of the Jordan identity for the five types of Jordan
algebras. The package is available on CRAN at
https://CRAN.R-project.org/package=jordan. | Robin K. S. Hankin | 2023-02-21T01:02:40Z | http://arxiv.org/abs/2303.06062v1 | # Jordan algebra in R
###### Abstract
In this short article I introduce the jordan package which provides functionality for working with different types of Jordan algebra. I give some numerical verification of the Jordan identity for the five types of Jordan algebras. The package is available on CRAN at [https://CRAN.R-project.org/package=jordan](https://CRAN.R-project.org/package=jordan).
## 1 Introduction: Jordan algebras
A _Jordan algebra_ is a non-associative algebra over the reals with a bilinear multiplication that satisfies the following identities:
\[xy=yx\]
\[(xy)(xx)=x(y(xx))\]
(the second identity is known as the Jordan identity). In the literature, multiplication is usually indicated by juxtaposition, but one sometimes sees \(x\circ y\). Package idiom is to use an asterisk, as in x*y. Following McCrimmon [1], there are five types of Jordan algebras:
* **type 1**: Real symmetric matrices, class real_symmetric_matrix, abbreviated in the package to rsm
* **type 2**: Complex Hermitian matrices, class complex_herm_matrix, abbreviated to chm
* **type 3**: Quaternionic Hermitian matrices, class quaternion_herm_matrix, abbreviated to qhm
* **type 4**: Albert algebras, the space of \(3\times 3\) Hermitian octonionic matrices, class albert
* **type 5**: Spin factors, class spin
(of course, the first two are special cases of the third). The jordan package provides functionality to manipulate jordan objects using natural R idiom. Objects of all these classes are stored in matrix form with columns being elements of the jordan algebra. The first four classes are matrix-based in the sense that the algebraic objects are symmetric or Hermitian matrices (the S4 class is jordan_matrix). The fifth class, spin factors, is not matrix based.
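Before turning to package idiom, the defining identities are easy to check in base R, independently of the package. The following sketch verifies commutativity and the Jordan identity for random symmetric matrices under \(x\circ y=(xy+yx)/2\):

## Base-R sketch: check the two defining identities for random 5x5
## real symmetric matrices under x o y = (xy + yx)/2.
jp <- function(x, y) (x %*% y + y %*% x) / 2            # Jordan product
rsym <- function(n = 5) { m <- matrix(rnorm(n * n), n, n); (m + t(m)) / 2 }
set.seed(1)
x <- rsym(); y <- rsym()
max(abs(jp(x, y) - jp(y, x)))                            # 0: commutative
max(abs(jp(jp(x, y), jp(x, x)) - jp(x, jp(y, jp(x, x)))))  # ~1e-15: Jordan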
## 2 Matrix-based Jordan algebras, types 1,2,3
The four matrix-based Jordan algebras have elements which are square matrices. The first three classes are real (symmetric) matrices, complex (Hermitian) matrices, and quaternionic (Hermitian); type 4 is considered separately at the end. Types 1,2, and 3 all behave in the same way from a package idiom perspective. Consider:
> library(jordan)
> x <- rsm()   # "Random Real Symmetric Matrix"
> y <- rsm()
> z <- rsm()
> x
Vector of real symmetric matrices with entries
[1] [2] [3]
[1,] -1.41 1.76 -0.01
[2,] 1.70 -1.67 -0.41
[3,] 0.57 0.28 -0.75
[4,] -0.34 -0.21 -0.23
[5,] 0.16 1.18 1.63
...
> x + 100
Vector of real symmetric matrices with entries
[1] [2] [3]
[1,] 98.59 101.76 99.99
[2,] 101.70 98.33 99.59
[3,] 100.57 100.28 99.25
[4,] 99.66 99.79 99.77
[5,] 100.16 101.18 101.63
...
[11,] 101.03 99.98 101.81
[12,] 101.78 99.57 102.14
[13,] 100.31 99.58 101.12
[14,] 100.01 99.73 99.44
[15,] 100.69 101.46 100.21
(the last line is motivated by analogy with M + x, for M a matrix and x a scalar). Jordan objects may be multiplied using the rule \(x\circ y=(xy+yx)/2\):
> x*y
Vector of real symmetric matrices with entries
[1] [2] [3]
[1,] -0.39760 -0.47830 -1.42100
[2,] 2.44290 0.55960 3.98550
[3,] 2.09730 -1.98220 5.26770
[4,] -0.45340 0.02920 -0.32840
[5,] 1.07915 -0.68270 -1.66305
...
> x*(y*z) - (x*y)*z
Vector of real symmetric matrices with entries
...
[4,] 1.4684547 0.8016225 1.1587312
[5,] -1.4314930 0.6700580 4.4210495
...
[11,] 4.1317442 1.0128450 1.9277136
[12,] 0.5929205 0.2259248 -1.4115125
[13,] 0.8094577 0.7190828 6.6453095
[14,] -7.9400165 0.0537760 -0.7245633
[15,] 6.0556225 1.1611970 9.5536030

showing numerically that \(x(yz)\neq(xy)z\). However, the Jordan identity \((xy)(xx)=x(y(xx))\) is satisfied:
> LHS <- (x*y)*(x*x)
> RHS <- x*(y*(x*x))
> LHS-RHS
Vector of real symmetric matrices with entries
[1] [2] [3]
[1,] -1.776357e-15 -4.440892e-15 -8.881784e-16
[2,] 0.000000e+00 1.776357e-15 -3.552714e-15
[3,] 7.105427e-15 -1.776357e-15 0.000000e+00
[4,] 0.000000e+00 -2.220446e-16 -1.332268e-15
[5,] -1.776357e-15 -8.881784e-16 -3.552714e-15
...
[11,] 4.440892e-16 0.000000000 -1.776357e-15
[12,] 7.105427e-15 -8.881784e-16 0.000000e+00
[13,] -1.332268e-15 6.661338e-16 1.421085e-14
[14,] 1.776357e-15 -1.776357e-15 -1.776357e-15
[15,] 0.000000e+00 2.220446e-16 7.105427e-15
(the entries are zero to numerical precision). If we wish to work with the matrix itself, a single element may be coerced with as.matrix():
> M1 <- as.matrix(x[1])
> (M2 <- as.matrix(x[2]))
[,1] [,2] [,3] [,4] [,5]
[1,] 1.76 -1.67 -0.21 1.64 -0.02
[2,] -1.67 0.28 1.18 1.19 -0.43
[3,] -0.21 1.18 -1.36 -0.46 -0.42
[4,] 1.64 1.19 -0.46 1.57 -0.27
[5,] -0.02 -0.43 -0.42 -0.27 1.46
(in the above, observe how the matrix is indeed symmetric). We may verify that the multiplication rule is indeed being correctly applied:
> (M1 %*% M2 + M2 %*% M1)/2 - as.matrix(x[1]*x[2])
[,1] [,2] [,3] [,4] [,5]
[1,] 0 0 0 0 0
[2,] 0 0 0 0 0
[3,] 0 0 0 0 0
[4,] 0 0 0 0 0
[5,] 0 0 0 0 0
It is also possible to verify that symmetry is preserved under the Jordan operation:
> jj <- as.matrix(x[1]*x[2])
> jj-t(jj)
[,1] [,2] [,3] [,4] [,5]
[1,] 0 0 0 0 0
[2,] 0 0 0 0 0
[3,] 0 0 0 0 0
[4,] 0 0 0 0 0
[5,] 0 0 0 0 0

## 3 Spin factors, type 5

The fifth type of Jordan algebra is the spin factor. An element of a spin factor is a pair \((a,\mathbf{a})\) with \(a\in\mathbb{R}\) and \(\mathbf{a}\in\mathbb{R}^{n}\); addition, scalar multiplication, and the Jordan product are given by

\[(a,\mathbf{a})+(b,\mathbf{b})=(a+b,\mathbf{a}+\mathbf{b})\]

\[\alpha(a,\mathbf{a})=(\alpha a,\alpha\mathbf{a})\]

\[(a,\mathbf{a})(b,\mathbf{b})=(ab+\langle\mathbf{a},\mathbf{b}\rangle,a \mathbf{b}+b\mathbf{a})\]
where \(a,b,\alpha\in\mathbb{R}\), and \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{n}\). Here \(\langle\cdot,\cdot\rangle\) is an inner product defined on \(\mathbb{R}^{n}\) (by default we have \(\langle(x_{1},\ldots,x_{n}),(y_{1},\ldots,y_{n})\rangle=\sum x_{i}y_{i}\) but this is configurable in the package).
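The multiplication rule is simple enough to check directly in base R. The following sketch (ours, independent of the package) implements a spin-factor element as list(r, v), uses the default inner product, and verifies the Jordan identity numerically:

## Base-R sketch of spin-factor arithmetic: an element is list(r, v)
## with r a scalar and v a vector; the product follows the rule above.
sp   <- function(r, v) list(r = r, v = v)
smul <- function(x, y) sp(x$r * y$r + sum(x$v * y$v),   # <a,b> term
                          x$r * y$v + y$r * x$v)
sadd <- function(x, y) sp(x$r + y$r, x$v + y$v)
sneg <- function(x) sp(-x$r, -x$v)
set.seed(1)
I <- sp(rnorm(1), rnorm(5)); J <- sp(rnorm(1), rnorm(5))
lhs <- smul(smul(I, J), smul(I, I))     # (IJ)(II)
rhs <- smul(I, smul(J, smul(I, I)))     # I(J(II))
unlist(sadd(lhs, sneg(rhs)))            # ~0: Jordan identity holds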
So if we have \(\mathcal{I},\mathcal{J},\mathcal{K}\) spin factor elements it is clear that \(\mathcal{IJ}=\mathcal{JI}\) and \(\mathcal{I}(\mathcal{J}+\mathcal{K})=\mathcal{IJ}+\mathcal{IK}\). The Jordan identity is not as easy to see but we may verify all the identities numerically:
> I <- rspin()
> J <- rspin()
> K <- rspin()
> I
Vector of spin objects with entries
[1] [2] [3]
r 0.49 0.38 -2.13
[1] 0.41 -0.48 -0.99
[2] -0.21 -0.86 -0.62
[3] -0.25 -0.41 -1.01
[4] -2.48 1.86 -0.65
[5] -0.02 -0.10 -1.17
> I*J - J*I # commutative:
Vector of spin objects with entries
[1] [2] [3]
r 0 0 0
[1] 0 0 0
[2] 0 0 0
[3] 0 0 0
[4] 0 0 0
[5] 0 0 0
> I*(J+K) - (I*J + I*K) # distributive:
Vector of spin objects with entries
[1] [2] [3]
r 2.220446e-16 3.330669e-16 0.000000e+00
[1] 1.110223e-16 0.000000e+00 1.110223e-16
[2] 0.000000e+00 2.220446e-16 0.000000e+00
[3] 0.000000e+00 -1.110223e-16 0.000000e+00
[4] 0.000000e+00 0.000000e+00 0.000000e+00
[5] -1.110223e-16 0.000000e+00 -2.220446e-16
> I*(J*K) - (I*J)*K # not associative:
Vector of spin objects with entries
[1] [2] [3]
r 8.881784e-16 0.000000 0.000000
[1] -2.814170e-01 -3.279000 -1.032597
[2] 1.222649e+00 -0.163754 -5.508214
[3] -1.216666e+00 -0.511667 2.281628
[4] -3.774480e+00 1.28646 1.430624
[5] 1.619257e+00 -0.956662 -3.466692
> (I*J)*(I*I) - I*(J*(I*I)) # obeys the Jordan identity
Vector of spin objects with entries
[1] [2] [3]
r -3.552714e-15 -8.881784e-16 0.000000e+00
[1] 0.000000e+00 0.000000e+00 8.881784e-16
[2] 0.000000e+00 0.000000e+00 0.000000e+00
[3] -1.110223e-16 0.000000e+00 0.000000e+00
[4] 0.000000e+00 8.881784e-16 3.552714e-15
[5] -2.220446e-16 -2.220446e-16 0.000000e+00
## 4 Albert algebras, type 4
Type 4 Jordan algebra corresponds to \(3\times 3\) Hermitian matrices with octonions for entries. This is class albert in the package. Note that octonionic Hermitian matrices of order 4 or above do not satisfy the Jordan identity and are therefore not Jordan algebras: there is a numerical illustration of this fact in the onion package vignette. We may verify the Jordan identity for \(3\times 3\) octonionic Hermitian matrices using package class albert:
> x <- ralbert()
> y <- ralbert()
> x
Vector of Albert matrices with entries
[1] [2] [3]
d1 1.54 0.03 -0.82
d2 -1.48 -0.65 -0.74
d3 -0.19 0.31 0.24
Re(o1) -0.40 0.39 0.34
i(o1) 0.05 0.84 0.33
j(o1) -1.31 -0.11 -0.15
k(o1) -2.09 -1.05 0.08
l(o1) -0.60 1.28 -1.46
il(o1) -0.94 1.69 -1.82
jl(o1) 1.30 -0.49 -2.73
kl(o1) 2.42 -0.65 0.50
Re(o2) -1.08 -0.40 0.53
i(o2) -1.41 0.87 0.57
j(o2) 0.84 -0.60 -0.58
k(o2) -0.75 0.47 0.95
l(o2) -0.62 0.84 -0.67
il(o2) -1.18 0.46 0.36
jl(o2) -0.15 0.97 0.94
kl(o2) -0.69 -2.45 -0.81
Re(o3) 0.78 -0.58 0.25
i(o3) -1.12 -0.33 -1.70
j(o3) -0.29 0.35 0.16
k(o3) 1.68 0.18 -0.06
l(o3) 0.92 0.84 0.62
il(o3) -0.25 0.01 0.05
jl(o3) -0.43 0.68 2.27
kl(o3) -0.69 2.60 -1.46
Vector of Albert matrices with entries [1] [2] [3] d1 1.509903e-14 7.105427e-15 8.881784e-15 d2 -2.842171e-14 1.421085e-14 -7.105427e-15 d3 -7.105427e-15 1.776357e-15 -7.105427e-15 Re(01) 1.509903e-14 -1.065814e-14 0.00000e+00 i(01) 7.105427e-15 3.552714e-15 -5.329071e-15 j(01) 7.105427e-15 -1.776357e-15 0.000000e+00 k(01) 0.000000e+00 1.421085e-14 0.000000e+00 l(01) 7.105427e-15 0.000000e+00 00.000000e+00 il(01) 0.000000e+00 -1.421085e-14 -3.552714e-15 j(01) 0.000000e+00 1.421085e-14 0.000000e+00
kl(o1) 2.842171e-14 -8.881784e-16 0.0000000+00 Re(o2) -2.131628e-14 -7.105427e-15 0.000000e+00 i(o2) -7.105427e-15 1.776357e-15 -2.220446e-15 j(o2) 3.552714e-15 3.552714e-15 -7.105427e-15 k(o2) 0.000000e+00 7.105427e-15 -7.105427e-15 1(o2) 1.421085e-14 -7.105427e-15 -7.105427e-15 i1(o2) -1.421085e-14 1.421085e-14 -6.217249e-15 j1(o2) -7.105427e-15 -3.552714e-15 3.552714e-15 kl(o2) -1.421085e-14 -1.065814e-14 3.552714e-15 Re(o3) 7.105427e-15 0.000000e+00 -3.552714e-15 i(o3) -2.842171e-15 5.329071e-15 0.000000e+00 j(o3) 0.000000e+00 4.440829e-16 0.000000e+00 k(o3) 0.000000e+00 -7.105427e-15 -3.552714e-15 l(o3) 0.000000e+00 -1.421085e-14 0.000000e+00 i1(o3) 1.421085e-14 1.065814e-14 8.881784e-15 j1(o3) -2.842171e-14 1.065814e-14 0.000000e+00 kl(o3) -1.421085e-14 7.105427e-15 -7.105427e-15
## 5 Special identities
In 1963, C. M. Glennie discovered a pair of identities satisfied by special Jordan algebras but not the Albert algebra. Defining
\[U_{x}(y)=2x(xy)-(xx)y\]
\[\{x,y,z\}=2(x(yz)+(xy)z-(xz)y)\]
(it can be shown that Jacobson's identity \(U_{U_{x}(y)}=U_{x}U_{y}U_{x}\) holds), Glennie's identities are
\[H_{8}(x,y,z)=H_{8}(y,x,z)\qquad H_{9}(x,y,z)=H_{9}(y,x,z)\]
(see McCrimmon 2004 for details), where
\[H_{8}(x,y,z)=\{U_{x}U_{y}(z),z,xy\}-U_{x}U_{y}U_{z}(xy)\]
and
\[H_{9}(x,y,z)=2U_{x}(z)U_{y,x}U_{z}(yy)-U_{x}U_{z}U_{x,y}U_{y}(z)\]
### Numerical verification of Jacobson
We may verify Jacobson's identity:
```
> U <- function(x){function(y){2*x*(x*y) - (x*x)*y}}
> diff <- function(x,y,z){
+     LHS <- U(x)(U(y)(U(x)(z)))
+     RHS <- U(U(x)(y))(z)
+     return(LHS - RHS)  # zero if Jacobson holds
+ }
```
Then we may numerically verify Jacobson for type 3-5 Jordan algebras:
```
> diff(ralbert(),ralbert(),ralbert())  # Albert algebra obeys Jacobson:
Vector of Albert matrices with entries
                  [1]            [2]            [3]
d1      3.637979e-12  -1.477929e-12   2.273737e-12
d2     -3.637979e-12  -9.094947e-13   4.547474e-13
d3      1.364242e-12   6.821210e-13   0.000000e+00
Re(o1) -1.364242e-12  -3.979039e-13   6.821210e-13
i(o1)   0.000000e+00  -1.136868e-13   0.000000e+00
j(o1)   4.547474e-13   4.547474e-13  -6.821210e-13
k(o1)  -2.273737e-13   1.364242e-12  -4.547474e-13
l(o1)  -2.273737e-12  -1.364242e-12   4.547474e-13
il(o1)  0.000000e+00  -1.364242e-12   9.094947e-13
jl(o1)  1.818989e-12   3.410605e-13  -4.547474e-13
kl(o1) -2.728484e-12  -2.273737e-13   1.364242e-12
Re(o2) -1.364242e-12   1.421085e-12   1.818989e-12
i(o2)   9.094947e-13   1.136868e-13   9.094947e-13
j(o2)   1.818989e-12  -7.958079e-13  -9.094947e-13
k(o2)   9.094947e-13  -3.410605e-13   0.000000e+00
l(o2)  -9.094947e-13  -1.364242e-12   1.136868e-12
il(o2) -1.818989e-12   0.000000e+00   9.094947e-13
jl(o2) -9.094947e-13   1.477929e-12  -9.094947e-13
kl(o2)  4.547474e-13  -3.410605e-13   0.000000e+00
Re(o3)  0.000000e+00   1.136868e-12  -2.273737e-13
i(o3)   1.818989e-12  -2.501110e-12  -6.821210e-13
j(o3)  -1.818989e-12  -7.673862e-13   4.547474e-13
k(o3)  -1.364242e-12  -2.273737e-13  -4.547474e-13
l(o3)  -4.547474e-13   0.000000e+00   0.000000e+00
il(o3)  1.818989e-12  -2.273737e-13   4.547474e-13
jl(o3)  4.547474e-13   3.183231e-12   0.000000e+00
kl(o3)  1.818989e-12   1.136868e-12   4.547474e-13
> diff(rqhm(),rqhm(),rqhm())  # Quaternion Jordan algebra obeys Jacobson:
Vector of quaternionic Hermitian matrices with entries
               [1]            [2]            [3]
[1,] -9.094947e-13   1.364242e-12   1.818989e-12
[2,] -2.728484e-12  -1.193712e-12  -9.094947e-13
[3,]  5.456968e-12   0.000000e+00  -2.273737e-13
[4,]  0.000000e+00  -5.911716e-12  -9.094947e-13
[5,]  0.000000e+00  -4.547474e-13   1.818989e-12
...
```
### Numerical verification of \(G_{8}\)
```
> B <- function(x,y,z){2*(x*(y*z) + (x*y)*z - (x*z)*y)}  # bracket function
> H8 <- function(x,y,z){B(U(x)(U(y)(z)),z,x*y) - U(x)(U(y)(U(z)(x*y)))}
> G8 <- function(x,y,z){H8(x,y,z) - H8(y,x,z)}
```
and so we verify for type 3 and type 5 Jordans:
```
> G8(rqhm(1),rqhm(1),rqhm(1))  # Quaternion Jordan algebra obeys G8:
Vector of quaternionic Hermitian matrices with entries
                [1]
 [1,]  0.000000e+00
 [2,] -4.547474e-12
 [3,]  3.637979e-12
 [4,] -1.818989e-12
 [5,] -4.546568e-12
 ...
[41,]  0.000000e+00
[42,]  3.637979e-12
[43,]  8.185452e-12
[44,] -4.547474e-12
[45,] -6.366463e-12
> G8(rspin(1),rspin(1),rspin(1))  # Spin factors obey G8:
Vector of spin objects with entries
              [1]
r    3.552714e-15
[1]  7.494005e-16
[2] -4.440892e-16
[3] -2.220446e-16
[4] -2.664535e-15
[5]  0.000000e+00
```
again showing acceptable accuracy. The identity is _not_ true for Albert algebras:
```
> G8(ralbert(1),ralbert(1),ralbert(1))  # Albert algebra does not obey G8:
Vector of Albert matrices with entries
                [1]
d1      -3209.77325
d2      -2183.08560
d3       5392.85884
Re(o1)   3048.77534
i(o1)    -341.17026
j(o1)   -5069.00757
k(o1)   -3255.20454
l(o1)    6325.31177
il(o1)   4202.32334
jl(o1)  -3395.24245
kl(o1)   3494.94665
Re(o2)  -1279.20890
i(o2)   -9545.59707
j(o2)    -998.40758
k(o2)    2071.51608
l(o2)    1031.05238
il(o2)  -7691.31254
jl(o2)   4705.92146
kl(o2)    685.36332
Re(o3)  -7504.55626
i(o3)    4532.20967
j(o3)   -5012.57428
k(o3)     902.05461
l(o3)    -184.23557
il(o3)     46.94533
jl(o3)   8277.70343
kl(o3)    988.06732
```
### Numerical verification of \(G_{9}\)
```
> L <- function(x){function(y){x*y}}
> U <- function(x){function(y){2*x*(x*y) - (x*x)*y}}
> UZ <- function(x,y){function(z){L(x)(L(y)(z)) + L(y)(L(x)(z)) - L(x*y)(z)}}
> H9 <- function(x,y,z){2*U(x)(z)*UZ(y,x)(U(z)(y*y)) - U(x)(UZ(x,y)(U(y)(z)))}
> G9 <- function(x,y,z){H9(x,y,z) - H9(y,x,z)}
```
Then we may verify the `G9()` identity for type 3 Jordans:
```
> G9(rqhm(1),rqhm(1),rqhm(1))  # Quaternion Jordan algebra obeys G9:
Vector of quaternionic Hermitian matrices with entries
                [1]
 [1,] -1.746230e-10
 [2,] -5.820766e-11
 [3,] -1.746230e-10
 [4,]  7.275958e-12
 [5,]  2.910383e-11
 ...
[41,]  4.365575e-11
[42,]  0.000000e+00
[43,]  4.365575e-11
[44,] -9.458745e-11
[45,] -1.455192e-10
```
However, the Albert algebra does not satisfy the identity:
```
> G9(ralbert(1),ralbert(1),ralbert(1))  # Albert algebra does not obey G9:
Vector of Albert matrices with entries
                [1]
d1       32205.7845
d2       37449.9386
d3       -2734.0653
Re(o1)   21079.0754
i(o1)    -8520.7870
j(o1)     2463.1298
k(o1)    11536.5988
l(o1)    10866.9855
il(o1)   21487.3519
jl(o1)    -385.9625
kl(o1)   -7022.2586
Re(o2)   29565.4992
i(o2)   -10266.0920
j(o2)     6242.0058
k(o2)    15384.8470
l(o2)     6507.9183
il(o2)    8053.8656
jl(o2)   -2563.1248
kl(o2)     304.0676
Re(o3)   20493.9088
i(o3)    30960.4312
j(o3)   -12243.8005
k(o3)    -6366.5065
l(o3)     5962.5897
il(o3)  -12700.1558
jl(o3)  -24581.3730
kl(o3)    5110.3595
```
|
2308.04261 | Novel Area-Efficient and Flexible Architectures for Optimal Ate Pairing
on FPGA | While FPGA is a suitable platform for implementing cryptographic algorithms,
there are several challenges associated with implementing Optimal Ate pairing
on FPGA, such as security, limited computing resources, and high power
consumption. To overcome these issues, this study introduces three approaches
that can execute the optimal Ate pairing on Barreto-Naehrig curves using
Jacobean coordinates with the goal of reaching 128-bit security on the Genesys
board. The first approach is a pure software implementation utilizing the
MicroBlaze processor. The second involves a combination of software and
hardware, with key operations in $F_{p}$ and $F_{p^{2}}$ being transformed into
IP cores for the MicroBlaze. The third approach builds on the second by
incorporating parallelism to improve the pairing process. The utilization of
multiple MicroBlaze processors within a single system offers both versatility
and parallelism to speed up pairing calculations. A variety of methods and
parameters are used to optimize the pairing computation, including Montgomery
modular multiplication, the Karatsuba method, Jacobean coordinates, the Complex
squaring method, sparse multiplication, squaring in $G_{\phi 6}F_{p^{12}}$, and
the addition chain method. The proposed systems are designed to efficiently
utilize limited resources in restricted environments, while still completing
tasks in a timely manner. | Oussama Azzouzi, Mohamed Anane, Mouloud Koudil, Mohamed Issad, Yassine Himeur | 2023-08-08T13:58:20Z | http://arxiv.org/abs/2308.04261v2 | # Novel Area-Efficient and Flexible Architectures for Optimal Ate Pairing on FPGA
###### Abstract
While FPGA is a suitable platform for implementing cryptographic algorithms, there are several challenges associated with implementing Optimal Ate pairing on FPGA, such as security, limited computing resources, and high power consumption. To overcome these issues, this study introduces three approaches that can execute the optimal Ate pairing on Barreto-Naehrig curves using Jacobian coordinates with the goal of reaching 128-bit security on the Genesys board. The first approach is a pure software implementation utilizing the MicroBlaze processor. The second involves a combination of software and hardware, with key operations in \(F_{p}\) and \(F_{p^{2}}\) being transformed into IP cores for the MicroBlaze. The third approach builds on the second by incorporating parallelism to improve the pairing process. The utilization of multiple MicroBlaze processors within a single system offers both versatility and parallelism to speed up pairing calculations. A variety of methods and parameters are used to optimize the pairing computation, including Montgomery modular multiplication, the Karatsuba method, Jacobian coordinates, the Complex squaring method, sparse multiplication, squaring in \(G_{\phi 6}F_{p^{12}}\), and the addition chain method. The proposed systems are designed to efficiently utilize limited resources in restricted environments, while still completing tasks in a timely manner.
Optimal Ate pairing, Flexible architecture, Virtex-5, MicroBlaze, Montgomery modular multiplication, Karatsuba method.
## I Introduction
Cryptography is a crucial technology for ensuring the security and privacy of data in today's digital world [1, 2]. Cryptography is the practice of converting plain text into a coded message to protect it from unauthorized access or tampering. It plays a crucial role in securing communication channels and protecting sensitive information, such as financial transactions, personal information, and state secrets [3, 4]. Cryptography is widely used in various applications, including online banking, e-commerce, secure communication between individuals and organizations, and in the protection of critical infrastructure systems [5, 6]. Without cryptography, sensitive information would be vulnerable to cyber-attacks and malicious activities, leading to severe consequences such as data breaches, identity theft, and financial loss [7, 8]. Therefore, cryptography is essential for ensuring the confidentiality, integrity, and availability of data and maintaining trust in the digital world [9, 7]. On the other hand, field-programmable gate arrays (FPGAs) are increasingly being utilized in edge computing environments due to their versatility, configurability, and performance advantages [10, 11]. In edge computing, FPGAs play a crucial role in accelerating data processing, enabling real-time analytics, and enhancing overall system performance [12].
The concept of pairing functions was introduced by Andre Weil in 1948, and it was later utilized in cryptography with the employment of elliptic curve bilinear pairings. The bilinear property enables the transformation of the discrete logarithm problem from an elliptic curve to the finite field \(F_{p}\). This brought about the emergence of the MOV attack [13] and the Frey-Ruck attack [14]. The widespread use of pairing functions in cryptography emerged in the early 2000s after Joux introduced the tripartite key exchange scheme for Diffie-Hellman [15]. Since then, pairing functions have been implemented in a variety of advanced cryptosystems including identity-based signatures [16], searchable encryption [17], and functional encryption [18]. One of the most prominent applications of pairing functions is the Identity-Based Encryption (IBE) [19] proposed by Boneh and Franklin.
Pairing functions, which are used in cryptography, are typically constructed using a combination of the Miller Loop and Final Exponentiation. The performance of these functions is dependent on the arithmetic used in the primary field \(F_{p}\) and its extensions \(F_{p^{k}}\). To improve the efficiency of pairings, various curves have been discovered that offer improved computation and enhanced security. Freeman and colleagues provide a comprehensive categorization of such "pairing-friendly" curves in their work [20]. Currently, one of the most favorable options for computational efficiency and security is the use of Barreto-Naehrig (BN) curves [21]. Many articles have been published that propose protocols utilizing pairings [15, 19], while others focus on improving the computation of pairings [22, 23]. A smaller number of articles propose FPGA implementations for computing pairing functions [24, 25, 26, 27]. Recently, there has been a growing interest in implementing cryptographic pairings, with hardware implementations being considered a superior approach compared to software developments. Examples of advancements in pairing function implementations in cryptography include the following studies: In 2010, Ghosh et al. were the first to implement pairing functions on BN-curves, offering 128-bit security [28]. Moving on, Cheung et al. [29] improved execution time in their solution for optimal Ate pairing with 126-bit security by adopting the Residue Number System. Fan et al. provided a hardware implementation for pairing [30] in 2012 utilizing \(F_{p}\)-arithmetic. Then, using
\(F_{p^{k}}\)-arithmetic in hardware, Ghosh et al. created a complete hardware implementation of Ate and optimal Ate pairing [26]. A method for computing the challenging portion of the final exponentiation with the least amount of resource consumption was introduced in 2015 by Duquesne et al [31]. A high-speed and effective optimal Ate pairing processor implementation over BN and BLS12 curves on FPGA was lastly proposed by Sghaier et al. in 2018 [32].
On the other hand, the emergence of post-quantum cryptography (PQC) and the utilization of alternative schemes like Kyber and Dilithium as replacements for RSA/ECC have generated significant interest due to their ability to withstand quantum attacks [33]. An ideal choice for low-resource applications is the ECC since it offers the same level of security with smaller key sizes compared to other existing public key encryption schemes. An effective platform for an embedded co-processor is achieved by designing efficient functional units for elliptic curve computations over binary fields, making it suitable for low-resource applications. [34] presents an efficient co-processor for elliptic curve cryptography (ECC) over binary Edwards curves, designed for area-constrained devices. By utilizing state-of-the-art binary Edwards curve equations, it achieves a secure yet fast implementation of point multiplication. The co-processor offers the same level of security as other public key encryption schemes but with smaller key sizes, making it ideal for low-resource applications. Synthesis results show that it requires about 50% fewer clock cycles for point multiplication and occupies a similar silicon area compared to recent literature.
Although ECC is widely implemented and efficient, its security is reliant on the complexity of the elliptic curve discrete logarithm problem, which can be solved by quantum computers employing Shor's algorithm. To ensure long-term security, researchers have actively explored and developed PQC schemes that offer robust protection against the threats posed by quantum computing [7]. Kyber, focusing on key exchange protocols, and Dilithium, specializing in digital signatures, exemplify such schemes. The adoption of post-quantum algorithms such as Kyber and Dilithium represents a proactive approach in guaranteeing the ongoing security of cryptographic systems in anticipation of forthcoming advancements in quantum computing [35]. For instance, the authors in [36] demonstrate the practicality and efficiency of the Supersingular Isogeny Diffie-Hellman (SIDH) key exchange on 64-bit ARM architectures. SIDH is a cryptographic key exchange protocol that relies on supersingular isogenies, a concept derived from elliptic curve theory. Moving on, Anastasova et al. [37] explore the fast strategies for the implementation of Supersingular Isogeny Key Encapsulation (SIKE) Round 3 on ARM Cortex-M4, showcasing optimized techniques. Additionally, [38] discusses error detection architectures for Ring Polynomial Multiplication and Modular Reduction of Ring-LWE, providing valuable insights into ASIC implementations. These works collectively contribute to the advancement of cryptographic implementations on resource-constrained platforms and are crucial in the context of secure and reliable systems. Besides, [39] investigates hardware accelerators that are specifically designed to improve the efficiency of digital signature operations utilizing the Ed25519 algorithm. Ed25519 is a widely employed digital signature algorithm that relies on the elliptic curve Curve25519.
Moving on, to acknowledge the significance of lightweight cryptography (LWC) and building blocks in low-energy and low-power implementations, many studies have been proposed in the literature. For instance, [40] presents low-complexity superserial architectures for dual basis (DB) multiplication over GF(2m) to achieve lightweight cryptographic algorithms. It is the first time such a multiplier is proposed in open literature. Moving forward, [41] explores cryptographic architectures' reliability in providing security properties to sensitive usage models. It considers two underlying block ciphers suitable for authenticated encryption algorithms: the Advanced Encryption Standard type and Feistel network structure. In the same direction, [42] discusses augmenting block ciphers' confidentiality with authentication using the standardized Galois Counter Mode (GCM). Existing GCM error detection methods are either limited to specific architectures or ineffective against biased faults.
While FPGA is a suitable platform for implementing cryptographic algorithms, there are several challenges associated with implementing optimal Ate pairing on FPGA. Some of these challenges include (i) high computational complexity due to the fact that optimal Ate pairing involves complex mathematical operations; (ii) lightweight cryptography [40] poses resource constraints on FPGAs, necessitating the optimization of optimal Ate pairing to efficiently utilize logic gates, memory, and power. Techniques like algorithmic optimization, parallelization, and hardware-specific optimization can enable faster and more efficient FPGA implementations [43]; (iii) high power consumption as optimal Ate pairing requires a large number of clock cycles to execute, which increases the power consumption of the FPGA [44]; and (iv) design complexity which is due to the requirement of a thorough understanding of the mathematical operations involved, as well as the hardware design and implementation [45]. (v) FPGA implementations are vulnerable to physical attacks due to the hardware's inherent properties. Techniques such as resistance to power analysis, and secure key storage must be employed to mitigate vulnerabilities related to side-channel attacks [46]. The assessment of combined attacks requires a deep understanding of potential vulnerabilities in FPGA designs, the detection mechanisms employed by attackers, and techniques for analyzing power consumption. Implementation of specific countermeasures is possible, including the utilization of error detection and correction techniques to identify and mitigate injected faults. Furthermore, reducing information leakage through masking techniques and continuously monitoring power consumption to detect anomalies can be employed.
In this paper, we propose three different methods for implementing optimal Ate pairing on BN-curves with 128-bit security using the Virtex-5 circuit. Our first method is a full software implementation on an FPGA with a MicroBlaze processor, offering high flexibility. Our second approach integrates an intellectual property (IP) core written in VHDL into the MicroBlaze, offering a balance of flexibility, area, and speed. The third method builds upon the second by utilizing parallelism for enhanced computation speed. Our work adds to the existing literature on FPGA-based pairing implementations by providing flexible solutions that support various pairing methods and parameters, such as Montgomery modular multiplication, the Karatsuba method, and the addition chain method. The goal is to minimize resource consumption while
maintaining reasonable execution times by combining a mixed software and hardware approach and utilizing parallelism. Overall, the main contributions of this paper are summarized as follows:
* Proposing three methods for implementing optimal Ate pairing on BN-curves with 128-bit security using Virtex-5 circuit by (i) using full software implementation on FPGA with MicroBlaze processor, offering high flexibility; (ii) introducing IP core written in VHDL integrated into MicroBlaze, offering balance of flexibility, area, and speed; and (iii) building on second method by utilizing parallelism for enhanced computation speed.
* Adding to existing literature on FPGA-based pairing implementations.
* Providing flexible solutions that support various pairing methods and parameters (Montgomery modular multiplication, the Karatsuba method, and the addition chain method).
* Helping minimize resource consumption while maintaining reasonable execution times through a mixed software and hardware approach and the utilization of parallelism.
The remainder of this paper is organized as follows. Section 2 presents an overview of optimal Ate pairing on BN curves and the relevant parameters. Section 3 covers the IP cores made using VHDL. Section 4 details three methods for embedding optimal Ate pairing on FPGA. In Section 5, our implementation results are evaluated and compared to previous studies. Finally, Section 6 concludes the research findings.
## II Optimal Ate Pairing over BN-Curves
A pairing function, denoted as \(e(P,Q)\), maps two points, \(P\) and \(Q\), on an elliptic curve \(E\) to an element in an extension field \(F_{p^{12}}\) for two cyclic additive groups \(G_{1}\) and \(G_{2}\) and a multiplicative group \(G_{3}\). It is required to possess the properties of bilinearity and non-degeneracy. One of the most useful properties derived from bilinearity is: for \(P\in G_{1}\) and \(Q\in G_{2}\), we have:
\[\forall j\in\mathbb{N}:e([j]P,Q)=e(P,Q)^{j}=e(P,[j]Q) \tag{1}\]
As stated in [21], Barreto-Naehrig introduced a technique for creating pairing-friendly elliptic curves that are defined over a prime field \(F_{p}\). These curves, known as ordinary elliptic curves, are crucial for achieving a 128-bit level of security and for efficient pairing computation. They are defined by the following equation:
\[E:y^{2}=x^{3}+b\ \ where\ b\neq 0 \tag{2}\]
The embedding degree for BN-curves is 12. Additionally, the prime field characteristic, \(p\), the group order, \(r\), and the trace of Frobenius, \(t_{r}\) of these curves are determined by the following:
\[\begin{split} p(t)&=36t^{4}+36t^{3}+24t^{2}+6t+1\\ r(t)&=36t^{4}+36t^{3}+18t^{2}+6t+1\\ t_{r}(t)&=6t^{2}+1,\ where\ t\in\mathbb{Z}\end{split} \tag{3}\]
The choice of parameters plays a crucial role in the security and efficiency of the pairing function. The variable \(t\) is chosen so that both \(p\) and \(r\) are prime numbers. Furthermore, it is important to select a large enough value of \(t\) in order to attain a higher level of security. According to the recommendations of the National Institute of Standards and Technology (NIST) [47], for a security level similar to AES 128 bits, \(t\) should be such that \(\log_{2}(r(t))\geq 256\) and \(3000\leq k\cdot\log_{2}(p(t))\leq 5000\), which leads to \(t\) having roughly 64 bits.
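As an illustration of these constraints, the parameter polynomials of Eq. (3) can be instantiated directly. The following is a minimal Python sketch (for exposition only, not part of the proposed implementation), assuming sympy is available for the primality tests; it uses the value \(t=2^{62}-2^{54}+2^{44}\) adopted later in Section V:

```python
from sympy import isprime  # assumption: sympy is installed

t = 2**62 - 2**54 + 2**44                    # BN parameter chosen in Section V

p  = 36*t**4 + 36*t**3 + 24*t**2 + 6*t + 1   # field characteristic p(t)
r  = 36*t**4 + 36*t**3 + 18*t**2 + 6*t + 1   # group order r(t)
tr = 6*t**2 + 1                              # trace of Frobenius t_r(t)

assert isprime(p) and isprime(r)             # both must be prime for a BN curve
assert p + 1 - tr == r                       # Hasse relation: #E(F_p) = p + 1 - t_r
assert 3000 <= 12 * p.bit_length() <= 5000   # the k*log2(p) window, with k = 12
print(t.bit_length(), p.bit_length())        # 62-bit |t| (63 signed bits), ~254-bit p
```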
The notation \(E[r]\) represents the \(r\)-torsion subgroup of \(E\), and \(\pi_{p}\) is the Frobenius endomorphism that maps \(E\) to \(E\), defined as \(\pi_{p}(x,y)=(x^{p},y^{p})\). We define \(G_{1}\) as \(E(F_{p})\), \(G_{2}\) as a subset of \(E(F_{p^{12}})\), and \(G_{3}\) as \(\mu_{r}\) which is part of \(F_{p^{12}}^{*}\). The optimal Ate pairing on BN-curves can be represented by the following mapping:
\[\begin{split} e_{opt}:G_{2}\times G_{1}&\to G_{3}\\ (Q,P)&\mapsto\big(f_{s,Q}(P)\cdot l_{[s]Q,\pi_{p}(Q)}(P)\cdot l_{[s]Q+\pi_{p}(Q),-\pi_{p}^{2}(Q)}(P)\big)^{\frac{p^{12}-1}{r}}\end{split} \tag{4}\]
The optimal Ate pairing algorithm, as described in [22], is outlined in Algorithm 1. Using the non-adjacent form (NAF representation), the algorithm has three main steps. The Miller Loop, computed in lines 3-11, generates the value of \(f_{s,Q}(P)\). Point additions with the Frobenius map of point \(Q\) are calculated in lines 12-14, and the final exponentiation is performed in line 15. Note that in this algorithm, \(s\) is defined as \(6t+2\).
```
Data: P ∈ G1 and Q ∈ G2
Result: e_opt(Q,P)
 1: write s = 6t+2 as s = Σ_{i=0}^{L-1} s_i 2^i, where s_i ∈ {-1,0,1}; L = bitlength(s)
 2: T ← Q; f ← 1
 3: for i ← L-2 downto 0 do
 4:     f ← f^2 · l_{T,T}(P);  T ← 2T
 5:     if s_i = -1 then
 6:         f ← f · l_{T,-Q}(P);  T ← T - Q
 7:     end
 8:     if s_i = 1 then
 9:         f ← f · l_{T,Q}(P);  T ← T + Q
10:     end
11: end
12: Q1 ← π_p(Q);  Q2 ← π_{p^2}(Q)
13: f ← f · l_{T,Q1}(P);  T ← T + Q1
14: f ← f · l_{T,-Q2}(P);  T ← T - Q2
15: f ← f^{(p^{12}-1)/r}
16: return f
```
**Algorithm 1** Optimal Ate pairing over BN-curves
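Algorithm 1 consumes \(s=6t+2\) in signed-digit (NAF) form, which lowers the Hamming weight of the loop. For reference, a minimal Python sketch of the standard NAF recoding (our illustration; the function name naf is not from the paper):

```python
def naf(s):
    # non-adjacent form: signed digits in {-1, 0, 1}, least significant
    # first; no two adjacent digits are non-zero
    digits = []
    while s > 0:
        if s & 1:
            d = 2 - (s % 4)   # +1 if s = 1 (mod 4), -1 if s = 3 (mod 4)
            s -= d
        else:
            d = 0
        digits.append(d)
        s >>= 1
    return digits

t = 2**62 - 2**54 + 2**44
s = 6*t + 2
assert sum(d * 2**i for i, d in enumerate(naf(s))) == s
print(len(naf(s)))   # length of the signed-digit expansion of s
```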
The key operations utilized in the optimal Ate pairing algorithm, as detailed in [22], include: doubling and addition steps (occurring on lines 4, 6, 9, 13 and 14); sparse multiplication as outlined in [48] (on lines 4, 6, 9, 13 and 14), which is a multiplication in \(F_{p^{12}}\) where the second operand has half of its coefficients equal to zero; the Frobenius operation (on line 12); squaring in the cyclotomic subgroup \(G_{\phi 6}(F_{p^{12}})\) (on line 15); and the Final Exponentiation (on line 15). The doubling and addition steps are executed in \(F_{p^{2}}\), while most of the other operations are performed in \(F_{p^{12}}\).
In order to efficiently perform extended field operations in \(F_{p^{12}}\), advanced techniques can be used to construct the arithmetic step by step in smaller extension fields, using irreducible binomials of the form \(X^{k}-\beta\); a tower of extensions of degree 2 and 3 can be utilized, similar to the method presented in [49].
\[\begin{array}{l}F_{p^{2}}=F_{p}[\mu]/(\mu^{2}-\beta),\ \text{where}\ \beta=-5\\ F_{p^{6}}=F_{p^{2}}[\nu]/(\nu^{3}-\xi),\ \text{where}\ \xi=\mu\\ F_{p^{12}}=F_{p^{6}}[\omega]/(\omega^{2}-\nu)\end{array} \tag{5}\]
A technique for representing elements of the field \(F_{p^{12}}\) using a combination of smaller extensions can be used to improve the speed of pairing. This method, called a "towering scheme," expresses an element \(f\) as \(f=g+h\omega\), where \(g,h\in F_{p^{6}}\). The element \(g\) can be further broken down into \(g_{0}+g_{1}\nu+g_{2}\nu^{2}\), and the same is done for \(h\), where \(g_{i}\), \(h_{i}\in F_{p^{2}}\) for \(i=0,1,2\). This approach, as outlined in [49], allows for a faster computation of pairing.
### _Miller Loop_
The Miller algorithm, as found in popular pairings such as Weil, Tate, Ate and optimal Ate pairing [50], is used to construct a rational function \(f_{r,P}\) associated with a point \(P\) on an elliptic curve \(E\), which is evaluated at another point \(Q\). This is achieved through an iterative process using the double and addition method, and the function \(f_{r,P}\) is defined by its divisor.
\[Div(f_{r,P})=r(P)-([r]P)-(r-1)(P_{\infty}) \tag{6}\]
where \(r\) is an integer and \(P_{\infty}\) denotes the point at infinity. The function is calculated by utilizing Miller's equality.
\[f_{i+j,P}=f_{i,P}\cdot f_{j,P}\cdot\frac{l_{[i]P,[j]P}}{v_{[i+j]P}} \tag{7}\]
where \(l_{[i]P,[j]P}\) is the line passing through \([i]P\) and \([j]P\), and \(v_{[i+j]P}\) is the vertical to \(E\) at \([i+j]P\). The performance of the Miller Loop is affected by the number of bits in the exponent, as well as its Hamming weight.
#### III-A1 Doubling and tangent equations
The formulas for \(T=2Q=(X_{T},Y_{T},Z_{T})\) in Jacobian coordinates are defined as follows:

\[\begin{array}{l}X_{R}=9X_{T}^{4}-8X_{T}Y_{T}^{2}\\ Y_{R}=3X_{T}^{2}(4X_{T}Y_{T}^{2}-X_{R})-8Y_{T}^{4}\\ Z_{R}=2Y_{T}Z_{T}\end{array} \tag{8}\]
To find the tangent line equation at \(T\) when a point \(P=(x_{p},y_{p})\) in \(E(F_{p})\) is given in affine coordinates, the following calculation can be performed:
\[l_{T,T}(P)=(4Z_{R}Z_{T}^{2}y_{p})-(6X_{T}^{2}Z_{T}^{2}x_{p})\omega+(6X_{T}^{3 }-4Y_{T}^{2})\omega^{2}\in F_{p^{12}} \tag{9}\]
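These doubling formulas can be sanity-checked numerically. The sketch below (ours, over a toy prime rather than the BN prime; the algebra is identical) doubles a point in Jacobian coordinates with Eq. (8) and compares the result against textbook affine doubling on \(E:y^{2}=x^{3}+5\):

```python
p, b = 1009, 5   # toy prime field, same curve shape E: y^2 = x^3 + b

def affine_double(x, y):
    lam = 3 * x * x * pow(2 * y, -1, p) % p      # tangent slope
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return x3, y3

def jacobian_double(X, Y, Z):                    # Eq. (8)
    XR = (9 * X**4 - 8 * X * Y**2) % p
    YR = (3 * X**2 * (4 * X * Y**2 - XR) - 8 * Y**4) % p
    ZR = 2 * Y * Z % p
    return XR, YR, ZR

# brute-force a curve point, then compare the two doublings
x = next(x for x in range(2, p) if pow(x**3 + b, (p - 1) // 2, p) == 1)
y = next(y for y in range(1, p) if y * y % p == (x**3 + b) % p)
XR, YR, ZR = jacobian_double(x, y, 1)
zi = pow(ZR, -1, p)
assert (XR * zi**2 % p, YR * zi**3 % p) == affine_double(x, y)
```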
#### III-A2 Addition and line equations
The formulas for addition \(R=T+Q=(X_{R},Y_{R},Z_{R})\) are defined as follows:

\[\begin{array}{l}X_{R}=(2Y_{Q}Z_{T}^{3}-2Y_{T})^{2}-4(X_{Q}Z_{T}^{2}-X_{T})^{3}-8(X_{Q}Z_{T}^{2}-X_{T})^{2}X_{T}\\ Y_{R}=(2Y_{Q}Z_{T}^{3}-2Y_{T})\,(4(X_{Q}Z_{T}^{2}-X_{T})^{2}X_{T}-X_{R})-8Y_{T}(X_{Q}Z_{T}^{2}-X_{T})^{3}\\ Z_{R}=2Z_{T}(X_{Q}Z_{T}^{2}-X_{T})\end{array} \tag{10}\]
The equation of the line passing through \(T\) and \(Q\) when evaluated at point \(P\) is:

\[\begin{array}{l}l_{T,Q}(P)=(4Z_{T}(X_{Q}Z_{T}^{2}-X_{T})y_{p})-(4x_{p}(Y_{Q}Z_{T}^{3}-Y_{T}))\omega\\ \qquad+(4X_{Q}(Y_{Q}Z_{T}^{3}-Y_{T})-4Y_{Q}Z_{T}(X_{Q}Z_{T}^{2}-X_{T}))\omega^{2}\in F_{p^{12}}\end{array} \tag{11}\]
After the Miller Loop has been completed, an additional step known as the Final Exponentiation must be performed. This step involves raising the result of the Miller Loop to the power \(\frac{p^{k}-1}{r}\).
### _Final Exponentiation_
Several techniques can be employed to perform the Final Exponentiation step in algorithm 1. The traditional approach is to use the square and multiply method, however, this method can be time-consuming as the exponent \(e=\frac{p^{k}-1}{r}\) is large. To reduce computation time, the exponent can be broken down into smaller components.
\[e=\frac{p^{12}-1}{r}=(p^{6}-1).(p^{2}+1).\frac{p^{4}-p^{2}+1}{r} \tag{12}\]
To calculate the first part \(f^{(p^{6}-1)(p^{2}+1)}\in F_{p^{12}}\), which is the easy part, we can use simple conjugation and Frobenius operations to raise \(f\) to the power \(p^{6}\) and \(p^{2}\), respectively. This results in an element of the cyclotomic subgroup \(G_{\phi 6}(F_{p^{2}})\). There are various methods available in the literature for calculating the hard part of the Final Exponentiation. One such method is the approach proposed by Scott et al. in 2009 [23], which is based on addition chains. This method simplifies computations by keeping all elements involved within the cyclotomic subgroup \(G_{\phi 6}(F_{p^{2}})\), reducing the number of operations required for \(f^{2}\) computations [51], and allowing inversions to be performed as a simple conjugation [48].
The addition chain method utilizes the polynomial representation of \(p\) and \(r\) in \(t\) to effectively decompose the hard part of the Final Exponentiation. This method involves a clever procedure that involves the computation of ten intermediate values, as follows:
\[f^{t},\ f^{t^{2}},\ f^{t^{3}},\ f^{p},\ f^{p^{2}},\ f^{p^{3}},\ f^{(tp)},\ f^{(t^{2}p)},\ f^{(t^{3}p)},\ f^{(t^{2}p^{2})} \tag{13}\]
These crucial components are employed to build a chain of multiplications, the evaluation of which results in the Final Exponentiation \(f^{e}\), through the implementation of the following equation:
\[f^{\frac{p^{4}-p^{2}+1}{r}}=[f^{p}\cdot f^{p^{2}}\cdot f^{p^{3}}]\cdot\Big[\frac{1}{f}\Big]^{2}\cdot\big[(f^{t^{2}})^{p^{2}}\big]^{6}\cdot\Big[\frac{1}{(f^{t})^{p}}\Big]^{12}\cdot\Big[\frac{1}{f^{t}\cdot(f^{t^{2}})^{p}}\Big]^{18}\cdot\Big[\frac{1}{f^{t^{2}}}\Big]^{30}\cdot\Big[\frac{1}{f^{t^{3}}\cdot(f^{t^{3}})^{p}}\Big]^{36} \tag{14}\]
To raise an element to the power \(p\), we can compute it by applying the Frobenius operation. Additionally, to raise an element to the power \(t\), which can be time-consuming, we can use the square and multiply method. Lastly, we can use Fermat's little theorem to perform modular inversion in \(F_{p}\) by using this equation:
\[A^{-1}\equiv A^{p-2}\pmod{p} \tag{15}\]
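In code, this inversion is a single modular exponentiation carried out by square-and-multiply. A minimal Python sketch (the helper name fp_inv is ours):

```python
def fp_inv(a, p):
    # Eq. (15): a^{-1} = a^{p-2} mod p, via square-and-multiply
    # (Python's built-in three-argument pow does exactly this)
    return pow(a, p - 2, p)

t = 2**62 - 2**54 + 2**44
p = 36*t**4 + 36*t**3 + 24*t**2 + 6*t + 1
a = 123456789
assert a * fp_inv(a, p) % p == 1
```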
## III IP Cores on FPGA
The costs of each operation required to compute the optimal Ate pairing, as presented in this work, are outlined in Table I. The table includes notations such as \(\{a,m,s,i:F_{p}\}\) and \(\{a_{2},m_{2},s_{2},i_{2}:F_{p^{2}}\}\) for operations such as modular addition, subtraction, multiplication, squaring and inversion, as well as \(m_{\beta}\) for multiplication by a constant in \(F_{p}\). Many pairing functions rely on the Miller Loop and Final Exponentiation, which necessitate arithmetic operations in \(F_{p^{k}}\).
In this work, we have proposed a technique to perform mathematical operations in the fields \(F_{p^{6}}\) and \(F_{p^{12}}\) using arithmetic in the fields \(F_{p}\) and \(F_{p^{2}}\) as outlined in Table I. This method enables us to avoid the challenge of routing where operations in \(F_{p^{6}}\) and \(F_{p^{12}}\) are implemented in hardware. Our approach is intended to minimize resource consumption and to increase system flexibility by working in \(F_{p}\) and \(F_{p^{2}}\). Additionally, we have developed modular operations in both fields, \(F_{p}\) and \(F_{p^{2}}\), as VHDL IP cores, which are controlled by MicroBlaze(s). Furthermore, any curves that require arithmetic in \(F_{p}\) and \(F_{p^{2}}\) can utilize these IP cores by configuring only the software aspect.
### _MMM Core_
The multiplication operation in the base field \(F_{p}\) is a crucial step in computing a cryptographic pairing. There are various methods that can be used to perform this operation. In this paper, we utilize the Montgomery modular multiplication (MMM) algorithm, which is an efficient technique for performing modular multiplication. This algorithm eliminates the need for division by converting modulus reduction into a series of additions and right shifts. The MMM algorithm based on High Radix-\(r\) (\(r=2^{n}\)) is defined by the following expression:
\[S_{e}=Mont(A,B)=(A\times B\times R^{-1})\ mod\ p\]
\(R\) is the Montgomery constant. Algorithm 2 illustrates the Montgomery modular multiplication in Radix-\(2^{32}\) as presented in [52]. It is composed of two nested loops \((i)\) and \((j)\). The outer loop \((i)\) is used to calculate the \(q_{i}\) digits. The inner loop \((j)\) incorporates the digits \(B[j]\) and \(p[j]\) to compute the digits of the intermediate result \(S[j-1]\). The final output \(S_{e}\) is obtained when \(i=j=e\).
For practical use, each operand must be converted to its Montgomery form, adding an extra modular multiplication step due to the \(R^{-1}\) factor needed for each multiplication. But in pairing computation, where multiple multiplications occur, the operands only need to be converted once at the start and then back at the end.
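For exposition, the word-serial Montgomery multiplication of Algorithm 2 can be modeled in a few lines of Python (a functional sketch of the arithmetic only, not the VHDL core; mont_mul is our name, and Python 3.8+ is assumed for modular inverses via pow):

```python
W = 32                                          # radix 2^32, as in Algorithm 2
MASK = (1 << W) - 1

def mont_mul(A, B, p, e):
    # returns A * B * R^{-1} mod p with R = 2^(W*e); A is consumed one
    # 32-bit digit a_i at a time, and q_i cancels the low word of S
    p_inv = (-pow(p, -1, 1 << W)) % (1 << W)    # -p^{-1} mod 2^32
    S = 0
    for i in range(e):
        a_i = (A >> (W * i)) & MASK
        S += a_i * B
        q_i = ((S & MASK) * p_inv) & MASK
        S = (S + q_i * p) >> W                  # exact division by 2^32
    return S - p if S >= p else S

t = 2**62 - 2**54 + 2**44
p = 36*t**4 + 36*t**3 + 24*t**2 + 6*t + 1
e = -(-p.bit_length() // W)                     # number of 32-bit digits
R = 1 << (W * e)
A, B = 0x1234567 % p, 0x89ABCDE % p
Am, Bm = A * R % p, B * R % p                   # one-time conversion to Montgomery form
assert mont_mul(Am, Bm, p, e) == A * B * R % p  # the product stays in Montgomery form
```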
Our hardware implementation of the Montgomery modular multiplication (MMM) is shown in Figure 1. It follows the operations defined in algorithm 2. The architecture features two 32-by-32 bit multipliers (Mul1 and Mul2), four carry-propagate adders (Add1, Add2, Add3 and Add4), four registers (Reg1, Reg2, Reg3, Reg4), four D Flip-flops, two multiplexers (Mux1, Mux2), and one block register. The inputs \(A\), \(B\), and \(p\) are stored in memory, and the algorithm's intermediate results \(S[j]_{i}\) are stored in the block register as a queue. The MMM core is controlled by four signals: \(Ctr\_Mux\), \(Ctr\_q_{i}\), \(Ctr\_c1\_c2\), and \(Ctr\_c3\_c4\).
In our implementation of the Montgomery modular multiplication (MMM), we employ the steps in Algorithm 2 and the hardware architecture depicted in Figure 1. This architecture encompasses components like multipliers (Mul1 and Mul2), adders, registers, D Flip-flops, multiplexers, and a block register. The execution of the MMM involves storing the operands \(A\), \(B\), and \(p\) in memory, and the intermediate results \(S[j]_{i}\) are temporarily stored in the block register as a queue. The MMM process occurs in three stages: First, the digit \(q_{i}\) is computed and kept in Reg3, which is managed by the signal \(Ctr\_q_{i}\). Then, the multiplications outlined in lines 8, 9 and 10, 11 of Algorithm 2 are performed, enabling the computation of the digits \(H[j]_{i}\) and \(S[j]_{i}\). Note that the multiplier Mul2 is shared between the multiplications of lines 6 and 10 in Algorithm 2.
### _KARATSUBA Core_
The arithmetic operations in \(F_{p^{2}}\), including modular addition, subtraction, multiplication, squaring, multiplication by a constant, reduction and inversion, are represented by two numbers in \(F_{p}\). The traditional method of performing modular multiplication in \(F_{p^{2}}\), as outlined in algorithm 3, requires a minimum of four multiplications and five additions/subtractions in \(F_{p}\). It can be optimized through parallel computation, which reduces the number of required operations to two multiplications and two additions/subtractions in \(F_{p}\); this optimization, however, comes at the cost of duplicating the required area.
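To make the operation counts concrete, the Karatsuba product and complex-method squaring in \(F_{p^{2}}=F_{p}[\mu]/(\mu^{2}-\beta)\) with \(\beta=-5\) (Eq. (5)) can be sketched in Python as follows (our model for exposition only; elements are pairs \((a_{0},a_{1})\), and for an actual field \(\beta\) must be a quadratic non-residue modulo the BN prime):

```python
BETA = -5   # F_{p^2} = F_p[u]/(u^2 - beta), as in Eq. (5)

def fp2_mul(a, b, p):
    # Karatsuba: 3 multiplications in F_p instead of 4
    a0, a1 = a
    b0, b1 = b
    v0 = a0 * b0 % p
    v1 = a1 * b1 % p
    c0 = (v0 + BETA * v1) % p                   # a0*b0 + beta*a1*b1
    c1 = ((a0 + a1) * (b0 + b1) - v0 - v1) % p  # a0*b1 + a1*b0
    return (c0, c1)

def fp2_sqr(a, p):
    # "complex" squaring: 2 multiplications in F_p
    a0, a1 = a
    v0 = a0 * a1 % p
    c0 = ((a0 + a1) * (a0 + BETA * a1) - v0 - BETA * v0) % p  # a0^2 + beta*a1^2
    c1 = 2 * v0 % p                                           # 2*a0*a1
    return (c0, c1)

p = 10**9 + 7   # toy prime for a quick self-check of the formulas
a, b = (3, 4), (5, 6)
assert fp2_mul(a, b, p) == ((3*5 + BETA*4*6) % p, (3*6 + 4*5) % p)
assert fp2_sqr(a, p) == fp2_mul(a, a, p)
```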
In this work, the KARATSUBA IP core is introduced, which facilitates the performance of various modular operations in \(F_{p^{2}}\), including multiplication, squaring, constant multiplication, and reduction. Furthermore, it can be employed
to execute Montgomery modular multiplication in \(F_{p}\). The design of KARATSUBA is shown in Figure 2 and involves five stages, controlled by a control circuit that selects the appropriate IPs for each stage. The core includes the MMM and ADD/SUB IP VHDL cores. This later is used for modular addition/subtraction in \(F_{p}\).
The KARATSUBA IP enhances the efficiency of modular operations in \(F_{p^{2}}\). All of these operations can be executed by a single IP, resulting in reduced computational resources. The KARATSUBA IP leverages the Karatsuba algorithm, a fast multiplication technique, to minimize the number of elementary multiplications compared to traditional methods. Moreover, it improves the execution time compared to purely software implementations. In particular, the computation cost of modular multiplication in \(F_{p^{2}}\) was initially 53942 cycles with a pure software implementation on the MicroBlaze, but it reduced significantly to only 1240 cycles with the use of the KARATSUBA IP. Table II presents the FPGA-based hardware results of the KARATSUBA IP.
| **Optimal Ate** | **Add/sub** | **Multiplication** | **Squaring** | **Inversion** |
|---|---|---|---|---|
| \(F_{p}\) | \(a\) | \(m\) | \(s\) | \(i\) |
| \(F_{p^{2}}\) | \(a_{2}=2a\) | \(m_{2}=3m+m_{\beta}+5a\) | \(s_{2}=2m+2m_{\beta}+5a\) | \(i_{2}=4m+m_{\beta}+2a+i\) |
| \(F_{p^{6}}\) | \(3a_{2}\) | \(6m_{2}+2m_{\beta}+15a_{2}\) | \(2m_{2}+3s_{2}+2m_{\beta}+10a_{2}\) | \(9m_{2}+3s_{2}+4m_{\beta}+5a_{2}+i_{2}\) |
| \(F_{p^{12}}\) | \(6a_{2}\) | \(18m_{2}+7m_{\beta}+60a_{2}\) | \(12m_{2}+6m_{\beta}+45a_{2}\) | \(25m_{2}+9s_{2}+13m_{\beta}+61a_{2}+i_{2}\) |
| \(G_{\phi 6}(F_{p^{2}})\) | \(6a_{2}\) | \(18m_{2}+7m_{\beta}+60a_{2}\) | \(6m_{2}+6m_{\beta}+39a_{2}\) | Conjugation |
| **Sparse multiplication** | \(13m_{2}+3m_{\beta}+28a_{2}\) | | | |
| **Doubling and tangent line step** | \(3m_{2}+8s_{2}+25a_{2}+4m\) | | | |
| **Addition and line step** | \(7m_{2}+8s_{2}+25a_{2}+4m\) | | | |

TABLE I: The cost of computing optimal Ate pairing operations
Fig. 1: Design of Montgomery Modular Multiplication on an FPGA
## IV Proposed architectures for optimal Ate pairing
In this research, we propose three different designs for implementing the optimal Ate pairing algorithm as an embedded system on an FPGA. We will now describe these hardware architectures in detail.
### _Single MicroBlaze-based software implementation_
In this approach, a fully software-based implementation of optimal Ate on a Genesys board is presented as a pioneering solution. The method involves storing all the required functions and operations for computing Optimal Ate in memory (BRAM) and executing them sequentially using a MicroBlaze processor.
The MicroBlaze is a 32-bit RISC soft processor designed by Xilinx for embedded systems, and can be implemented on various development boards from Xilinx or their partners. It offers fundamental operations like addition, subtraction, and multiplication. To achieve optimal performance in Ate pairing, all operands are represented in 32-bit packets.
The optimal Ate pairing is executed through a C program on the MicroBlaze using SDK tools. The software architecture is structured into four levels as depicted in Figure 3. The top level encompasses the pairing function, followed by the second level that focuses on the Miller algorithm and Final Exponentiation. The third level consists of the Doubling step, Addition step, and the Frobenius function. Finally, the fourth level encompasses the Frobenius operations, arithmetic operations in finite fields (such as addition, subtraction, multiplication, and division), and exponentiation in fields \(F_{p}\), \(F_{p^{2}}\), \(F_{p^{6}}\), and \(F_{p^{12}}\).
Most operations are performed in the quadratic \((F_{p^{2}})\) and cubic \((F_{p^{3}})\) extended fields within the towering scheme \(F_{((p^{2})^{3})^{2}}\). According to [53], there are several techniques available for multiplication and squaring in such extended fields. In particular, the Karatsuba approach is utilized for multiplication and the complex method is implemented for squaring in \(F_{p^{2}}\). Additionally, the Karatsuba method is applied for both multiplication and squaring in \(F_{p^{3}}\).
The hardware architecture for executing the optimal Ate pairing on a Virtex-5 circuit with a MicroBlaze processor is illustrated in Figure 4. The design encompasses a MicroBlaze processor, Block Random Access Memory (BRAM), Local Memory Buses (ILMB, DLMB) to organize the BRAM, a Timer for timing the execution, and a Universal Asynchronous Receiver Transmitter (UART) to communicate input and output data with the serial port.
### _Single MicroBlaze-based SW/HW implementation_
The second approach in this work involves a combination of software and hardware design for optimal Ate pairing on BN-curves using a Virtex-5 circuit. To enhance performance, an accelerator IP core was integrated into the design and implemented in conjunction with the MicroBlaze processor. This approach aims to improve the overall execution time compared to the initial one. In particular, the computation cost of modular multiplication in \(F_{p}\) was initially 12968 cycles with a pure software implementation on the MicroBlaze, but it reduced significantly to only 475 cycles with the use of the MMM IP.
| **IP Core** | **Slices** | **DSP** | **BRAM** | **Cycles** |
|---|---|---|---|---|
| ADD/SUB | 487 | 6 | 7 | 10 |
| MMM | 495 | 8 | 3 | 130 |
| KARATSUBA | 982 | 14 | 10 | 550 |

TABLE II: The hardware results of KARATSUBA
Fig. 2: Hardware architecture of KARATSUBA on FPGA
The first design based on this approach uses our MMM core to perform all the necessary modular multiplication operations, which can be significant in number for pairings defined on 256-bit BN-curves, as shown in reference [54]. The hardware architecture for this approach is illustrated in Figure 5.
In the second design of the hardware/software approach for optimal Ate pairing on BN-curves, a KARATSUBA core is utilized in conjunction with the MicroBlaze processor to perform all necessary operations in the fields \(F_{p}\) and \(F_{p^{2}}\). The architecture of this embedded system is depicted in Figure 6.
The partitioning method suggested in this work combines both software and hardware elements, resulting in improved execution speed and increased flexibility in the design of the embedded system. The higher level functions are implemented in software, while the lower level functions are executed by specialized IP cores. However, it is important to note that the transfer time of data between the MicroBlaze and the IP core also plays a significant role in determining the overall execution time.
The overall structure of how our IP cores are integrated with the MicroBlaze processor is depicted in Figure 7. The IP cores are connected to the MicroBlaze through the use of the Xilinx PLB Bus, which facilitates the exchange of data and instructions between the two components. The design includes the Xilinx Intellectual Property InterFace (IPIF) and User Logic blocks, which communicate with each other through a standard interface called IP InterConnect (IPIC).
Fig. 4: Hardware architecture of Mb software approach
Fig. 5: Hardware architecture of Mb/MMM approach
Fig. 3: Optimal Ate pairing implementation hierarchy
The IPIF interface decodes the PLB system bus communication protocol and contains three registers: Ins_reg, DataIn_reg, and DataOut_reg. The MicroBlaze sends instruction codes via the Ins_reg instruction register. The User_Logic block implements the circuit logic and includes three units: the Memory Unit (MU), the Control Unit (CU), and the IP core. The CU retrieves instructions from Ins_reg and manages the MU and the IP core.
### _Dual MicroBlaze-based SW/HW implementation_
Optimal Ate pairing demonstrates parallelism at various levels, ranging from functions in \(F_{p}\) to higher levels in \(F_{p^{12}}\). As we move from lower to higher levels, a significant level of parallelism becomes evident, providing the impetus for exploring and developing diverse architecture configurations. These configurations involve variations in hardware components, including the number of MicroBlaze processors and KARATSUBA IPs employed.
In our study, we have investigated multiple software/hardware architecture configurations for the implementation of optimal Ate pairing. This analysis enables us to evaluate the performance and hardware resources utilized in each configuration, aiding in the identification of the most resource-efficient option while maintaining reasonable execution time. Several architectures can be explored and developed, such as: 1MB/2KARATSUBA, 1MB/3KARATSUBA, 2MB/1KARATSUBA, 2MB/2KARATSUBA, 3MB/1KARATSUBA, 3MB/2KARATSUBA, 3MB/3KARATSUBA, and more.
The third approach, in this work, focuses on utilizing the inherent parallelism of key operations, which include modular multiplication in \(F_{p^{6}}\) and \(F_{p^{12}}\), sparse multiplication, squaring in the cyclotomic subgroup \(G_{\phi 6}(F_{p^{2}})\), as well as doubling and addition steps. Moreover, parallelism becomes crucial when executing frequently repeated key operations for calculating optimal Ate. For example, Algorithm 4 shows the multiplication function in \(F_{p^{6}}\).
```
Data: A = a0 + a1·x + a2·x^2,  B = b0 + b1·x + b2·x^2
Result: C = c0 + c1·x + c2·x^2
t0 ← a0 * b0
t1 ← a1 * b1
t2 ← a2 * b2
c0 ← [((a1 + a2) * (b1 + b2)) - t1 - t2]·ξ + t0
c1 ← [((a0 + a1) * (b0 + b1)) - t0 - t1] + t2·ξ
c2 ← [((a0 + a2) * (b0 + b2)) - t0 - t2] + t1
```
**Algorithm 4** Multiplication in \(F_{p^{6}}\)
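A direct Python transcription of Algorithm 4 can be built on the fp2_mul sketch from Section III (the componentwise helpers fp2_add and fp2_sub, and the multiplication by \(\xi=\mu\), here fp2_mul_xi, are our illustrative names):

```python
def fp2_add(a, b, p):
    return ((a[0] + b[0]) % p, (a[1] + b[1]) % p)

def fp2_sub(a, b, p):
    return ((a[0] - b[0]) % p, (a[1] - b[1]) % p)

def fp2_mul_xi(a, p):
    # multiply by xi = u: (a0 + a1*u)*u = beta*a1 + a0*u, with beta = -5
    return (-5 * a[1] % p, a[0])

def fp6_mul(A, B, p):
    # Algorithm 4: six Karatsuba multiplications in F_{p^2}
    a0, a1, a2 = A
    b0, b1, b2 = B
    t0 = fp2_mul(a0, b0, p)
    t1 = fp2_mul(a1, b1, p)
    t2 = fp2_mul(a2, b2, p)
    m12 = fp2_mul(fp2_add(a1, a2, p), fp2_add(b1, b2, p), p)
    m01 = fp2_mul(fp2_add(a0, a1, p), fp2_add(b0, b1, p), p)
    m02 = fp2_mul(fp2_add(a0, a2, p), fp2_add(b0, b2, p), p)
    c0 = fp2_add(fp2_mul_xi(fp2_sub(fp2_sub(m12, t1, p), t2, p), p), t0, p)
    c1 = fp2_add(fp2_sub(fp2_sub(m01, t0, p), t1, p), fp2_mul_xi(t2, p), p)
    c2 = fp2_add(fp2_sub(fp2_sub(m02, t0, p), t2, p), t1, p)
    return (c0, c1, c2)

# quick self-check: the multiplicative identity behaves as expected
p = 10**9 + 7
one = ((1, 0), (0, 0), (0, 0))
B = ((3, 1), (4, 1), (5, 9))
assert fp6_mul(one, B, p) == B
```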
The cost of Algorithm 4 is: 6 Karatsuba + 15 add \(F_{p^{2}}\) + 2 red \(F_{p^{2}}\).
After developing and testing the various operations/functions on the Virtex-5 board, we obtained the following significant result:
**2 add soft \(F_{p^{2}}\) (1272 cycles) \(\gtrsim\) Karatsuba \(F_{p^{2}}\) (1240 cycles)**

**1 add soft \(F_{p^{2}}\) (636 cycles) \(\approx\) red \(F_{p^{2}}\) (590 cycles)**
The execution time of a single multiplication in \(F_{p^{2}}\) on the KARATSUBA core is almost the same as that of two addition operations executed in software on the MicroBlaze. Additionally, the execution time of a reduction operation in \(F_{p^{2}}\) using KARATSUBA is almost the same as that of a software addition operation on the MicroBlaze. Based on these results, Algorithm 4 is executed on the two processors, as shown in Table III.
Here \(\cdot\xi\) denotes modular reduction in \(F_{p^{2}}\), and t and r denote the FSL transfer (send and receive) times.
Now, the cost of Algorithm 4 is: Karatsuba + 14 add soft \(F_{p^{2}}\) + 21 FSL transfers.
We can clearly observe a significant improvement in execution time for the multiplication function in \(F_{p^{6}}\).
The same principle is applied to the key operations/functions in optimal Ate, such as functions in \(F_{p^{6}}\), \(F_{p^{12}}\), Doubling and Addition steps, Sparse multiplication, exponentiation, and so on.
In order to implement optimal Ate pairing in an efficient manner, a specific architecture was chosen that meets the criteria of minimal memory usage while maintaining an acceptable execution time. As shown in Figure 8, a parallel and flexible approach is proposed, utilizing two MicroBlaze processors and a KARATSUBA IP core. The processors, labeled \(MB_{0}\) and \(MB_{1}\), are connected through a high-speed FSL bus. The \(MB_{0}\) processor acts as the master, and \(MB_{1}\) acts as the slave, responsible for performing operations in \(F_{p}\) and \(F_{p^{2}}\) in conjunction with the KARATSUBA core, which is connected through a PLB bus.
Fig. 6: Hardware architecture of Mb/KARATSUBA approach
Fig. 7: The design of the hardware components for our IP cores.
The first idea involves adding KARATSUBA IPs around a single MicroBlaze processor. However, this approach has a major drawback, which is the transfer time between the MicroBlaze processor and the different IPs. To illustrate, conducting a multiplication operation in \(F_{p^{2}}\) demands a total of 1240 cycles, with 550 cycles allocated for processing and an additional 690 cycles dedicated to data transfer. Notably, the processing time nearly matches the transfer time, underscoring the crucial role of transfer time in the overall system performance. Additionally, the MicroBlaze and the IP cannot work at the same time. The MicroBlaze always waits for the results sent by the IP. Consequently, architectures with parallel processors have emerged as a more favored alternative.
To determine the distribution of tasks between the MicroBlaze and the IP core, we thoroughly examine the algorithms involved in key operations, such as multiplication in \(F_{p^{6}}\) and \(F_{p^{12}}\), squaring in \(G_{\phi 6}(F_{p^{2}})\), sparse multiplication, the doubling step, and others. Through this analysis, we can determine the specific tasks or operations allocated to each component and evaluate their respective contributions. For instance, the multiplication in \(F_{p^{6}}\) is executed with a utilization of 90.01% on the first processor and 76.82% on the second, as shown in Table III.
## V Implementation Results and Discussion
### _Implementation Results_
The design of an embedded system for computing optimal Ate pairings was created using the Xilinx Platform Studio environment and a Virtex-5 Genesys development board. The MMM and KARATSUBA components were written in VHDL and tested with ModelSim SE before being synthesized with the ISE Design Suite. The DSP48E and RAM blocks were created with the Core Generator tool, while the high-level arithmetic was developed with the C programming language in the SDK.
In order to ensure that the proposed design offers a 128-bit security level, we selected the parameter \(t=2^{62}-2^{54}+2^{44}\), as stated in [48], and the BN-curve \(E:y^{2}=x^{3}+5\). This choice results in the exponent number \(t\) and the parameter \(s=6t+2\) in the Miller Loop having a signed bit length of 63 and 65, respectively.
A comparison of the results obtained from our implementation of optimal Ate pairing with those of recent implementations based on BN-curves is presented in Table IV. The comparison takes into account execution time, hardware requirements, and design efficiency, which is computed using the following expression:
\[\mathrm{efficiency}=\frac{\mathrm{datapath\ (bits)}}{\mathrm{occupied\ area\ (slices)}\times\mathrm{execution\ time\ (s)}} \tag{16}\]
The area of the design is determined by taking into consideration the following information: it is assumed that the height of a DSP48E is equivalent to that of five configurable logic blocks (CLBs) and is also equivalent to the height of one block RAM. Each CLB is made up of four slices.
The number of BRAMs is configurable on Xilinx FPGA boards. By default, when working on a new project, the number of BRAMs is set to 18. Our initial, purely software-based implementation required us to increase this number to 32, and with the task separation and the utilization of two MicroBlaze processors it increased further to 42.
Fig. 8: Hardware architecture of 2Mb/KARATSUBA approach
| **MB0 (master)** | **Transfer FSL** | **MB1 (slave)** | **Cost** |
|---|---|---|---|
| − | \(\{a_{0},b_{0}\}\) | − | 2t |
| \(ta_{01}\gets a_{0}+a_{1}\) | − | \(t_{0}\gets a_{0}\star b_{0}\) | 2 add \(F_{p^{2}}\) |
| \(tb_{01}\gets b_{0}+b_{1}\) | \(\{a_{1},b_{1}\}\) | − | 2t |
| \(ta_{02}\gets a_{0}+a_{2}\) | − | \(t_{1}\gets a_{1}\star b_{1}\) | 2 add \(F_{p^{2}}\) |
| \(tb_{02}\gets b_{0}+b_{2}\) | \(\{a_{2},b_{2}\}\) | − | 2t |
| \(ta_{12}\gets a_{1}+a_{2}\) | − | \(t_{2}\gets a_{2}\star b_{2}\) | 2 add \(F_{p^{2}}\) |
| \(tb_{12}\gets b_{1}+b_{2}\) | \(\{ta_{12},tb_{12}\}\) | − | 2t |
| − | \(\{ta_{01},tb_{01}\}\); \(\{t_{1},t_{2}\}\) | − | 2t+3r |
| \(ta_{12}\gets ta_{12}-t_{1}\) | − | \(ta_{01}\gets ta_{01}\star tb_{01}\) | 2 add \(F_{p^{2}}\) |
| \(ta_{12}\gets ta_{12}-t_{2}\) | \(\{ta_{02},tb_{02}\}\); \(\{ta_{01},t_{0}\}\) | − | 2t+2r |
| \(ta_{01}\gets ta_{01}-t_{0}\) | − | \(ta_{02}\gets ta_{02}\star tb_{02}\) | 2 add \(F_{p^{2}}\) |
| ⋮ | ⋮ | ⋮ | ⋮ |
| \(c_{1}\gets ta_{01}+tb_{01}\) | − | \(c_{0}\gets ta_{12}+t_{0}\) | 2 add \(F_{p^{2}}\) |
| \(c_{2}\gets ta_{02}+t_{1}\) | \(\{c_{0}\}\) | − | 1r |
| 90.01% | − | 76.82% | Percentage |

TABLE III: Multiplication in \(F_{p^{6}}\) (2Mb/KARATSUBA)
The implementation of optimal Ate pairing through software running on MicroBlaze has a slower speed in comparison to other approaches. While the SW/HW design aims to optimize the execution time, it also results in an increased consumption of hardware area. The transfer time between the MicroBlaze processor and IP cores can also affect the global execution time. Taking advantage of the parallel elements in optimal Ate pairing holds potential for improving the global execution time while keeping hardware usage minimal. Table IV summarizes the evaluation of the cost associated with key functions in Optimal Ate utilizing 2Mb/KARATSUBA.
### _Discussion_
The research presented in the paper contributes to advancing the state-of-the-art in optimal Ate pairing algorithms by offering area-efficient and flexible architectures. These architectures enhance the efficiency, performance, and practicality of cryptographic pairings, opening up possibilities for secure and efficient implementations in various applications such as identity-based cryptography, attribute-based encryption, and cryptographic protocols involving pairings. The research's key implications and contributions encompass three aspects: (i) The proposal of novel architectures targeting area efficiency in FPGA implementations of optimal Ate pairing, which holds particular relevance for resource-constrained environments necessitating effective FPGA resource utilization. (ii) Emphasizing flexibility, the architectures can be easily tailored and adapted to suit diverse parameters and security requirements of the optimal Ate pairing implementation. (iii) Providing practical insights and experimental findings by conducting FPGA-based implementations and tests, delivering valuable guidance for real-world applications.
In the context of this study, the outcomes achieved through the implementation of optimal Ate pairing on FPGA are discussed, and the results are presented in Table V.
In [28], the first implementation of pairing functions at the 128-bit security level using BN-curves was reported. The authors utilized Blakley's algorithm for modular multiplication, leading to high area consumption without using DSP or RAM cores; our SW/HW designs, in contrast, show improved slice consumption and efficiency. In [26], a fully hardware-based implementation of Ate and optimal Ate pairing was presented, where all \(F_{p^{k}}\)-arithmetic was implemented in hardware. That design is faster than our 2Mb/KARATSUBA design, but it occupies 23k logic slices, 5.6 times more than ours. The authors in [27] proposed a hardware cryptoprocessor for optimal Ate pairing which utilizes two processing engines to perform parallel computation of \(F_{p}\)-arithmetic using the Montgomery algorithm. This design has a considerable increase in area and a higher number of DSP blocks, which limits it to high-resource FPGA boards, unlike our designs, which do not require such high-resource boards. In [55], a high-performance processor for optimal Ate pairing on BN-curves is proposed, exploiting parallelism and pipelining at various levels of the algorithm; however, this design has an even higher area occupation and DSP-block count and is not suitable for restricted environments. In [32], a high-speed and efficient design for optimal Ate pairing over BN and BLS12 curves on FPGA was presented, with the highest reported speed and the best reported area-time performance. Although that design offers better efficiency than our implementations, it requires considerably more area and is less flexible.
All in all, these findings open avenues for further research and optimization in implementing efficient and secure cryptographic systems.
## VI Conclusion
In this paper, we proposed three different approaches for implementing optimal Ate pairing based on Jacobian coordinates over BN-curves with 128-bit security as an embedded system on FPGA devices. Our first approach utilized a pure software design executed by MicroBlaze processors, while the second approach combined software and hardware to perform essential operations in \(F_{p}\) and \(F_{p^{2}}\). Our third approach employed parallelism at critical operation levels to further improve execution time and minimize area consumption. Our designs are suitable for restricted environments and offer reasonable execution times.
To further improve the implementation of optimal Ate pairing and address potential limitations, the following aspects can be considered in future works: (i) investigating and implementing algorithmic optimizations to enhance the efficiency of the Ate pairing computation. Research on new techniques or adaptations specific to the BN-curve can lead to significant improvements in execution time and resource utilization; (ii) exploring the use of more advanced FPGA platforms or application-specific integrated circuits (ASICs) to increase computational capabilities and achieve higher performance. Utilizing modern FPGA families with improved hardware resources and higher clock frequencies can result in faster computations; (iii) investigating and incorporating parallelization and pipelining techniques to exploit parallel hardware resources effectively. By distributing tasks across multiple processing units and overlapping computations, the overall execution time can be reduced; (iv) designing and implementing custom hardware accelerators tailored to the specific requirements of the optimal Ate pairing computation. This can lead to dedicated hardware units optimized for the BN-curve operations, further improving efficiency; (v) focusing on optimizing power consumption and resource utilization without compromising security. This is especially important for embedded systems and IoT devices where energy efficiency is a critical consideration; (vi) exploring opportunities for further software-level optimization, such as using advanced compiler techniques or employing custom assembly code to fine-tune critical arithmetic operations; (vii) conducting a thorough security analysis of the proposed design against potential side-channel attacks and fault injections, and implementing countermeasures to mitigate these vulnerabilities and ensure robustness against various security threats; and (viii) evaluating the implementation in real-world applications, such as secure communication protocols or cryptographic schemes,
to assess its practical viability and gather feedback for further improvements.
## Ethical Approval
Not Applicable
## Competing interests
The authors declare no conflict of interest.
## Authors' contributions
Conceptualization, O. Azzouzi - Methodology, O. Azzouzi, M. Anane - Validation, O. Azzouzi, M. Anane, M. Koudil - Writing--original draft preparation, O. Azzouzi, M. Issad, Y. Himeur - Proofreading, M. Anane, Y. Himeur - Formal analysis, M. Anane, M. Koudil, M. Issad, Y. Himeur - Supervision, M. Anane, M. Koudil- Project administration, M. Anane.
## Funding
This research received no external funding
## Availability of data and materials
Data will be shared upon request
|
2305.13939 | Tropical second main theorem and the Nevanlinna inverse problem | A generalization of the second main theorem of tropical Nevanlinna theory is
presented for noncontinuous piecewise linear functions and for tropical
hypersurfaces without requiring a growth condition. The method of proof is
novel and significantly more straightforward than previously known proofs. The
tropical analogue of the Nevanlinna inverse problem is formulated and solved
for tropical meromorphic functions and tropical hypersurfaces. | Juho Halonen, Risto Korhonen, Galina Filipuk | 2023-05-05T10:38:26Z | http://arxiv.org/abs/2305.13939v1 | # Tropical second main theorem and the Nevanlinna inverse problem
###### Abstract.
A generalization of the second main theorem of tropical Nevanlinna theory is presented for noncontinuous piecewise linear functions and for tropical hypersurfaces without requiring a growth condition. The method of proof is novel and significantly more straightforward than previously known proofs. The tropical analogue of the Nevanlinna inverse problem is formulated and solved for tropical meromorphic functions and tropical hypersurfaces.
Key words and phrases: Tropical Nevanlinna theory, tropical meromorphic functions, piecewise linear functions, tropical hypersurfaces, Nevanlinna inverse problem, defect relation.

2020 Mathematics Subject Classification: Primary 14T90; Secondary 30D35, 32H30.

GF acknowledges the support of the grant entitled "Geometric approach to ordinary differential equations" funded under the New Ideas 3B competition within Priority Research Area III implemented under the "Excellence Initiative - Research University" (IDUB) Programme (University of Warsaw) (nr 01/IDUB/2019/94). GF is also partially supported by the Ministry of Science and Innovation of Spain and the European Regional Development Fund (ERDF), grant number PID2021-124472NB-I00.
## 1. Introduction
The purpose of this paper is to study tropical Nevanlinna theory for piecewise linear functions that are not necessarily continuous. As discussed below, ultra-discrete equations can admit infinitely
many noncontinuous piecewise linear solutions. In the first part of this paper we will extend tropical Nevanlinna theory for noncontinuous piecewise linear functions proving the Poisson-Jensen formula and the first and the second main theorems for these functions. The first and second main theorems will also be extended for a class of piecewise linear target values and the growth condition will be dropped from the second main theorem. The proof of the second main theorem will be greatly simplified compared to the previous proofs.
In the second part of the paper, we will prove an improved version of the second main theorem for tropical hypersurfaces, in which the growth condition will be dropped and the inequalities will be made tighter.
In the last part of this paper, we will formulate and answer three different versions of the inverse problem in the context of tropical Nevanlinna theory. First, we will show that for any \(\delta\in[0,1]\), we can find a tropical rational function \(f\) and a constant \(a\in\mathbb{R}\) such that \(\delta(a,f)=\delta\). Second, we will demonstrate that there exists a tropical meromorphic function \(f\) such that for all \(\delta\in[0,1]\), there exists \(a\in\mathbb{R}\) such that \(\delta(a,f)=\delta\). Lastly, we will formulate and answer the inverse problem for tropical hypersurfaces. We will also examine some properties of the defect as a real-valued function. Finally, we will disprove the tropical version of the Griffiths conjecture [8] which was proposed by Cao and Zheng [2, Conjecture 4.11].
## 2. The second main theorem for piecewise linear functions
**Definition 2.1**.: Let \(f:\mathbb{R}\to\mathbb{R}\). If there exist disjoint intervals \(I_{k}\subset\mathbb{R}\), \(k\in\mathbb{N}\), each containing more than one element, such that
\[\bigcup_{k\in\mathbb{N}}I_{k}=\mathbb{R}\]
and for each interval \(I_{k}\) there exist \(\alpha_{k},\beta_{k}\in\mathbb{R}\) such that \(f(x)=\alpha_{k}x+\beta_{k}\) for all \(x\in I_{k}\), then \(f\) is said to be piecewise linear.
Halburd and Southall [9] described the following method for generating continuous piecewise linear solutions for ultra-discrete equations of the type
\[y(x+1)\otimes y(x-1)=R(x,y(x)), \tag{2.1}\]
where \(R\) is a tropical rational function of \(x\) and \(y\). Choose any values for \(y(0)\) and \(y(1)\). Then compute \(y(2)=R(1,y(1))-y(0)\). Now if you define \(y(x)\) on the intervals \((0,1)\cup(1,2)\) in a way that \(y(x)\) is a continuous piecewise linear function on the interval \([0,2]\), then the equation (2.1) extends \(y\) uniquely to a tropical meromorphic function on the whole real line. However, if we allow discontinuities for the piecewise linear function in the initial interval \([0,2]\), then we can generate infinitely many noncontinuous piecewise linear solutions to the equation (2.1). Motivated by this we will extend tropical Nevanlinna theory for noncontinuous piecewise linear functions.
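As an illustration of this method (a sketch of ours, not code from [9]), the following extends sampled initial data on \([0,2]\) by the recurrence \(y(x+1)=R(x,y(x))-y(x-1)\); the tropical rational function \(R(x,y)=y\oplus 0=\max\{y,0\}\) used here is a hypothetical choice.

```python
# A minimal sketch (ours, not from [9]): extending initial data on [0, 2]
# to a larger interval via the ultra-discrete recurrence
#     y(x + 1) = R(x, y(x)) - y(x - 1),
# i.e. y(x+1) ⊗ y(x-1) = R(x, y(x)) in max-plus notation.

def R(x, y):
    # hypothetical tropical rational target: R(x, y) = y ⊕ 0 = max(y, 0)
    return max(y, 0.0)

def extend(y01, y12, steps_per_unit=4, units=6):
    """y01, y12: samples of y on [0,1] and [1,2], each of length
    steps_per_unit + 1; they need not agree at x = 1 (a jump is allowed)."""
    h = 1.0 / steps_per_unit
    y = y01[:-1] + y12          # grid values on [0, 2]; right value kept at x = 1
    for k in range(len(y), units * steps_per_unit + 1):
        x = (k - steps_per_unit) * h            # the recurrence is centred at x
        y.append(R(x, y[k - steps_per_unit]) - y[k - 2 * steps_per_unit])
    return y

# piecewise linear initial data with a jump at x = 1
print(extend([0.0, 0.25, 0.5, 0.75, 1.0], [2.0, 1.5, 1.0, 0.5, 0.0])[:12])
```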
Discontinuities are classified into three different categories: removable discontinuities, jump discontinuities and essential discontinuities [1]. We say that a point of discontinuity \(x_{0}\) of \(f:\mathbb{R}\to\mathbb{R}\) is a jump discontinuity if the one-sided limits
\[\lim_{x\to x_{0}^{+}}f(x)=:f(x_{0}+)\in\mathbb{R}\]
and
\[\lim_{x\to x_{0}^{-}}f(x)=:f(x_{0}-)\in\mathbb{R}\]
exist and \(f(x_{0}+)\neq f(x_{0}-)\). If the one-sided limits above exist, but \(f(x_{0}+)=f(x_{0}-)\) then \(x_{0}\) is said to be a removable discontinuity of \(f\), and if one or both of the one-sided limits above do not exist in \(\mathbb{R}\), then \(x_{0}\) is said to be an essential discontinuity of \(f\).
**Lemma 2.2**.: _All discontinuities of a piecewise linear function are jump discontinuities._
Proof.: Let \(f:\mathbb{R}\to\mathbb{R}\) be a piecewise linear function. By definition there exists a partition of \(\mathbb{R}\) into intervals \(I_{k}\) such that \(f\) is linear on each interval. If \(f\) is discontinuous at \(x_{0}\), then \(x_{0}\) must be at the ends of two of the intervals. We know that \(f\) is linear on both intervals and therefore the one-sided limits \(f(x_{0}+)\) and \(f(x_{0}-)\) must exist. Since \(f\) is continuous on each of the intervals and \(x_{0}\) is included in one of the intervals we must have either \(f(x_{0}+)=f(x_{0})\) or \(f(x_{0}-)=f(x_{0})\). On the other hand, since \(f\) is discontinuous at \(x_{0}\) we must also have either \(f(x_{0}+)\neq f(x_{0})\) or \(f(x_{0}-)\neq f(x_{0})\). Therefore we arrive at the conclusion that \(f(x_{0}+)\neq f(x_{0}-)\).
From the proof above we can also see that a piecewise linear function must be left-continuous or right-continuous at every point.
For a piecewise linear function \(f\) we define
\[\Omega_{f}(x):=f(x+)-f(x-)\]
and we say that \(x\neq 0\) is a positive jump if \(x\Omega_{f}(x)>0\) and a negative jump if \(x\Omega_{f}(x)<0\). If there is a discontinuity at \(0\), we say that it is a positive jump if \(f(0)=\max\{f(0+),f(0-)\}\) and a negative jump if \(f(0)=\min\{f(0+),f(0-)\}\) (see Figure 1). Since all the discontinuities of a piecewise linear function are jump discontinuities, we know that \(\Omega_{f}(x)=0\) if and only if \(f\) is continuous at \(x\). The height of a jump at \(x\) is \(h_{f}(x):=|\Omega_{f}(x)|\). When moving away from the origin, a piecewise linear function jumps up at a positive jump and down at a negative jump.
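These conventions can be checked mechanically; the following sketch (ours, not from the paper) classifies a jump from its one-sided limits.

```python
# a sketch (not from the paper): classifying a discontinuity of a piecewise
# linear function from its one-sided limits, following the convention above.
def classify_jump(x, f_left, f_right, f_at=None):
    """f_left = f(x-), f_right = f(x+); f_at = f(x) (needed only at x = 0)."""
    omega = f_right - f_left            # Ω_f(x)
    if omega == 0:
        return "continuous", 0.0
    if x != 0:
        kind = "positive" if x * omega > 0 else "negative"
    else:                               # special convention at the origin
        kind = "positive" if f_at == max(f_left, f_right) else "negative"
    return kind, abs(omega)             # (type, height h_f(x))

print(classify_jump(2.0, 1.0, 3.0))         # ('positive', 2.0)
print(classify_jump(0.0, 1.0, 3.0, 3.0))    # ('positive', 2.0)
```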
**Lemma 2.3**.: _Let \(f\) be a piecewise linear function. Then the set \(\{x\in\mathbb{R}:\Omega_{f}(x)\neq 0\}\) containing the points of discontinuity of \(f\) cannot have accumulation points._
Proof.: Let \(f:\mathbb{R}\to\mathbb{R}\). Suppose that the set of discontinuities of \(f\) has an accumulation point \(x_{0}\). Then any interval of \(\mathbb{R}\) that contains \(x_{0}\) also contains another discontinuity. This means that on any interval that contains \(x_{0}\), there do not exist \(\alpha,\beta\in\mathbb{R}\) such that \(f(x)=\alpha x+\beta\) on the whole interval. Therefore \(f\) cannot be piecewise linear.
Figure 1. From left to right, two negative jumps and two positive jumps at \(0\).
By Lemma 2.3 we can be sure, that on any finite interval a piecewise linear function has only finitely many discontinuities.
The Poisson-Jensen formula for continuous piecewise linear functions was proven by Halburd and Southall.
**Theorem 2.4** ([9]).: _Suppose \(f\) is a continuous piecewise linear function on \([-r,r]\) for some \(r>0\) and denote the distinct roots, resp. poles of \(f\) in this interval by \(a_{\mu}\), resp. by \(b_{\nu}\) with their corresponding multiplicities \(\tau_{f}\). Then for any \(x\in(-r,r)\) we have the tropical Poisson-Jensen formula_
\[f(x)= \frac{1}{2}(f(r)+f(-r))+\frac{x}{2r}(f(r)-f(-r))\] \[-\frac{1}{2r}\sum_{|a_{\mu}|<r}\tau_{f}(a_{\mu})(r^{2}-|a_{\mu}-x |r-a_{\mu}x)\] \[+\frac{1}{2r}\sum_{|b_{\nu}|<r}\tau_{f}(b_{\nu})(r^{2}-|b_{\nu}-x |r-b_{\nu}x).\]
_In particular, the case \(x=0\) gives the tropical Jensen formula_
\[f(0)=\frac{1}{2}(f(r)+f(-r))-\frac{1}{2}\sum_{|a_{\mu}|<r}\tau_{f}(a_{\mu})(r -|a_{\mu}|)+\frac{1}{2}\sum_{|b_{\nu}|<r}\tau_{f}(b_{\nu})(r-|b_{\nu}|).\]
We will generalise the Poisson-Jensen formula for piecewise linear functions that may have discontinuities.
**Theorem 2.5**.: _Suppose \(f\) is a piecewise linear function on \([-r,r]\) for some \(r>0\) and denote the distinct roots, resp. poles of \(f\) in this interval by \(a_{\mu}\), resp. by \(b_{\nu}\) with their corresponding multiplicities \(\tau_{f}\) and the distinct positive jumps, resp. negative jumps of \(f\) in this interval by \(\alpha_{\mu}\), resp. by \(\beta_{\nu}\) with their corresponding heights \(h_{f}\)._
_Then for any \(x\in(-r,r)\) we have the Poisson-Jensen formula_
(2.2) \[\begin{split} f(x)&=\frac{1}{2}(f(r)+f(-r))+\frac{x}{2r}(f(r)-f(-r))\\ &\quad-\frac{1}{2r}\sum_{|a_{\mu}|<r}\tau_{f}(a_{\mu})(r^{2}-|a_{\mu}-x|r-a_{\mu}x)\\ &\quad+\frac{1}{2r}\sum_{|b_{\nu}|<r}\tau_{f}(b_{\nu})(r^{2}-|b_{\nu}-x|r-b_{\nu}x)\\ &\quad-\frac{x}{2r}\sum_{j=1}^{s}\Omega_{f}(k_{j})+\frac{1}{2}\left(\sum_{j\in J^{-}}\Omega_{f}(k_{j})-\sum_{j\in J^{+}}\Omega_{f}(k_{j})\right),\end{split}\]

where \(k_{1},\ldots,k_{s}\) are the discontinuities of \(f\) in \([-r,r]\) (subject to the endpoint conventions of the proof), \(\Omega_{f}(k_{j})=h_{f}(k_{j})\) if the jump at \(k_{j}\) is a positive jump \(\alpha_{\mu}>0\) or a negative jump \(\beta_{\nu}<0\), \(\Omega_{f}(k_{j})=-h_{f}(k_{j})\) if it is a positive jump \(\alpha_{\mu}<0\) or a negative jump \(\beta_{\nu}>0\), and \(J^{+}\), resp. \(J^{-}\), are the index sets, defined in the proof, of the jumps lying to the right, resp. to the left, of \(x\).
**Example 2.6**.: Let \(r=4\) and define \(f:[-4,4]\rightarrow\mathbb{R}\),
\[f(x)=\begin{cases}-x-2,&-4\leq x<-2,\\ \max\{-x-1,2x+2\},&-2\leq x<0,\\ -x+1,&0\leq x\leq 1,\\ -\max\{x-4,2x-6\},&1<x<4,\\ -1,&x=4.\end{cases}\]
This function has poles of multiplicities \(3\) and \(1\) at \(0\) and \(2\), respectively, and a root of multiplicity \(3\) at \(-1\). In addition, \(f\) has negative jumps of height \(1\) at \(0\) and \(-2\), and positive jumps of height \(3\) at \(1\) and of height \(1\) at \(4\). Let \(x\in(-1,0)\). Denote the slopes of the function \(f\) by \(m_{-2}=-1,\ m_{-1}=m_{1}=2,\ m_{2}=-1\) and \(m_{3}=-2\), the points of discontinuity of the derivative of \(f\) by \(c_{-1}=-1,\ c_{1}=0\) and \(c_{2}=2\) and the points of discontinuity of \(f\) by \(k_{1}=-2,\ k_{2}=0,\ k_{3}=1\) and \(k_{4}=4\). Then we can write
\[f(r)-f(x)\] \[= m_{1}(c_{1}-x)+m_{2}(c_{2}-c_{1})+m_{3}(r-c_{2})+\Omega_{f}(k_{2} )+\Omega_{f}(k_{3})+\Omega_{f}(k_{4})\] \[= c_{1}(m_{1}-m_{2})+c_{2}(m_{2}-m_{3})+m_{3}r-m_{1}x+\Omega_{f}(k_ {2})+\Omega_{f}(k_{3})+\Omega_{f}(k_{4})\] \[= m_{1}(r-x)-(m_{1}-m_{2})(r-c_{1})-(m_{2}-m_{3})(r-c_{2})+\Omega_ {f}(k_{2})+\Omega_{f}(k_{3})+\Omega_{f}(k_{4})\]
and similarly
\[f(x)-f(-r)=m_{-1}(r+x)+(m_{-2}-m_{-1})(r+c_{-1})+\Omega_{f}(k_{1}).\]
Figure 2. Function \(f\) in Example 2.6.
By multiplying the above equations by \((r+x)\) and \((r-x)\) respectively and subtracting we obtain
\[\begin{split} 2rf(x)&=r(f(r)+f(-r))+x(f(r)-f(-r))\\ &\quad+(m_{-2}-m_{-1})(r^{2}-(x-c_{-1})r-c_{-1}x)+\sum_{j=1}^{2}(m_{j}-m_{j+1})(r^{2}-(c_{j}-x)r-c_{j}x)\\ &\quad+r(\Omega_{f}(k_{1})-\Omega_{f}(k_{2})-\Omega_{f}(k_{3})-\Omega_{f}(k_{4}))-x(\Omega_{f}(k_{1})+\Omega_{f}(k_{2})+\Omega_{f}(k_{3})+\Omega_{f}(k_{4}))\\ &=r(f(r)+f(-r))+x(f(r)-f(-r))\\ &\quad-\omega_{f}(c_{-1})(r^{2}-(x-c_{-1})r-c_{-1}x)-\sum_{j=1}^{2}\omega_{f}(c_{j})(r^{2}-(c_{j}-x)r-c_{j}x)\\ &\quad+r(\Omega_{f}(k_{1})-\Omega_{f}(k_{2})-\Omega_{f}(k_{3})-\Omega_{f}(k_{4}))-x(\Omega_{f}(k_{1})+\Omega_{f}(k_{2})+\Omega_{f}(k_{3})+\Omega_{f}(k_{4})).\end{split}\]
By the definition of multiplicity of a root and a pole and the height of a jump we can see that
\[f(x)=\frac{1}{2}(f(r)+f(-r))+\frac{x}{2r}(f(r)-f(-r))\] \[-\frac{1}{2r}\tau_{f}(c_{-1})(r^{2}-r|c_{-1}-x|-c_{-1}x)+\frac{1 }{2r}\sum_{j=1}^{2}\tau_{f}(c_{j})(r^{2}-r|c_{j}-x|-c_{j}x)\] \[-\frac{1}{2}(h_{f}(k_{3})+h_{f}(k_{4}))+\frac{1}{2}(h_{f}(k_{1}) +h_{f}(k_{2}))\] \[-\frac{x}{2r}(h_{f}(k_{1})+h_{f}(k_{3})+h_{f}(k_{4}))+\frac{x}{2r} h_{f}(k_{2}).\]
Finally with concrete values we obtain
\[f(x)= \frac{1}{2}(-1+2)+\frac{x}{8}(-1-2)\] \[-\frac{1}{8}3(16-4|-1-x|+x)+\frac{1}{8}3(16-4|x|)+\frac{1}{8}(16- 4|2-x|-2x)\] \[+\frac{1}{2}(1+1-3-1)-\frac{x}{8}(1-1+3+1)\] \[= 2x+2,\]
when \(-1<x<0\).
Proof of Theorem 2.5.: Let \(x\in(-r,r)\). Define an increasing sequence \((c_{j})\), \(j=-p,\ldots,q\) in \((-r,r)\) in the following way. Let \(c_{0}=x\), and let the other points in this sequence be the points in \((-r,r)\), at which the derivative of \(f\) does not exist, i.e. the roots and poles of \(f\). Then define
\[m_{j-1}=f^{\prime}(c_{j}-)\]
for \(j=-p,\ldots,0\) and
\[m_{j+1}=f^{\prime}(c_{j}+)\]
for \(j=0,\ldots,q\). Let \((k_{j}),j=1,\ldots,s\) be an increasing sequence of the discontinuities of \(f\) in \([-r,r]\) including the possible discontinuity at \(r\) (resp. at \(-r\)) only if \(f\) is left-discontinuous at \(r\) (resp. right-discontinuous at \(-r\)). Define
\[K_{j}=f(k_{j}+)-f(k_{j}-)\]
for \(j=1,\ldots,s\). Also define the index sets
\[J^{+}=\{j\in\{1,\ldots,s\}:(x\leq k_{j}\leq r)\wedge(k_{j}=x\implies f\text{ is left-discontinuous at }x)\}\]
and
\[J^{-}=\{j\in\{1,\ldots,s\}:(-r\leq k_{j}\leq x)\wedge(k_{j}=x\implies f\text{ is right-discontinuous at }x)\}.\]
See Figure 3 for the notation. In Figure 3, \(J^{-}=\{1,\ldots,j-1\}\) and \(J^{+}=\{j,\ldots,s\}\). A geometric observation implies that
\[f(r)-f(x)\] \[=m_{1}(c_{1}-x)+m_{2}(c_{2}-c_{1})+\cdots+m_{q}(c_{q}-c_{q-1})+m_ {q+1}(r-c_{q})+\sum_{j\in J^{+}}K_{j}\] \[=m_{1}(r-x)-\sum_{j=1}^{q}(m_{j}-m_{j+1})(r-c_{j})+\sum_{j\in J^{ +}}K_{j}.\]
Similarly
\[f(x)-f(-r)=m_{-1}(r+x)+\sum_{j=1}^{p}(m_{-j-1}-m_{-j})(r+c_{-j})+\sum_{j\in J^ {-}}K_{j}.\]
Figure 3. Notation in the proof of the Poisson-Jensen formula.
When multiplying the equations above by \((r+x)\) and \((r-x)\), respectively, and subtracting we obtain
\[2rf(x)= r(f(r)+f(-r))+x(f(r)-f(-r))+(m_{-1}-m_{1})(r^{2}-x^{2})\] \[+\sum_{j=1}^{p}(m_{-j-1}-m_{-j})(r^{2}-(x-c_{-j})r-c_{-j}x)\] \[+\sum_{j=1}^{q}(m_{j}-m_{j+1})(r^{2}-(c_{j}-x)r-c_{j}x)\] \[+\sum_{j\in J^{-}}K_{j}(r-x)-\sum_{j\in J^{+}}K_{j}(r+x)\] \[= r(f(r)+f(-r))+x(f(r)-f(-r))\] \[+\sum_{j=-p}^{q}-\omega_{f}(c_{j})(r^{2}-|c_{j}-x|r-c_{j}x)\] \[+x\sum_{j=1}^{s}-\Omega_{f}(k_{j})-r\left(\sum_{j\in J^{-}}-\Omega _{f}(k_{j})-\sum_{j\in J^{+}}-\Omega_{f}(k_{j})\right).\]
We can see that
\[\sum_{j\in J^{-}}-\Omega_{f}(k_{j})-\sum_{j\in J^{+}}-\Omega_{f}(k_{j})=\sum_ {-r\leq k_{j}\leq x}-\Omega_{f}(k_{j})-\sum_{x\leq k_{j}\leq r}-\Omega_{f}(k_ {j})+A_{f}(x),\]
where
\[A_{f}(x)=\begin{cases}0,&\text{if $f$ is continuous at $x$},\\ \Omega_{f}(x),&\text{if $f$ is left-discontinuous at $x$},\\ -\Omega_{f}(x),&\text{if $f$ is right-discontinuous at $x$}.\end{cases}\]
We also have
\[\sum_{-r\leq k_{j}\leq x}-\Omega_{f}(k_{j})-\sum_{x\leq k_{j}\leq r}-\Omega_{f}(k_{j})\] \[=\sum_{-r\leq k_{j}\leq\min\{0,x\}}-\Omega_{f}(k_{j})+\sum_{0\leq k_{j}\leq x}-\Omega_{f}(k_{j})-\left(\sum_{\max\{0,x\}\leq k_{j}\leq r}-\Omega_{f}(k_{j})+\sum_{x\leq k_{j}\leq 0}-\Omega_{f}(k_{j})\right)+B_{f}(x)\] \[=\sum_{-r\leq\alpha_{\mu}\leq\min\{0,x\}}h_{f}(\alpha_{\mu})-\sum_{-r\leq\beta_{\nu}\leq\min\{0,x\}}h_{f}(\beta_{\nu})-\sum_{0\leq\alpha_{\mu}\leq x}h_{f}(\alpha_{\mu})+\sum_{0\leq\beta_{\nu}\leq x}h_{f}(\beta_{\nu})-\left(-\sum_{\max\{0,x\}\leq\alpha_{\mu}\leq r}h_{f}(\alpha_{\mu})+\sum_{\max\{0,x\}\leq\beta_{\nu}\leq r}h_{f}(\beta_{\nu})+\sum_{x\leq\alpha_{\mu}\leq 0}h_{f}(\alpha_{\mu})-\sum_{x\leq\beta_{\nu}\leq 0}h_{f}(\beta_{\nu})\right)+B_{f}(x)\] \[=\sum_{-r\leq\alpha_{\mu}\leq\min\{0,x\}}h_{f}(\alpha_{\mu})+\sum_{\max\{0,x\}\leq\alpha_{\mu}\leq r}h_{f}(\alpha_{\mu})-\sum_{x\leq\alpha_{\mu}\leq 0}h_{f}(\alpha_{\mu})-\sum_{0\leq\alpha_{\mu}\leq x}h_{f}(\alpha_{\mu})\] \[-\sum_{-r\leq\beta_{\nu}\leq\min\{0,x\}}h_{f}(\beta_{\nu})-\sum_{\max\{0,x\}\leq\beta_{\nu}\leq r}h_{f}(\beta_{\nu})+\sum_{0\leq\beta_{\nu}\leq x}h_{f}(\beta_{\nu})+\sum_{x\leq\beta_{\nu}\leq 0}h_{f}(\beta_{\nu})+B_{f}(x),\]
where
\[B_{f}(x)=\begin{cases}0,&\text{if }x=0,\\ -\Omega_{f}(0),&\text{if }x<0,\\ \Omega_{f}(0),&\text{if }x>0.\end{cases}\]
We also have
\[\sum_{j=1}^{s}-\Omega_{f}(k_{j})\] \[=\sum_{-r\leq\alpha_{\mu}\leq 0}h(\alpha_{\mu})-\sum_{-r\leq\beta_{ \nu}\leq 0}h(\beta_{\nu})\] \[-\sum_{0\leq\alpha_{\mu}\leq r}h(\alpha_{\mu})+\sum_{0\leq\beta_{ \nu}\leq r}h(\beta_{\nu})-\Omega_{f}(0)\] \[=\sum_{-r\leq\alpha_{\mu}\leq 0}h(\alpha_{\mu})+\sum_{0\leq\beta_{ \nu}\leq r}h(\beta_{\nu})\] \[-\left(\sum_{-r\leq\beta_{\nu}\leq 0}h(\beta_{\nu})+\sum_{0\leq \alpha_{\mu}\leq r}h(\alpha_{\mu})\right)-\Omega_{f}(0).\]
By combining the above equalities we obtain (2.2) and in the case \(x=0\) we have
\[f(0) =\frac{1}{2}(f(r)+f(-r))\] \[-\frac{1}{2r}\sum_{|a_{\mu}|<r}\tau_{f}(a_{\mu})(r^{2}-|a_{\mu}|r)\] \[+\frac{1}{2r}\sum_{|b_{\nu}|<r}\tau_{f}(b_{\nu})(r^{2}-|b_{\nu}|r)\] \[-\frac{1}{2}\left(\sum_{-r\leq\alpha_{\mu}\leq 0}h_{f}(\alpha_{ \mu})+\sum_{0\leq\alpha_{\mu}\leq r}h_{f}(\alpha_{\mu})-2\sum_{\alpha_{\mu}=0 }h_{f}(\alpha_{\mu})\right)\] \[+\frac{1}{2}\left(\sum_{-r\leq\beta_{\nu}\leq 0}h_{f}(\beta_{ \nu})+\sum_{0\leq\beta_{\nu}\leq r}h_{f}(\beta_{\nu})-2\sum_{\beta_{\nu}=0}h_{ f}(\beta_{\nu})\right)\] \[-\frac{1}{2}A_{f}(0)\] \[=\frac{1}{2}(f(r)+f(-r))-\frac{1}{2}\sum_{|a_{\mu}|<r}\tau_{f}(a_ {\mu})(r-|a_{\mu}|)+\frac{1}{2}\sum_{|b_{\nu}|<r}\tau_{f}(b_{\nu})(r-|b_{\nu}|)\] \[-\frac{1}{2}\sum_{|\alpha_{\mu}|\leq r}h_{f}(\alpha_{\mu})+\frac{ 1}{2}\sum_{|\beta_{\nu}|\leq r}h_{f}(\beta_{\nu})\] \[-\frac{1}{2}\left(A_{f}(0)-\sum_{\alpha_{\mu}=0}h_{f}(\alpha_{ \mu})+\sum_{\beta_{\nu}=0}h_{f}(\beta_{\nu})\right).\]
By the definition of the positive and negative jump at \(0\) we can see that
\[A_{f}(0)-\sum_{\alpha_{\mu}=0}h_{f}(\alpha_{\mu})+\sum_{\beta_{\nu}=0}h_{f}( \beta_{\nu})=0.\]
For piecewise linear functions with discontinuities we will define the proximity function and counting function in the same way as for tropical meromorphic functions. The proximity function and the counting function are well defined and the properties
\[V(r,f^{\otimes\alpha}) =\alpha V(r,f)\] \[V(r,f\otimes g) \leq V(r,f)+V(r,g)\] \[V(r,f\oplus g) \leq V(r,f)+V(r,g)\]
remain true for both \(V(r,f)=m(r,f)\) and \(V(r,f)=N(r,f)\). However, the proximity function is not continuous for noncontinuous functions.
We will define the jump counting function as
\[J(r,f)=\frac{1}{2}\sum_{j=1}^{n}h_{f}(\beta_{j}),\]
where \(\beta_{1},\dots,\beta_{n}\) are the negative jumps of \(f\) on the interval \([-r,r]\) counting the possible negative jump at \(r\) (resp. at \(-r\)) if \(f\) is left-discontinuous at \(r\) (resp. right-discontinuous at \(-r\)).
Then we can define the tropical Nevanlinna characteristic function for a piecewise linear function \(f\) as
\[T(r,f)=m(r,f)+N(r,f)+J(r,f).\]
The jump counting function is a non-negative non-decreasing piecewise constant function. This means that the characteristic function will also remain a non-negative non-decreasing piecewise linear function. The jump counting function also has the following properties
\[J(r,f^{\otimes\alpha}) =\alpha J(r,f)\] \[J(r,f\otimes g) \leq J(r,f)+J(r,g)\] \[J(r,f\oplus g) \leq J(r,f)+J(r,g).\]
This means that all the above properties remain true for the characteristic function. The only basic properties of the characteristic function that do not always hold with this definition are continuity and convexity. That is because as a piecewise constant function, the jump counting function is not always continuous or convex.
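As an illustration, the characteristic can be computed directly from the data of a piecewise linear function; the sketch below (ours) uses the standard tropical definitions \(m(r,f)=\frac{1}{2}(f(r)^{+}+f(-r)^{+})\) and \(N(r,f)=\frac{1}{2}\sum_{|b_{\nu}|<r}\tau_{f}(b_{\nu})(r-|b_{\nu}|)\).

```python
# a sketch (not from the paper): T(r, f) = m(r, f) + N(r, f) + J(r, f) for a
# piecewise linear f described by its boundary values, poles, and negative
# jumps (the endpoint conventions at ±r are simplified here).
def T(r, f_r, f_mr, poles, neg_jumps):
    """f_r = f(r), f_mr = f(-r); poles: [(b, tau)]; neg_jumps: [(b, h)]."""
    m = (max(f_r, 0.0) + max(f_mr, 0.0)) / 2.0
    N = sum(tau * (r - abs(b)) for b, tau in poles if abs(b) < r) / 2.0
    J = sum(h for b, h in neg_jumps if abs(b) <= r) / 2.0
    return m + N + J

# f of Example 2.6 on [-4, 4]: f(4) = -1, f(-4) = 2, poles of multiplicity
# 3 and 1 at 0 and 2, negative jumps of height 1 at 0 and -2
print(T(4.0, -1.0, 2.0, [(0.0, 3.0), (2.0, 1.0)], [(0.0, 1.0), (-2.0, 1.0)]))
# m = 1, N = 7, J = 1, so T = 9
```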
The property
\[J(r,f\oplus g)\leq J(r,f)\oplus J(r,g)\]
also does not hold. This can be seen by considering the functions
\[f(x)=\begin{cases}2,&x\leq 1\\ 3-2x,&x>1\end{cases}\quad\text{ and }\quad g(x)=\begin{cases}\frac{1}{2},&x\leq 2 \\ 0,&x>2.\end{cases}\]
In this case \(J(r,f)=\frac{1}{2}\), \(J(r,g)=\frac{1}{4}\) and \(J(r,f\oplus g)=\frac{3}{4}\) for all \(r>2\).
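These values can be checked numerically; the sketch below (ours, not from the paper) detects the jumps from one-sided evaluations and confirms that \(J(r,f\oplus g)>J(r,f)\oplus J(r,g)\) here.

```python
# a sketch (not from the paper): J(r, .) for the example above, computed as
# half the total height of the negative jumps on [-r, r].
f = lambda x: 2.0 if x <= 1 else 3.0 - 2.0 * x
g = lambda x: 0.5 if x <= 2 else 0.0
fg = lambda x: max(f(x), g(x))                 # tropical sum f ⊕ g

def J(h, breakpoints, eps=1e-9, tol=1e-6):
    total = 0.0
    for b in breakpoints:
        jump = h(b + eps) - h(b - eps)         # approximates Ω_h(b)
        if abs(jump) > tol and b * jump < 0:   # a negative jump
            total += abs(jump)
    return total / 2.0

print(J(f, [1, 2]), J(g, [1, 2]), J(fg, [1, 2]))   # 0.5 0.25 0.75
```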
Next we will move on to the second main theorem. The tropical second main theorem was first proven by Laine and Tohge.
**Theorem 2.7** ([14]).: _Suppose that \(f\) is a non-constant tropical meromorphic function of hyper-order \(\rho_{2}<1\), and take \(0<\delta<1-\rho_{2}\). If \(q\geq 1\) distinct values \(a_{1},\ldots,a_{q}\in\mathbb{R}\) satisfy_
\[\max\{a_{1},\ldots,a_{q}\}<\inf\{f(b):\omega_{f}(b)<0\} \tag{2.3}\]
_and_
\[\inf\{f(b):\omega_{f}(b)>0\}>-\infty. \tag{2.4}\]
_Then_
\[qT(r,f)\leq\sum_{j=1}^{q}N\left(r,\frac{1_{\circ}}{f\oplus a_{j}}\oslash\right) +o\left(\frac{T(r,f)}{r^{\delta}}\right) \tag{2.5}\]
_outside an exceptional set of finite logarithmic measure._
Korhonen and Tohge [12] improved this result by dropping the assumption (2.4) and by noting that (2.5) can be turned into an equality. In this paper we will show that the growth condition can be dropped entirely and in addition, we shall generalize the second main theorem for piecewise linear functions with discontinuities and for a class of piecewise linear target values instead of constant targets.
To prove the second main theorem, we will need to prove a version of the first main theorem. Laine and Tohge [15] proved the first main theorem for tropical meromorphic functions on a finite interval. With a similar proof we can prove the same result for piecewise linear functions on the whole real line with piecewise linear targets.
**Theorem 2.8**.: _Let \(f\) and \(a\) be piecewise linear functions. Then_
\[T\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right) =T(r,f)-N(r,f)+N(r,f\oplus a)\] \[-J(r,f)+J(r,f\oplus a)-f(0)\oplus a(0)+\varepsilon_{f}(r,a), \tag{2.6}\]
_for some quantity \(\varepsilon_{f}(r,a)\) satisfying \(0\leq\varepsilon_{f}(r,a)\leq m(r,a)\) for all \(r\geq 0\)._
Proof.: By the Jensen formula we have
\[T\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)-\left(T(r,f)- N(r,f)+N(r,f\oplus a)-J(r,f)+J(r,f\oplus a)\right)\] \[=T\left(r,f\oplus a\right)-\left(m(r,f)+N(r,f\oplus a)+J(r,f \oplus a)\right)+f(0)\oplus a(0)\] \[=m\left(r,f\oplus a\right)-m(r,f)+f(0)\oplus a(0).\]
By the properties of the proximity function we obtain
\[m\left(r,f\oplus a\right)-m(r,f)\leq m(r,f)+m(r,a)-m(r,f)=m(r,a)\]
and
\[m\left(r,f\oplus a\right)-m(r,f)\geq m(r,f)-m(r,f)=0.\]
If \(m(r,a)=o(T(r,f))\), where \(r\) approaches infinity, then (2.6) can be written in the form
\[T\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)=T(r,f)-N(r,f)+N(r,f\oplus a )-J(r,f)+J(r,f\oplus a)+o(T(r,f)),\]
where \(r\) approaches infinity. From now on in this paper, unless specified otherwise, whenever we use asymptotic notation such as \(g(r)=o(T(r,f))\) or \(g(r)=O(1)\) it is implied that \(r\) approaches infinity without any exceptional set.
On the other hand, if \(a\in\mathbb{R}\) is a constant we can see that
\[\frac{1_{\circ}}{f(x)\oplus a}\oslash=-\max\{f(x),a\}=\min\{-f(x),-a\}\leq-a,\]
which means that
\[0\leq m\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)\leq(-a)^{+}. \tag{2.7}\]
Now (2.6) can be written in the form
\[T(r,f)= N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)+J\left(r,\frac{1_{ \circ}}{f\oplus a}\oslash\right)\] \[+N(r,f)-N(r,f\oplus a)\] \[+J(r,f)-J(r,f\oplus a)+f(0)\oplus a(0)-\varepsilon_{f}(r,a),\]
where \(0\leq\varepsilon_{f}(r,a)\leq a^{+}+(-a)^{+}=|a|\) for all \(a\in\mathbb{R}\) and \(r\geq 0\). This can be seen as the second main theorem for piecewise linear functions with constant target values. Motivated by this we introduce the following version of the second main theorem for a class of piecewise linear target values.
**Theorem 2.9**.: _Let \(f\) and \(a\) be tropical meromorphic functions such that \(m(r,a)=o(T(r,f))\). If_
\[m\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)=o(T(r,f)) \tag{2.8}\]
_then_
\[T(r,f) =N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)+J\left(r,\frac {1_{\circ}}{f\oplus a}\oslash\right)\] \[+N(r,f)-N(r,f\oplus a)\] \[+J(r,f)-J(r,f\oplus a)+o(T(r,f)). \tag{2.9}\]
Note that by (2.7) the condition (2.8) is always satisfied for constant values \(a\in\mathbb{R}\). In fact, the error term \(o(T(r,f))\) becomes \(O(1)\) for constant values \(a\in\mathbb{R}\).
The condition (2.8) is implied by \(m\left(r,\frac{1_{\circ}}{f}\oslash\right)=o(T(r,f))\) or \(m\left(r,\frac{1_{\circ}}{a}\oslash\right)=o(T(r,f))\). However, the converse is not true. For example if \(f(x)=-x\) and \(a(x)=-\max\{-x,0\}\), then
\[T(r,f)=m\left(r,\frac{1_{\circ}}{f}\oslash\right)=\frac{1}{2}r=m\left(r,\frac{ 1_{\circ}}{a}\oslash\right),\]
but
\[m\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)=0.\]
Theorem 2.9 works also if \(a\equiv 0_{\circ}=-\infty\) and \(m\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)=m\left(r,\frac{1_{\circ}}{ f}\oslash\right)=o(T(r,f))\). This can be seen by considering the Jensen formula
\[N\left(r,\frac{1_{\circ}}{f}\oslash\right)+m\left(r,\frac{1_{\circ}}{f}\oslash \right)+J\left(r,\frac{1_{\circ}}{f}\oslash\right)=N(r,f)+m(r,f)+J(r,f)+O(1)\]
and the fact that \(N(r,f\oplus 0_{\circ})\equiv N(r,f)\) and \(J(r,f\oplus 0_{\circ})\equiv J(r,f)\).
If \(f\) is a tropical meromorphic function and \(a\in\mathbb{R}\) is a constant such that (2.3) holds, then we can see that \(N(r,f\oplus a)\equiv N(r,f)\) and therefore
\[T(r,f)=N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)+O(1).\]
This means that Theorem 2.9 implies Theorem 2.7. In the following result we will introduce conditions similar to (2.3) with piecewise linear targets.
**Corollary 2.10**.: _Let \(f\) and \(a\) be piecewise linear functions such that \(T(r,a)=o(T(r,f))\). Then_
1. _if_ \(\max\{a(x+),a(x-)\}<\min\{f(x+),f(x-)\}\) _for all_ \(x\in\{x\in\mathbb{R}:\omega_{f}(x)<0\}\) _we have_ (2.10) \[T(r,f)=N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)+J\left(r,\frac{1_{ \circ}}{f\oplus a}\oslash\right)+J(r,f)-J(r,f\oplus a)+o(T(r,f));\]
2. _if_ \(\max\{a(x+),a(x-)\}<\min\{f(x+),f(x-)\}\) _for all_ \(x\in\{x\in\mathbb{R}:x\Omega_{f}(x)<0\}\) _we have_ \[T(r,f)=N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)+J\left(r,\frac{1_{ \circ}}{f\oplus a}\oslash\right)+N(r,f)-N(r,f\oplus a)+o(T(r,f));\]
3. _if_ \(\min\{a(x+),a(x-)\}>\max\{f(x+),f(x-)\}\) _for all_ \(x\in\{x\in\mathbb{R}:\omega_{f}(x)<0\}\) _we have_ (2.11) \[m(r,f)=N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)+J\left(r,\frac{1_{ \circ}}{f\oplus a}\oslash\right)+J(r,f)-J(r,f\oplus a)+o(T(r,f));\]
4. _if_ \(\min\{a(x+),a(x-)\}>\max\{f(x+),f(x-)\}\) _for all_ \(x\in\{x\in\mathbb{R}:x\Omega_{f}(x)<0\}\) _we have_ \[m(r,f)=N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)+J\left(r,\frac{1_{ \circ}}{f\oplus a}\oslash\right)+N(r,f)-N(r,f\oplus a)+o(T(r,f)).\]
Proof.: First we note that
\[\frac{1_{\circ}}{f(x)\oplus a(x)}\oslash=-\max\{f(x),a(x)\}=\min\{-f(x),-a(x) \}\leq-a(x).\]
Now because \(T(r,a)=o(T(r,f))\) we can see that
\[m\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)\leq m\left(r,\frac{1_{\circ }}{a}\oslash\right)\leq T\left(r,\frac{1_{\circ}}{a}\oslash\right)=T(r,a)+O(1)= o(T(r,f)).\]
We can now utilize Theorem 2.9 to obtain
\[T(r,f)=N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)+N(r,f)-N(r,f\oplus a )+J(r,f)-J(r,f\oplus a)+o(T(r,f)). \tag{2.12}\]
Assume that \(\max\{a(x+),a(x-)\}<\min\{f(x+),f(x-)\}\) for all \(x\in\{x:\omega_{f}(x)<0\}\). Then we know that \(N(r,f\oplus a)\geq N(r,f)\). In general it is true that \(N(r,f\oplus a)\leq N(r,f)+N(r,a)\). So overall we have
\[0\geq N(r,f)-N(r,f\oplus a)\geq-N(r,a)=o(T(r,f)). \tag{2.13}\]
Now (2.10) follows by combining (2.12) and (2.13). Part (ii) follows from similar reasoning.
Assume next that \(\min\{a(x+),a(x-)\}>\max\{f(x+),f(x-)\}\) for all \(x\in\{x:\omega_{f}(x)<0\}\). Then we can see that \(0\leq N(r,f\oplus a)\leq N(r,a)=o(T(r,f))\). Now by subtracting \(N(r,f)\) from both sides of (2.12) we obtain (2.11). Part (iv) follows from similar reasoning.
By the proof of Corollary 2.10 we can see that the assumptions \(m(r,a)=o(T(r,f))\) and \(m\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)=o(T(r,f))\) of Theorem 2.9 follow from the assumption \(T(r,a)=o(T(r,f))\) of Corollary 2.10. However the converse is not true. For example, if we choose \(f(x)=-|x|+1\) and \(a(x)=\max\{-|x+2|+1,-|x-2|+1,0\}\) we can see that
\[m(r,a)=O(1)\quad\text{ and }\quad m\left(r,\frac{1_{\circ}}{f\oplus a}\oslash \right)=O(1).\]
On the other hand, \(N(r,a)=2(r-2)^{+}\) and \(T(r,f)=r\), which means \(T(r,a)\neq o(T(r,f))\). In addition, even though the assumption \(a(x)<f(x)\) for all \(x\in\{b:\omega_{f}(b)<0\}\) holds, we have
\[N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)=\begin{cases}0,&0\leq r<1,\\ 2r-2,&1\leq r\leq 3,\\ 3r-5,&r>3,\end{cases}\]
which means that
\[N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)=3T(r,f)+O(1).\]
This means that we cannot weaken the assumptions of Corollary 2.10 (i) to the same assumptions as in Theorem 2.9. The same can be said about Corollary 2.10 (iii) by considering the function \(f(x)=-|x|-2\) instead, and the same function for \(a(x)\) as previously.
**Remark 2.11**.: Theorem 2.9 and Corollary 2.10 also work on finite intervals. The proofs are identical, so they are not presented here. For more information on tropical Nevanlinna theory on finite intervals see [15].
## 3. The second main theorem with tropical hypersurfaces
Korhonen and Tohge proved the tropical version of Cartan's second main theorem.
**Theorem 3.1** ([12]).: _Let \(q\) and \(n\) be positive integers with \(q>n\), and let \(\varepsilon>0\). Given \(n+1\) tropical entire functions \(g_{0},\ldots,g_{n}\) without common roots, and linearly independent in Gondran-Minoux sense, let the \(q+1\) tropical linear combinations \(f_{0}\ldots,f_{q}\) of the \(g_{j}\) over the semi-ring \(\mathbb{T}\) be defined by_
\[f_{k}(x)=a_{0k}\otimes g_{0}(x)\oplus a_{1k}\otimes g_{1}(x)\oplus\cdots\oplus a_ {nk}\otimes g_{n}(x),\quad 0\leq k\leq q.\]
_Let \(\lambda=\mathrm{ddg}(\{f_{n+1},\ldots,f_{q}\})\) and_
\[L=\frac{f_{0}\otimes f_{1}\otimes\cdots\otimes f_{n}\otimes f_{n+1}\otimes \cdots\otimes f_{q}}{C_{\circ}(f_{0},f_{1},\ldots,f_{n})}\oslash\,.\]
_If the tropical holomorphic curve \(g\) of \(\mathbb{R}\) into \(\mathbb{TP}^{n}\) with reduced representation \(\mathbf{g}=(g_{0},\ldots,g_{n})\) is of hyper-order_
\[\rho_{2}:=\rho_{2}(\mathbf{g})<1,\]
_then_
\[(q-n-\lambda)T_{g}(r)\leq N\left(r,\frac{1_{\circ}}{L}\oslash\right)-N(r,L)+o \left(\frac{T_{g}(r)}{r^{1-\rho_{2}-\varepsilon}}\right),\]
_where \(r\) approaches infinity outside an exceptional set of finite logarithmic measure._
Cao and Zheng improved this result by extending it to tropical hypersurfaces and by weakening the growth condition.
**Theorem 3.2** ([2]).: _Let \(q\), \(n\) and \(d\) be positive integers such that \(q>M\), where \(M=\binom{n+d}{d}-1\). Let the tropical holomorphic curve \(f:\mathbb{R}\rightarrow\mathbb{TP}^{n}\) be tropical algebraically nondegenerate. Assume that tropical hypersurfaces \(V_{P_{0}},\ldots,V_{P_{q}}\) are defined by homogeneous tropical polynomials \(P_{0},\ldots,P_{q}\) with degrees \(d_{0},\ldots,d_{q}\), respectively, such that the least common multiple of \(d_{0},\ldots,d_{q}\) is \(d\). If \(\lambda=\mathrm{ddg}(\{P_{M+1}\circ f,\ldots,P_{q}\circ f\})\) and_
\[\limsup_{r\rightarrow\infty}\frac{\log T_{f}(r)}{r}=0,\]
_then_
\[(q-M-\lambda)T_{f}(r)\] \[\leq \sum_{j=0}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{\circ}}{P_{j} \circ f}\oslash\right)\] \[-\frac{1}{d}N\left(r,\frac{1_{\circ}}{C_{\circ}(P_{0}^{\otimes \frac{d}{d_{0}}}\circ f,\ldots,P_{M}^{\otimes\frac{d}{d_{M}}}\circ f)}\oslash \right)+o(T_{f}(r))\] \[= \sum_{j=M+1}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{\circ}}{P_{j} \circ f}\oslash\right)+o(T_{f}(r))\] \[\leq (q-M)T_{f}(r),\]
_where \(r\) approaches infinity outside an exceptional set of zero upper density measure. In the special case when \(\lambda=0\),_
\[(q-M)T_{f}(r)\] \[= \sum_{j=0}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{\circ}}{P_{j}\circ f }\oslash\right)\] \[-\frac{1}{d}N\left(r,\frac{1_{\circ}}{C_{\circ}(P_{0}^{\otimes \frac{d}{d_{0}}}\circ f,\ldots,P_{M}^{\otimes\frac{d}{d_{M}}}\circ f)}\oslash \right)+o(T_{f}(r))\] \[= \sum_{j=M+1}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{\circ}}{P_{j} \circ f}\oslash\right)+o(T_{f}(r)),\]
_where \(r\) approaches infinity outside an exceptional set of zero upper density measure._
In Theorem 3.2 the homogeneous tropical polynomials have integer exponents and constant coefficients. In this paper we will consider tropical homogeneous polynomials with non-negative real number exponents and tropical meromorphic coefficients. Such a polynomial can be written in the form
\[P(x,x_{0},\ldots,x_{n})=\bigoplus_{\alpha_{0}+\alpha_{1}+\cdots+\alpha_{n}=d}a _{\alpha_{0},\ldots,\alpha_{n}}(x)\otimes x_{0}^{\otimes\alpha_{0}}\otimes x _{1}^{\otimes\alpha_{1}}\otimes\cdots\otimes x_{n}^{\otimes\alpha_{n}},\]
where \(a_{\alpha_{0},\ldots,\alpha_{n}}(x)\) are tropical meromorphic functions such that \(a_{\alpha_{0},\ldots,\alpha_{n}}(x)\not\equiv 0_{\circ}\) for only finitely many \((\alpha_{0},\ldots,\alpha_{n})\in\mathbb{R}_{+}^{n+1}\) such that \(\alpha_{0}+\alpha_{1}+\cdots+\alpha_{n}=d\). If \(f=[f_{0}:\cdots:f_{n}]:\mathbb{R}\to\mathbb{TP}^{n}\) is a tropical holomorphic curve then we often consider the composition of a tropical homogeneous polynomial \(P\) and \(f\)
\[P(f)(x)=\bigoplus_{\alpha_{0}+\alpha_{1}+\cdots+\alpha_{n}=d}a_{\alpha_{0}, \ldots,\alpha_{n}}(x)\otimes f_{0}(x)^{\otimes\alpha_{0}}\otimes f_{1}(x)^{ \otimes\alpha_{1}}\otimes\cdots\otimes f_{n}(x)^{\otimes\alpha_{n}}.\]
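In classical notation \(\oplus\) is a maximum and each term \(a_{\alpha_{0},\ldots,\alpha_{n}}(x)\otimes f_{0}(x)^{\otimes\alpha_{0}}\otimes\cdots\otimes f_{n}(x)^{\otimes\alpha_{n}}\) evaluates to \(a_{\alpha_{0},\ldots,\alpha_{n}}(x)+\alpha_{0}f_{0}(x)+\cdots+\alpha_{n}f_{n}(x)\), so \(P(f)\) is a pointwise maximum of finitely many such expressions. The sketch below (ours, with made-up data) evaluates such a composition.

```python
# a sketch (not from the paper): evaluating P ∘ f for a tropical homogeneous
# polynomial, term by term, in ordinary (max, +) arithmetic.
def tropical_poly(terms):
    """terms: list of (coefficient_function, exponent_tuple) with exponents summing to d."""
    def P(fs, x):
        return max(a(x) + sum(al * fi(x) for al, fi in zip(alphas, fs))
                   for a, alphas in terms)
    return P

# degree-2 example in TP^1: P = 1 ⊗ x0^{⊗2} ⊕ x0 ⊗ x1 ⊕ (-1) ⊗ x1^{⊗2}
P = tropical_poly([(lambda x: 1.0, (2.0, 0.0)),
                   (lambda x: 0.0, (1.0, 1.0)),
                   (lambda x: -1.0, (0.0, 2.0))])
f = [lambda x: 0.0, lambda x: max(x, 0.0)]     # a holomorphic curve [f0 : f1]
print(P(f, 2.0))                               # max(1, 2, 3) = 3.0
```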
Next we will define
\[\psi(P,f):=\liminf_{r\to\infty}\frac{\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)} {dT_{f}(r)}, \tag{3.1}\]
and
\[\Psi(P,f):=\limsup_{r\to\infty}\frac{\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)} {dT_{f}(r)}, \tag{3.2}\]
for a homogeneous tropical polynomial \(P\) and a tropical holomorphic curve \(f\). These values will appear in many results in this paper.
The next theorem improves the results above by dropping the growth condition and by making the inequalities tighter.
**Theorem 3.3**.: _Let \(f:\mathbb{R}\to\mathbb{TP}^{n}\) be a non-constant tropical holomorphic curve and let \(P:\mathbb{TP}^{n}\to\mathbb{R}(\not\equiv 0_{\circ})\) be a tropical homogeneous polynomial of degree \(d\) with tropical meromorphic coefficients \(a_{0}(x),\ldots,a_{k}(x)\). If for all coefficients \(a_{j}\) of \(P\circ f\) we have \(N(r,a_{j})=o(T_{f}(r))\), then_
\[\psi(P,f)T_{f}(r)\leq\frac{1}{d}N\left(r,\frac{1_{\circ}}{P\circ f}\oslash \right)+o(T_{f}(r))\leq\Psi(P,f)T_{f}(r).\]
Proof.: Since \(f\) is a tropical holomorphic curve \(P\circ f\) can have poles only at the poles of the meromorphic coefficients of \(P\) and therefore \(N(r,P\circ f)=o(T_{f}(r))\). By the tropical Jensen formula we have
\[N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)\] \[=\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)-P(f)(0)+N(r,P\circ f)\] \[=\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)+o(T_{f}(r)).\]
We can see that
\[\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)\leq\Psi(P,f)dT_{f}(r)+o(T_{f}(r))\]
and
\[\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)\geq\psi(P,f)dT_{f}(r)+o(T_{f}(r)).\]
Therefore
\[\psi(P,f)T_{f}(r)\leq\frac{1}{d}N\left(r,\frac{1_{\circ}}{P\circ f}\oslash \right)+o(T_{f}(r))\leq\Psi(P,f)T_{f}(r).\]
The next lemma gives bounds for the values \(\psi(P,f)\) and \(\Psi(P,f)\).
**Lemma 3.4**.: _Let \(f:\mathbb{R}\to\mathbb{TP}^{n}\) be a non-constant tropical holomorphic curve and let \(P:\mathbb{TP}^{n}\to\mathbb{R}(\not\equiv 0_{\circ})\) be a homogeneous tropical polynomial with tropical meromorphic coefficients \(a_{1},\dots,a_{k}\). If for all coefficients \(a_{j}\) we have \(m(r,a_{j})=o(T_{f}(r))\), then_
\[0\leq\psi(P,f)\leq\Psi(P,f)\leq 1.\]
Proof.: Since \(P\) is homogeneous, we can write \(P\circ f\) in the form
\[P(f)(x)=\max_{\alpha_{0}+\dots+\alpha_{n}=d}\{a_{\alpha_{0},\dots,\alpha_{n}}( x)+\alpha_{0}f_{0}(x)+\dots+\alpha_{n}f_{n}(x)\},\]
where \(a_{\alpha_{0},\dots,\alpha_{n}}(x)\not\equiv 0_{\circ}\) for finitely many \((\alpha_{0},\dots,\alpha_{n})\in\mathbb{R}^{n+1}_{+}\) such that \(\alpha_{0}+\dots+\alpha_{n}=d\). Then since at least one coefficient is non-zero we have
\[P(f)(r) \leq\max_{\alpha_{0}+\dots+\alpha_{n}=d}\{\alpha_{0}f_{0}(r)+ \dots+\alpha_{n}f_{n}(r)\}+\max_{\alpha_{0}+\dots+\alpha_{n}=d}\{a_{\alpha_{ 0},\dots,\alpha_{n}}(r)\}\] \[\leq d\|f(r)\|+o(T_{f}(r)).\]
This implies that
\[\Psi(P,f) =\limsup_{r\to\infty}\frac{\frac{1}{2}\left(P(f)(r)+P(f)(-r) \right)}{dT_{f}(r)}\] \[\leq\limsup_{r\to\infty}\frac{\frac{1}{2}d\left(\|f(r)\|+\|f(-r) \|\right)+o(T_{f}(r))}{dT_{f}(r)}=1.\]
Next by convexity we obtain
\[\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)\] \[\geq\frac{1}{2}\left(\sum_{\sigma=\pm 1}\max_{\begin{subarray}{c} \alpha_{0}+\cdots+\alpha_{n}=d\\ a_{\alpha_{0},\ldots,\alpha_{n}}\not\equiv 0_{\circ}\end{subarray}}\{\alpha_{0}f_{0}( \sigma r)+\cdots+\alpha_{n}f_{n}(\sigma r)\}\right)\] \[+\frac{1}{2}\left(\sum_{\sigma=\pm 1}\min_{\begin{subarray}{c} \alpha_{0}+\cdots+\alpha_{n}=d\\ a_{\alpha_{0},\ldots,\alpha_{n}}\not\equiv 0_{\circ}\end{subarray}}\{a_{\alpha_{0}, \ldots,\alpha_{n}}(\sigma r)\}\right)\] \[\geq\max_{\begin{subarray}{c}\alpha_{0}+\cdots+\alpha_{n}=d\\ a_{\alpha_{0},\ldots,\alpha_{n}}\not\equiv 0_{\circ}\end{subarray}}\{\alpha_{0}f_{0}(0)+ \cdots+\alpha_{n}f_{n}(0)\}\] \[+\frac{1}{2}\left(\sum_{\sigma=\pm 1}\min_{\begin{subarray}{c} \alpha_{0}+\cdots+\alpha_{n}=d\\ a_{\alpha_{0},\ldots,\alpha_{n}}\not\equiv 0_{\circ}\end{subarray}}\{a_{ \alpha_{0},\ldots,\alpha_{n}}(\sigma r)\}\right)\] \[=\max_{\begin{subarray}{c}\alpha_{0}+\cdots+\alpha_{n}=d\\ a_{\alpha_{0},\ldots,\alpha_{n}}\not\equiv 0_{\circ}\end{subarray}}\{\alpha_{0}f_{0}( 0)+\cdots+\alpha_{n}f_{n}(0)\}+o(T_{f}(r)).\]
Then
\[\psi(P,f)=\liminf_{r\to\infty}\frac{\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)}{dT_{f}(r)}\geq\liminf_{r\to\infty}\frac{\max\limits_{\begin{subarray}{c}\alpha_{0}+\cdots+\alpha_{n}=d\\ a_{\alpha_{0},\ldots,\alpha_{n}}\not\equiv 0_{\circ}\end{subarray}}\{\alpha_{0}f_{0}(0)+\cdots+\alpha_{n}f_{n}(0)\}+o(T_{f}(r))}{dT_{f}(r)}=0.\]
If we do not assume \(m(r,a_{j})=o(T_{f}(r))\) in Lemma 3.4 the upper bound can be arbitrarily large, but the lower bound stays at \(0\).
Whenever \(\psi(P,f)=\Psi(P,f)\), we obtain an equality in Theorem 3.3. In fact, it is possible that \(\psi(P,f)=\Psi(P,f)=t\) for any \(t\in[0,1]\). That can be seen by Lemma 4.8.
It is also possible that \(\psi(P,f)<\Psi(P,f)\). To see that let us first define two tropical entire functions \(f\) and \(g\). Set \(f(x)=g(x)=1\) for \(x\in(-\infty,0]\). Next set \(g(x)=x+1\) on the interval \((0,2]\), \(f(x)=1\) on the interval \((0,1]\) and \(f(x)=2x-1\) on the interval \((1,3]\). Now we can see that \(\frac{f(0)}{g(0)}=1\), \(\frac{f(1)}{g(1)}=\frac{1}{2}\) and \(\frac{f(2)}{g(2)}=1\). By defining \(g(x)=7x-11\) on the interval \((2,4]\), we again see that \(\frac{f(3)}{g(3)}=\frac{1}{2}\). We can continue this pattern on the whole real line to obtain functions \(f\) and \(g\) such that \(\frac{f(n)}{g(n)}=1\) for all even \(n\) and \(\frac{f(n)}{g(n)}=\frac{1}{2}\) for all odd \(n\) and also \(g(x)\geq f(x)\) for all \(x\in\mathbb{R}\). Since \(f\) and \(g\) are tropical entire functions we can define a tropical holomorphic curve \(h(x)=(f(x),g(x))\) and a homogeneous tropical polynomial \(P(x,y)=x\oplus(0_{\circ}\otimes y)=x\). Since \(g(x)\geq f(x)\) for all \(x\in\mathbb{R}\) we have \(T_{h}(r)=\frac{1}{2}(g(r)+1)\). Now we can see that
\[\psi(P,h)=\liminf_{r\to\infty}\frac{\frac{1}{2}\left(P(h)(r)+P(h)(-r)\right)}{ T_{h}(r)}=\liminf_{r\to\infty}\frac{f(r)+1}{g(r)+1}=\frac{1}{2}\]
and similarly
\[\Psi(P,h)=\limsup_{r\to\infty}\frac{\frac{1}{2}\left(P(h)(r)+P(h)(-r)\right)}{T_{h} (r)}=\limsup_{r\to\infty}\frac{f(r)+1}{g(r)+1}=1.\]
In a similar way one can obtain any values for \(\psi(P,h)\) and \(\Psi(P,h)\) on the interval \([0,1]\) as long as \(\psi(P,h)\leq\Psi(P,h)\). See Figure 5 for the case when \(\psi(P,h)=\frac{2}{3}\) and \(\Psi(P,h)=1\).
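One concrete way to continue the pattern indefinitely (a sketch of ours; the greedy slope updates below are merely one admissible choice) lets \(g\) change slope at even integers and \(f\) at odd ones, each new slope being forced by the next target value.

```python
# a sketch (not from the paper): continuing the construction so that
# f(n) = g(n) at even n and f(n) = g(n)/2 at odd n, with both functions
# convex (hence tropical entire when interpolated linearly) and g >= f.
f = {0: 1.0, 1: 1.0}
g = {0: 1.0, 1: 2.0}
sf, sg = 2.0, 1.0                        # current slopes: f on (1,3], g on (0,2]
for n in range(2, 11):
    f[n] = f[n - 1] + sf
    g[n] = g[n - 1] + sg
    if n % 2 == 0:
        sg = 2.0 * (f[n] + sf) - g[n]    # forces g(n+1) = 2 * f(n+1)
    else:
        sf = g[n] + sg - f[n]            # forces f(n+1) = g(n+1)
for n in sorted(f):
    print(n, f[n], g[n], f[n] / g[n])    # ratios alternate 1, 1/2, 1, 1/2, ...
```

The ratios \(f(n)/g(n)\) alternate between \(1\) and \(\frac{1}{2}\), so the liminf and limsup computations above give \(\psi(P,h)=\frac{1}{2}\) and \(\Psi(P,h)=1\).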
The next corollary follows directly from Theorem 3.3.
**Corollary 3.5**.: _Let \(q\) and \(n\) be positive integers, \(f:\mathbb{R}\to\mathbb{TP}^{n}\) a tropical holomorphic curve, \(P_{0},\ldots,P_{q}\) homogeneous tropical polynomials with tropical meromorphic coefficients and degrees \(d_{0},\ldots,d_{q}\), respectively. If for all the coefficients \(a_{j}\) of all the polynomials \(P_{0},\ldots,P_{q}\) we have \(N(r,a_{j})=o(T_{f}(r))\), then_
\[\sum_{j=0}^{q}\psi(P_{j},f)T_{f}(r)\leq\sum_{j=0}^{q}\frac{1}{d_{j}}N\left(r, \frac{1_{\circ}}{P_{j}\circ f}\oslash\right)+o(T_{f}(r))\leq\sum_{j=0}^{q}\Psi( P_{j},f)T_{f}(r).\]
If all of the tropical homogeneous polynomials have integer exponents and constant coefficients, then from Theorem 3.2 it follows that
\[(q-M-\lambda)T_{f}(r)\] \[\leq \sum_{j=M+1}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{\circ}}{P_{j} \circ f}\oslash\right)+o(T_{f}(r))\] \[\leq (q-M)T_{f}(r), \tag{3.3}\]
where \(r\) approaches infinity outside an exceptional set of zero upper density measure. Let \(Q=q-M\). Then by renaming \(P_{M+1},\ldots,P_{q}\) to \(P_{0},\ldots,P_{Q-1}\) we can write (3.3) in the form
\[(Q-\lambda)T_{f}(r)\leq\sum_{j=0}^{Q-1}\frac{1}{d_{j}}N\left(r,\frac{1_{\circ}} {P_{j}\circ f}\oslash\right)+o(T_{f}(r))\leq QT_{f}(r),\]
where \(r\) approaches infinity outside an exceptional set of zero upper density measure. On the other hand, Corollary 3.5 implies that
\[\sum_{j=0}^{Q-1}\psi(P_{j},f)T_{f}(r)\leq\sum_{j=0}^{Q-1}\frac{1}{d_{j}}N\left( r,\frac{1_{\circ}}{P_{j}\circ f}\oslash\right)+o(T_{f}(r))\leq\sum_{j=0}^{Q-1} \Psi(P_{j},f)T_{f}(r).\]
If \(P_{j}\circ f\) is complete, then we can write it in the form
\[P_{j}(f)(x)=\max_{i_{0}+\cdots+i_{n}=d_{j}}\{c_{i_{0},\ldots,i_{n}}+i_{0}f_{0} (x)+\cdots+i_{n}f_{n}(x)\},\]
where all the coefficients \(c_{i_{0},\ldots,i_{n}}\) are real numbers. From this we can see that
\[\min_{i_{0}+\cdots+i_{n}=d_{j}}\{c_{i_{0},\ldots,i_{n}}\}+d_{j} \max\{f_{0}(x),\ldots,f_{n}(x)\}\] \[\leq P_{j}(f)(x)\] \[\leq\max_{i_{0}+\cdots+i_{n}=d_{j}}\{c_{i_{0},\ldots,i_{n}}\}+d_{ j}\max\{f_{0}(x),\ldots,f_{n}(x)\}.\]
Thus when \(P_{j}\circ f\) is complete, we have
\[\lim_{r\to\infty}\frac{\frac{1}{2}\left(P_{j}(f)(r)+P_{j}(f)(-r)\right)}{d_{j} T_{f}(r)}=\lim_{r\to\infty}\frac{d_{j}T_{f}(r)+O(1)}{d_{j}T_{f}(r)}=1.\]
Therefore \((Q-\lambda)T_{f}(r)\leq\sum_{j=0}^{Q-1}\psi(P_{j},f)T_{f}(r)\) and Lemma 3.4 implies that \(\sum_{j=0}^{Q-1}\Psi(P_{j},f)\leq Q\). In addition, any time we have \(0<\psi(P_{j},f)<1\) or \(0<\Psi(P_{j},f)<1\) for any \(j\), we have \(Q-\lambda<\sum_{j=0}^{Q-1}\psi(P_{j},f)\) or \(\sum_{j=0}^{Q-1}\Psi(P_{j},f)<Q\), respectively. This shows that the bounds of Corollary 3.5 are tighter than those in Theorem 3.2. By introducing a growth condition we can also obtain the following version of the second main theorem for tropical hypersurfaces.
**Corollary 3.6**.: _Let \(q\), \(n\) and \(d\) be positive integers such that \(q>M\), where \(M=\binom{n+d}{d}-1\). Let the tropical holomorphic curve \(f:\mathbb{R}\to\mathbb{TP}^{n}\) be tropical algebraically nondegenerate. Assume that tropical hypersurfaces \(V_{P_{0}},\ldots,V_{P_{q}}\) are defined by homogeneous tropical polynomials \(P_{0},\ldots,P_{q}\) with integer exponents and degrees \(d_{0},\ldots,d_{q}\), respectively, such that the least common multiple of \(d_{0},\ldots,d_{q}\) is equal to \(d\). If \(\lambda=\mathrm{ddg}(\{P_{M+1}\circ f,\ldots,P_{q}\circ f\})\) and_
\[\limsup_{r\to\infty}\frac{\log T_{f}(r)}{r}=0,\]
_then_
\[\begin{split}&\sum_{j=M+1}^{q}\psi(P_{j},f)T_{f}(r)\\ \leq&\sum_{j=0}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{ \circ}}{P_{j}\circ f}\oslash\right)\\ &-\frac{1}{d}N\left(r,\frac{1_{\circ}}{C_{\circ}(P_{0}^{\otimes \frac{d}{d_{0}}}\circ f,\ldots,P_{M}^{\otimes\frac{d}{d_{M}}}\circ f)}\oslash \right)+o(T_{f}(r))\\ &=\sum_{j=M+1}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{\circ}}{P_{j} \circ f}\oslash\right)+o(T_{f}(r))\\ &\leq\sum_{j=M+1}^{q}\Psi(P_{j},f)T_{f}(r),\end{split} \tag{3.4}\]
_where \(r\) approaches infinity outside an exceptional set of zero upper density measure._
Proof.: Theorem 3.2 implies that
\[\begin{split}&\sum_{j=0}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{ \circ}}{P_{j}\circ f}\oslash\right)-\frac{1}{d}N\left(r,\frac{1_{\circ}}{C_{ \circ}(P_{0}^{\otimes\frac{d}{d_{0}}}\circ f,\ldots,P_{M}^{\otimes\frac{d}{d _{M}}}\circ f)}\oslash\right)+o(T_{f}(r))\\ &=\sum_{j=M+1}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{\circ}}{P_{j} \circ f}\oslash\right)+o(T_{f}(r))\end{split}\]
and by Corollary 3.5 we obtain
\[\begin{split}\sum_{j=M+1}^{q}\psi(P_{j},f)T_{f}(r)& \leq\sum_{j=M+1}^{q}\frac{1}{d_{j}}N\left(r,\frac{1_{\circ}}{P_{j} \circ f}\oslash\right)+o(T_{f}(r))\\ &\leq\sum_{j=M+1}^{q}\Psi(P_{j},f)T_{f}(r).\end{split}\]
The growth condition from Theorem 3.2 and Corollary 3.6 cannot be dropped due to the equality in (3.4). This can be seen in the following way. Let \(f(x)=(0,e_{2}(x))\) and \(P_{0}(x_{0},x_{1})=x_{0}\oplus x_{1}=P_{1}(x_{0},x_{1})=P_{2}(x_{0},x_{1})\). Here \(e_{2}(x)\) is the tropical hyper-exponential function. Now we have \(n=1\), \(d_{0}=d_{1}=d_{2}=d=1\), \(M=1\) and \(q=2>M\). We can see that
\[\begin{split}& C_{\circ}(P_{0}^{\otimes\frac{d}{d_{0}}}\circ f,P_{1}^{\otimes\frac{d}{d_{1}}}\circ f)\\ &=C_{\circ}(P_{0}\circ f,P_{1}\circ f)\\ &=e_{2}(x)^{+}+e_{2}(x+1)^{+}\\ &=e_{2}(x)+2e_{2}(x)\\ &=3e_{2}(x).\end{split}\]
Now, on one hand,
\[\sum_{j=0}^{2}N\left(r,\frac{1_{\circ}}{P_{j}\circ f}\oslash\right)-N \left(r,\frac{1_{\circ}}{C_{\circ}(P_{0}\circ f,P_{1}\circ f)}\oslash\right)\] \[=3N\left(r,\frac{1_{\circ}}{e_{2}(x)}\oslash\right)-3N\left(r, \frac{1_{\circ}}{e_{2}(x)}\oslash\right)\] \[\equiv 0,\]
but on the other hand,
\[\sum_{j=M+1}^{q}N\left(r,\frac{1_{\circ}}{P_{j}\circ f}\oslash\right)=N\left(r,\frac{1_{\circ}}{e_{2}(x)}\oslash\right)=T\left(r,\frac{1_{\circ}}{e_{2}(x)} \oslash\right).\]
Since it is evidently true that \(T\left(r,\frac{1_{\circ}}{e_{2}(x)}\oslash\right)\neq o(T_{f}(r))\), this shows that we cannot drop the growth condition. In this case also \(\sum_{j=M+1}^{q}\psi(P_{j},f)=\sum_{j=M+1}^{q}\Psi(P_{j},f)=1\), so changing the middle equality in (3.4) to an inequality will not help either.
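To make the failure quantitative, note (a quick check of our own) that \(e_{2}(x)>0\) for all \(x\in\mathbb{R}\), so \(\|\mathbf{f}(x)\|=\max\{0,e_{2}(x)\}=e_{2}(x)\) and hence \(T_{f}(r)=\frac{1}{2}(e_{2}(r)+e_{2}(-r))\). Since \(e_{2}\) is tropical entire, the tropical Jensen formula gives

\[T\left(r,\frac{1_{\circ}}{e_{2}(x)}\oslash\right)=T(r,e_{2})-e_{2}(0)=\frac{1}{2}\left(e_{2}(r)+e_{2}(-r)\right)-1=T_{f}(r)-1,\]

so the two characteristics agree up to a bounded term and their ratio tends to \(1\) rather than \(0\).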
In the case when the dimension of the tropical projective space is equal to one we have the following version of the second main theorem.
**Corollary 3.7**.: _Let \(f=[f_{0}:f_{1}]:\mathbb{R}\to\mathbb{TP}^{1}\) be a tropical holomorphic curve, \(a=[a_{0}:a_{1}]\) a real constant and_
\[P(f)(x)=(a_{0}\otimes f_{0}(x))\oplus(a_{1}\otimes f_{1}(x)). \tag{3.5}\]
_Then_
\[T(r,f)=N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)+O(1)\]
_for all \(a\in\mathbb{R}\)._
Proof.: By the tropical Jensen formula, and since \(P\circ f\) is tropical entire so that \(N(r,P\circ f)=0\), we have
\[N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)\] \[=N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)-N(r,P\circ f)\] \[=\frac{1}{2}\sum_{\sigma=\pm 1}P(f)(\sigma r)-P(f)(0).\]
Furthermore,
\[\min\{a_{0},a_{1}\}+\|f(x)\|\leq(a_{0}\otimes f_{0}(x))\oplus(a_{1}\otimes f _{1}(x))\leq\max\{a_{0},a_{1}\}+\|f(x)\|,\]
which means that \(P(f)(x)=\|f(x)\|+O(1)\). By combining the above equations we obtain
\[T(r,f)=T_{f}(r)+O(1)=N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)+O(1).\]
We can see a connection between \(N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)\) and \(N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)\) in the following way. If \(P(f)(x)=(a_{0}\otimes f_{0}(x))\oplus(a_{1}\otimes f_{1}(x))\) then by the tropical Jensen formula
\[\begin{split}N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)&=\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)+O(1)\\ &=\frac{1}{2}\left((a_{0}\otimes f_{0}(r))\oplus(a_{1}\otimes f_{1}(r))+(a_{0}\otimes f_{0}(-r))\oplus(a_{1}\otimes f_{1}(-r))\right)+O(1)\\ &=\frac{1}{2}\left((a_{0}\oslash a_{1})\oplus(f_{1}(r)\oslash f_{0}(r))+(a_{0}\oslash a_{1})\oplus(f_{1}(-r)\oslash f_{0}(-r))\right)+\frac{1}{2}\left(f_{0}(r)+f_{0}(-r)\right)+O(1)\\ &=\frac{1}{2}\left(a\oplus f(r)+a\oplus f(-r)+f_{0}(r)+f_{0}(-r)\right)+O(1)\\ &=N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)-N(r,f\oplus a)+N\left(r,\frac{1_{\circ}}{f_{0}}\oslash\right)-N(r,f_{0})+O(1)\\ &=N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)+N\left(r,f\right)-N(r,f\oplus a)+O(1),\end{split}\]

where \(f=f_{1}\oslash f_{0}\), \(a=a_{0}\oslash a_{1}\), and all additive constants have been absorbed into the \(O(1)\) terms.
If \(a\) is a tropical meromorphic function such that \(m(r,a)=o(T(r,f))\) then similarly we can see that
\[N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)=N\left(r,\frac{1_{\circ}}{f \oplus a}\oslash\right)+N\left(r,f\right)-N(r,f\oplus a)+o(T(r,f)).\]
This means that Theorem 3.3 implies Theorem 2.9 for tropical meromorphic functions.
## 4. Tropical Nevanlinna inverse problem
The defect for a tropical meromorphic function \(f\) and a constant \(a\in\mathbb{R}\),
\[\delta(a,f):=1-\limsup_{r\to\infty}\frac{N\left(r,\frac{1_{\circ}}{f\oplus a} \oslash\right)}{T(r,f)},\]
was first considered by Laine and Tohge [14]. Here we will consider the defect in the case where both \(f\) and the target \(a\) are tropical meromorphic functions. With Theorem 2.9 we can see that if
\(m(r,a)=o(T(r,f))\) and \(m\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)=o(T(r,f))\), then
\[\begin{split}\delta(a,f)&=1-\limsup_{r\to\infty} \frac{N\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)}{T(r,f)}\\ &=1-\limsup_{r\to\infty}\frac{T(r,f)-N(r,f)+N(r,f\oplus a)}{T(r, f)}\\ &=1-\limsup_{r\to\infty}\left(1+\frac{N(r,f\oplus a)-N(r,f)}{T(r, f)}\right)\\ &=-\limsup_{r\to\infty}\frac{N(r,f\oplus a)-N(r,f)}{T(r,f)}\\ &=\liminf_{r\to\infty}\frac{N(r,f)-N(r,f\oplus a)}{T(r,f)}. \end{split} \tag{4.1}\]
From this formula we can see that in some sense the defect \(\delta(a,f)\) measures how many of the poles of \(f\) have a value less than or equal to \(a\). It is good to note that the term \(N(r,f)-N(r,f\oplus a)\) also appears in Theorem 2.9. As we will see later on, this form of the defect will often be very useful.
In classical Nevanlinna theory the defect has the properties \(0\leq\delta(a,f)\leq 1\) and
\[\sum_{a\in\mathbb{C}\cup\{\infty\}}\delta(a,f)\leq 2.\]
In the tropical context the defect does have the property \(0\leq\delta(a,f)\leq 1\), but it is possible that
\[\sum_{a\in\mathbb{T}}\delta(a,f)=\infty.\]
In fact
\[\sum_{a\in\mathbb{T}}\delta(a,f)<\infty\]
if and only if \(\delta(a,f)=0\) for all \(a\in\mathbb{T}\). That is because \(\delta(a,f)\leq\delta(b,f)\) for all \(a\leq b\). In the tropical context it makes sense to consider a single target value in the inverse problem since, unlike in classical Nevanlinna theory, in tropical Nevanlinna theory the second main theorem can be stated for a single target value. Because of this we obtain the following versions of the inverse problem.
**Theorem 4.1**.: _For all \(\delta\in[0,1]\) there exists a tropical rational function \(f\) and a constant \(a\in\mathbb{R}\) such that \(\delta(a,f)=\delta\)._
**Theorem 4.2**.: _There exists a tropical meromorphic function \(f\) such that for all \(\delta\in[0,1]\) there exists \(a\in\mathbb{R}\) such that \(\delta(a,f)=\delta\)._
Example 4.3 and Example 4.7 will prove the above theorems. In Theorem 4.1 it is interesting to note that, unlike in classical Nevanlinna theory, here we can solve the inverse problem with just a tropical rational function. Theorem 4.2 cannot be solved with a rational function, but in some sense it resembles the classical inverse problem more, since we may choose multiple different targets for a single function.
**Example 4.3**.: Let \(\alpha,\beta>0\) and define
\[f(x)=\max\left\{-\alpha\left|x-\frac{1}{\alpha}\right|+1,-\beta\left|x+\frac{2 }{\beta}\right|+2\right\}.\]
Now \(f\) has a pole of multiplicity \(2\alpha\) at \(\frac{1}{\alpha}\) and a pole of multiplicity \(2\beta\) at \(-\frac{2}{\beta}\). It is also true that \(f(\frac{1}{\alpha})=1\) and \(f(-\frac{2}{\beta})=2\). Clearly now \(m(r,f)=O(1)\) and thus by (4.1) we have
\[\delta(x,f)=\begin{cases}1,&x\geq 2,\\ \frac{\alpha}{\alpha+\beta},&1\leq x<2,\\ 0,&x<1.\end{cases}\]
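To see where the middle value comes from, here is a short computation of our own illustrating (4.1). For \(1\leq x<2\), only the pole at \(\frac{1}{\alpha}\), whose value is \(1\), is flattened by the target \(x\), while the pole at \(-\frac{2}{\beta}\) survives. Since \(m(r,f)=O(1)\), we get

\[\delta(x,f)=\lim_{r\to\infty}\frac{\frac{1}{2}\cdot 2\alpha\left(r-\frac{1}{\alpha}\right)}{\frac{1}{2}\left(2\alpha\left(r-\frac{1}{\alpha}\right)+2\beta\left(r-\frac{2}{\beta}\right)\right)}=\frac{\alpha}{\alpha+\beta},\]

which, for instance, equals \(\frac{1}{4}\) when \(\alpha=1\) and \(\beta=3\).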
In general, if we think of \(\delta(x,f):\mathbb{R}\to[0,1]\) as a function of \(x\) for some tropical meromorphic function \(f\), then \(\delta(x,f)\) is a non-decreasing function. That is because, in the form (4.1), we can see that larger values of \(a\) flatten more and more poles, so \(N(r,f\oplus a)\) becomes smaller and smaller. The target value \(a\) cannot create poles, so \(N(r,f\oplus a)\) cannot increase as \(a\) is made bigger. We also have the following result concerning the defect.
**Theorem 4.4**.: _Let \(f\) be a tropical meromorphic function. Define the function \(\delta(x,f):\mathbb{R}\to[0,1]\) as follows_
\[\delta(x,f)=1-\limsup_{r\to\infty}\frac{N\left(r,\frac{1_{\circ}}{f\oplus x} \oslash\right)}{T(r,f)}.\]
_If \(x_{0}\in\mathbb{R}\) is not an accumulation point of \(\{f(a):\omega_{f}(a)<0\}\), then \(\delta(x,f)\) is locally constant at \(x_{0}\), i.e. there exists \(a<b\) such that \(x_{0}\in[a,b]\) and \(\delta(x,f)\) is constant for all \(x\in[a,b]\)._
Proof.: If \(f\) does not have any poles, then \(\delta(x,f)\equiv 0\). If all of the poles of \(f\) have the same value \(c\) then for all \(x_{0}<c\) we have \(\delta(x,f)=0\) for all \(x\leq x_{0}\) and for all \(x_{0}\geq c\) we have \(\delta(x,f)=\delta(c,f)\) for all \(x\geq x_{0}\). So in that case \(\delta(x,f)\) is locally constant for all \(x\in\mathbb{R}\). Therefore we can assume that \(f\) has at least two poles with different values.
If \(x_{0}<\inf\{f(a):\omega_{f}(a)<0\}\), then \(\delta(x,f)=0\) for all \(x\leq x_{0}\). Similarly if \(x_{0}\geq s:=\sup\{f(a):\omega_{f}(a)<0\}\), then \(\delta(x,f)=\delta(s,f)\) for all \(x\geq x_{0}\). If \(\inf\{f(a):\omega_{f}(a)<0\}\leq x_{0}<\sup\{f(a):\omega_{f}(a)<0\}\) and \(x_{0}\) is not an accumulation point of \(\{f(a):\omega_{f}(a)<0\}\), then we can find two pole values \(c_{1}<c_{2}\) such that \(x_{0}\in[c_{1},c_{2}]\) and there is no pole of \(f\) with value \(c\in(c_{1},c_{2})\). Now by (4.1) we can see that the function \(\delta(x,f)\) is constant on \([c_{1},c_{2})\); if \(x_{0}=c_{2}\), the same argument applied to the gap above \(c_{2}\) produces an interval \([c_{2},b]\) on which \(\delta(x,f)\) is constant.
Sometimes it is possible that for a tropical meromorphic function \(f\) the defect \(\delta(x,f)\) is locally constant at the accumulation points of \(\{f(a):\omega_{f}(a)<0\}\). For
example, if \(\inf\{f(a):\omega_{f}(a)<0\}\) or \(\sup\{f(a):\omega_{f}(a)<0\}\) is an accumulation point, then by the proof of Theorem 4.4 we can see that \(\delta(x,f)\) is still locally constant at those points. Later on in Example 4.7 we will see that defect is not always locally constant at some accumulation points.
We can see that when thought of as a function of \(x\), the defect \(\delta(x,f)\) seems to change value only at the values of the poles of \(f\), even though \(N\left(r,\frac{1_{\circ}}{f\oplus x}\oslash\right)\) is counting roots of \(f\) and not poles. The interaction that the target value has with poles is more straightforward than with roots. When the target value goes above the value of a pole, the pole is flattened completely. With roots, depending on the situation, when the target value goes above a root we get one or two new roots such that the total multiplicity of the roots stays the same. Only when the target value goes over a pole is some contribution from the roots lost. In fact the lost contribution is exactly equal to the multiplicity of the pole which was flattened. This is evident when the defect is written in the form (4.1), since the target value can only create roots but not poles. Figure 7 shows, from left to right, the situations where one root is created, where two roots are created, and where we lose a contribution from the roots equal to the multiplicity of the flattened pole.
For a tropical meromorphic function \(f\), if \(\{f(a):\omega_{f}(a)<0\}\) does not have any accumulation points, then \(\delta(x,f)\) is a piecewise constant function by Theorem 4.4. In the next example we will construct a tropical meromorphic function \(f\) such that the defect \(\delta(x,f)\) is a piecewise constant function of \(x\) with infinitely many distinct values.
**Example 4.5**.: Define the sequence \((a_{n})_{n\in\mathbb{N}}\) as follows. First make every other element equal to \(1\)
\[(a_{n})=(1,\_,1,\_,1,\_,1,\_,1,\_,1,\_,1,\_,1,\_,1,\_,1,\_,1,\_,1,\_,1,\_,1, \_,1,\_,1,\_,\ldots).\]
Then in the remaining blank spaces, starting from \(2\), make every other number equal to \(2\)
\[(a_{n})=(1,2,1,\_,1,2,1,\_,1,2,1,\_,1,2,1,\_,1,2,1,\_,1,2,1,\_,1,2,1,\_,\ldots).\]
Then every other blank space becomes \(3\) and so on
\[(a_{n})=(1,2,1,3,1,2,1,\_,1,2,1,3,1,2,1,\_,1,2,1,3,1,2,1,\_,1,2,1,3,\ldots),\]
\[(a_{n})=(1,2,1,3,1,2,1,4,1,2,1,3,1,2,1,\_,1,2,1,3,1,2,1,4,1,2,1,3,\ldots),\]
\[(a_{n})=(1,2,1,3,1,2,1,4,1,2,1,3,1,2,1,5,1,2,1,3,1,2,1,4,1,2,1,3,\ldots).\]
Figure 7. Different interactions between a root and the target value.
In the first step of the construction of the sequence we put \(a_{2n-1}=1\) for all \(n\in\mathbb{N}\). Next we put \(a_{2(2n-1)}=2=a_{2n-1}+1\), then \(a_{2^{2}(2n-1)}=3=a_{2(2n-1)}+1\), \(a_{2^{3}(2n-1)}=4=a_{2^{2}(2n-1)}+1\) and in general in the \((k+1)\)th step we put \(a_{2^{k}(2n-1)}=k+1=a_{2^{k-1}(2n-1)}+1\) for all \(n\in\mathbb{N}\). From this we can see that \(a_{2n}=a_{n}+1\) for all \(n\in\mathbb{N}\).
Then define \(f(x)=\max_{n\in\mathbb{N}}\left\{-\left|x-b(n)\right|+a_{n}\right\}\), where
\[b(n)=2\sum_{k=1}^{n}a_{k}-a_{n}=\sum_{k=1}^{n}a_{k}+\sum_{k=1}^{n-1}a_{k}.\]
Now \(f(x)\) is a tropical meromorphic function such that it has a pole of multiplicity \(2\) and value \(a_{n}\) at \(b(n)\) for every \(n\in\mathbb{N}\). In addition \(f(b(n)\pm a_{n})=0\) for every \(n\in\mathbb{N}\).
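For orientation, the first few data of this construction are (our own tabulation)

\[b(1)=1,\quad b(2)=4,\quad b(3)=7,\quad b(4)=11,\quad b(5)=15,\]

and in general \(b(n+1)-b(n)=a_{n}+a_{n+1}\), so consecutive tents meet exactly at height \(0\), consistent with \(f(b(n)\pm a_{n})=0\).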
Next we will examine the quantity \(b(n)\). We can see that
\[\sum_{k=1}^{n}a_{k} =\sum_{k=1}^{\left[\frac{n+1}{2}\right]}a_{2k-1}+\sum_{k=1}^{ \left[\frac{n}{2}\right]}a_{2k}\] \[=\left[\frac{n+1}{2}\right]+\sum_{k=1}^{\left[\frac{n}{2}\right]} (1+a_{k})\] \[=\left[\frac{n+1}{2}\right]+\left[\frac{n}{2}\right]+\sum_{k=1}^{ \left[\frac{n}{2}\right]}a_{k}\] \[=n+\sum_{k=1}^{\left[\frac{n}{2}\right]}a_{k}\] \[=n+\left[\frac{n}{2}\right]+\sum_{k=1}^{\left[\frac{n}{2^{2}} \right]}a_{k}\] \[\vdots\] \[=\sum_{k=1}^{\infty}\left[\frac{n}{2^{k-1}}\right].\]
The above sum is a finite sum, because for large enough \(k\) we have \(\left[\frac{n}{2^{k-1}}\right]=0\). In fact we can see that \(\left[\frac{n}{2^{k-1}}\right]\geq 1\) whenever \(k\leq\log_{2}n+1\). Therefore
\[\sum_{k=1}^{n}a_{k}=\sum_{k=1}^{\infty}\left[\frac{n}{2^{k-1}}\right]=\sum_{k=1 }^{\left[\log_{2}n\right]+1}\left[\frac{n}{2^{k-1}}\right].\]
We can see that
\[\sum_{k=1}^{\left[\log_{2}n\right]+1}\left[\frac{n}{2^{k-1}}\right] \leq\sum_{k=1}^{\left[\log_{2}n\right]+1}\frac{n}{2^{k-1}}\] \[=\left(2-\frac{1}{2^{\left[\log_{2}n\right]}}\right)n\] \[\leq\left(2-\frac{1}{2^{\log_{2}n+1}}\right)n\] \[=\left(2-\frac{1}{2n}\right)n=2n-\frac{1}{2}.\]
Similarly
\[\sum_{k=1}^{\left[\log_{2}n\right]+1}\left[\frac{n}{2^{k-1}}\right] \geq\sum_{k=1}^{\left[\log_{2}n\right]+1}\left(\frac{n}{2^{k-1}}-1\right)\] \[=\left(2-\frac{1}{2^{\left[\log_{2}n\right]}}\right)n-\left[ \log_{2}n\right]-1\] \[\geq 2n-2-\left[\log_{2}n\right]-1=2n-\left[\log_{2}n\right]-3,\]

where the last inequality uses \(n\leq 2^{[\log_{2}n]+1}\).
So overall
\[2n-\left[\log_{2}n\right]-3\leq\sum_{k=1}^{n}a_{k}\leq 2n-\frac{1}{2}.\]
From this we obtain
\[4n-\left[\log_{2}n\right]-\left[\log_{2}(n-1)\right]-8\leq b(n)\leq 4n-3.\]
Define
\[m(r):=\max\left\{n\in\mathbb{N}:b(n)\leq r\right\}.\]
We can see that
\[r\leq b(m(r)+1)\leq 4m(r)+1\iff m(r)\geq\frac{r-1}{4}\]
and
\[r\geq b(m(r))\geq 4m(r)-\left[\log_{2}m(r)\right]-\left[\log_{2}(m(r)-1) \right]-8\geq 2m(r)-8,\]
which means that \(\frac{r-1}{4}\leq m(r)\leq\frac{r+8}{2}\) and thus \(m(r)=O(r)\). We can also see that
\[\sum_{n=1}^{k}b(n)\leq\sum_{n=1}^{k}(4n-3)=2k(k+1)-3k=2k^{2}-k\]
and
\[\sum_{n=1}^{k}b(n) \geq\sum_{n=1}^{k}(4n-[\log_{2}n]-[\log_{2}(n-1)]-8)\] \[=2k(k+1)-8k-\sum_{n=1}^{k}\left([\log_{2}n]+[\log_{2}(n-1)]\right)\] \[=2k^{2}-6k-\sum_{n=1}^{k}\left([\log_{2}n]+[\log_{2}(n-1)]\right).\]
Therefore
\[\sum_{n=1}^{k}b(n)=2k^{2}+O(k\log k).\]
Then the counting function for \(f\) is
\[N(r,f) =\sum_{n=1}^{m(r)}\left(r-b(n)\right)\] \[=rm(r)-\sum_{n=1}^{m(r)}b(n)\] \[=rm(r)-2m(r)^{2}+O(m(r)\log m(r))\] \[=m(r)(r-2m(r))+O(r\log r).\]
Also
\[N(r,f)-N(r,f\oplus 1)=\sum_{n=1}^{\left[\frac{m(r)+1}{2}\right]}(r-b(2n-1))\]
and in general since \(N(r,f)-N(r,f\oplus a)\) is counting only the poles of \(f\) that have a value of \(a\) or less, we can see that
\[N(r,f)-N(r,f\oplus a)=\sum_{j=1}^{k}\sum_{n=1}^{\left[\frac{m(r)}{2^{j}}+ \frac{1}{2}\right]}(r-b(2^{j-1}(2n-1))),\]
whenever \(k\) is a positive integer and \(a\in[k,k+1)\). Now
\[\sum_{n=1}^{\left[\frac{m(r)}{2^{j}}+\frac{1}{2}\right]}b(2^{j-1} (2n-1))\] \[=\sum_{n=1}^{\left[\frac{m(r)}{2^{j}}+\frac{1}{2}\right]}\left(4 \cdot 2^{j-1}(2n-1)+O(\log n)\right)\] \[=4\cdot 2^{j-1}\left[\frac{m(r)}{2^{j}}+\frac{1}{2}\right]\left( \left[\frac{m(r)}{2^{j}}+\frac{1}{2}\right]+1\right)+O(r\log r)\]
and
\[\sum_{j=1}^{k}\sum_{n=1}^{\left[\frac{m(r)}{2^{j}}+\frac{1}{2}\right]} b(2^{j-1}(2n-1))\] \[\leq\sum_{j=1}^{k}4\cdot 2^{j-1}\left(\frac{m(r)}{2^{j}}+\frac{1}{2} \right)\left(\frac{m(r)}{2^{j}}+\frac{3}{2}\right)+O(r\log r)\] \[=2m(r)^{2}\sum_{j=1}^{k}\frac{1}{2^{j}}+O(r\log r)\] \[=2m(r)^{2}\left(1-\frac{1}{2^{k}}\right)+O(r\log r)\]
and similarly
\[\sum_{j=1}^{k}\sum_{n=1}^{\left[\frac{m(r)}{2^{j}}+\frac{1}{2}\right]}b(2^{j-1 }(2n-1))\geq 2m(r)^{2}\left(1-\frac{1}{2^{k}}\right)+O(r\log r).\]
In a similar way we obtain
\[\sum_{j=1}^{k}\sum_{n=1}^{\left[\frac{m(r)}{2^{j}}+\frac{1}{2}\right]}r=rm(r) \left(1-\frac{1}{2^{k}}\right)+O(r).\]

By combining the above estimates, and noting that \(m(r,f)=O(\log r)=o(N(r,f))\) so that \(T(r,f)=(1+o(1))N(r,f)\), we can see that
\[\delta(a,f) =\limsup_{r\to\infty}\frac{N(r,f)-N(r,f\oplus a)}{N(r,f)}\] \[=\limsup_{r\to\infty}\frac{\left(1-\frac{1}{2^{k}}\right)(r-2m(r ))m(r)+O(r\log r)}{(r-2m(r))m(r)+O(r\log r)}\] \[=1-\frac{1}{2^{k}}\]
whenever \(k\) is a positive integer and \(a\in[k,k+1)\). This means that the defect is a piecewise constant function that attains a different value on each interval \([k,k+1)\) for \(k\in\mathbb{N}\).
**Theorem 4.6**.: _Let \(f:\mathbb{R}\to\mathbb{R}\) be a tropical meromorphic function and \(a:\mathbb{R}\to\mathbb{R}\) a tropical entire function such that \(m(r,a)=o(T(r,f))\) and \(m\left(r,\frac{1_{\circ}}{f\oplus a}\oslash\right)=o(T(r,f))\). If \(f\) has at least one pole, then_
\[\delta(a,f)\leq\limsup_{r\to\infty}\frac{n(r,f)-n(r,f\oplus a)}{n(r,f)}.\]
_If in addition \(m(r,f)=o(T(r,f))\), then_
\[\liminf_{r\to\infty}\frac{n(r,f)-n(r,f\oplus a)}{n(r,f)}\leq\delta(a,f)\leq \limsup_{r\to\infty}\frac{n(r,f)-n(r,f\oplus a)}{n(r,f)}.\]
Proof.: If \(N(r,f)-N(r,f\oplus a)\equiv 0\) then also \(n(r,f)-n(r,f\oplus a)\equiv 0\) and thus \(\delta(a,f)=0\). Assume that \(N(r,f)-N(r,f\oplus a)\not\equiv 0\). Then we can see that
\(N(r,f)-N(r,f\oplus a)\to\infty\) and \(N(r,f)\to\infty\) as \(r\to\infty\) and by applying Theorem 2.9 and L'Hôpital's rule, we can see that
\[\delta(a,f) =\liminf_{r\to\infty}\frac{N(r,f)-N(r,f\oplus a)}{T(r,f)}\] \[\leq\liminf_{r\to\infty}\frac{N(r,f)-N(r,f\oplus a)}{N(r,f)}\] \[\leq\limsup_{r\to\infty}\frac{n(r,f)-n(r,f\oplus a)}{n(r,f)}.\]
If \(m(r,f)=o(T(r,f))\), then by L'Hôpital's rule we obtain
\[\delta(a,f) =\liminf_{r\to\infty}\frac{N(r,f)-N(r,f\oplus a)}{T(r,f)}\] \[=\liminf_{r\to\infty}\frac{N(r,f)-N(r,f\oplus a)}{N(r,f)}\] \[\geq\liminf_{r\to\infty}\frac{n(r,f)-n(r,f\oplus a)}{n(r,f)}.\]
The next example will show that sometimes at the accumulation points of \(\{f(a):\omega_{f}(a)<0\}\) the defect is not locally constant. It also proves Theorem 4.2, which settles the inverse problem.
**Example 4.7**.: Let \(F:[0,1]\to[0,1]\) be a non-decreasing function such that \(F(0)=0\) and \(F(1)=1\). By [13, Chapter 2, Theorem 4.3] there exists a sequence \((a_{n})\subset[0,1]\) such that
\[\lim_{k\to\infty}\frac{\operatorname{card}(\{a_{1},\ldots,a_{k}\}\cap[0,x])}{k }=F(x),\]
for all \(x\in[0,1]\). Then define
\[f(x)=\max_{n\in\mathbb{N}}\left\{-\left|x-b(n)\right|+a_{n}\right\},\]
where
\[b(n)=2\sum_{k=1}^{n}a_{k}-a_{n}=\sum_{k=1}^{n}a_{k}+\sum_{k=1}^{n-1}a_{k}.\]
Define \(m(r):=\max\left\{n\in\mathbb{N}:b(n)\leq r\right\}\) and
\[I(a,x)=\begin{cases}1,&a\leq x,\\ 0,&a>x.\end{cases}\]
Now we can see that
\[\lim_{r\to\infty}\frac{n(r,f)-n(r,f\oplus x)}{n(r,f)}\] \[=\lim_{r\to\infty}\frac{\sum_{n=1}^{m(r)}I(a_{n},x)}{\sum_{n=1}^ {m(r)}1}\] \[=\lim_{r\to\infty}\frac{\operatorname{card}(\{a_{1},\ldots,a_{m( r)}\}\cap[0,x])}{m(r)}=F(x),\]
when \(x\in[0,1]\). Theorem 4.6 now implies that
\[\delta(x,f)=\begin{cases}1,&x\geq 1,\\ F(x),&0<x<1,\\ 0,&x\leq 0.\end{cases}\]
If \(F\) is chosen to be any increasing continuous function then \(f\) satisfies Theorem 4.2.
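For instance (our own instantiation), choosing \(F(x)=x\) one may take \(a_{n}=n\sqrt{2}-[n\sqrt{2}]\), which is uniformly distributed on \([0,1]\) by Weyl's equidistribution theorem, and the construction above then yields

\[\delta(x,f)=\begin{cases}1,&x\geq 1,\\ x,&0<x<1,\\ 0,&x\leq 0,\end{cases}\]

so every value \(\delta\in[0,1]\) is attained by some target.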
Given a tropical hypersurface \(V_{P}\) with a tropical homogeneous polynomial \(P\) of degree \(d\), Cao and Zheng [2] defined the defect as follows
\[\delta(V_{P},f):=1-\limsup_{r\to\infty}\frac{N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)}{dT_{f}(r)}.\]
We aim to formulate the inverse problem for tropical hypersurfaces. To that end we will first give the following lemma.
**Lemma 4.8**.: _For all \(t\in[0,1]\) there exists a tropical holomorphic curve \(f:\mathbb{R}\to\mathbb{TP}^{n}\) and a homogeneous tropical polynomial \(P\) of degree \(d\) such that \(\psi(P,f)=\Psi(P,f)=t\)._
Proof.: Define \(f(x)=[t|x|:|x+1|:0:\cdots:0]\), where \(t\in[0,1)\) and \(P(x_{0},\ldots,x_{n})=x_{0}^{\otimes d}\). Now we have \(P(f)(x)=dt|x|\) and
\[T_{f}(r)=\frac{1}{2}(\max\{tr,r+1\}+\max\{tr,|r-1|\})=r,\]
when \(r\geq\frac{1}{1-t}\). Therefore
\[\lim_{r\to\infty}\frac{\frac{1}{2}(P(f)(r)+P(f)(-r))}{dT_{f}(r)}=t=\psi(P,f)= \Psi(P,f),\]
for any \(t\in[0,1)\). To get \(\psi(P,f)=\Psi(P,f)=1\) we can just choose
\[P(x_{0},x_{1},\ldots,x_{n})=x_{1}^{\otimes d}\]
instead.
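As a concrete instance of the proof (our own numbers), take \(n=1\), \(d=1\) and \(t=\frac{1}{2}\), so that \(f(x)=\left[\frac{1}{2}|x|:|x+1|\right]\) and \(P(x_{0},x_{1})=x_{0}\). Then \(P(f)(x)=\frac{1}{2}|x|\) and, for \(r\geq 2\),

\[T_{f}(r)=\frac{1}{2}\left(\max\left\{\tfrac{r}{2},r+1\right\}+\max\left\{\tfrac{r}{2},r-1\right\}\right)=r,\qquad\frac{\frac{1}{2}\left(P(f)(r)+P(f)(-r)\right)}{T_{f}(r)}=\frac{r/2}{r}=\frac{1}{2},\]

so \(\psi(P,f)=\Psi(P,f)=\frac{1}{2}\).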
We are now ready to formulate the inverse problem with tropical hypersurfaces.
**Theorem 4.9**.: _For all \(\delta\in[0,1]\) there exists a tropical holomorphic curve \(f:\mathbb{R}\to\mathbb{TP}^{n}\) and a homogeneous tropical polynomial \(P\) of degree \(d\) such that \(\delta(V_{P},f)=\delta\)._
Proof.: By Lemma 4.8 we can find a tropical holomorphic curve \(f\) and a homogeneous tropical polynomial \(P\) such that \(\psi(P,f)=\Psi(P,f)=1-\delta\). Theorem 3.3 implies now that \(\delta(V_{P},f)=\delta\).
Cao and Zheng proved the following result regarding the defect relation, which can be seen as the tropical version of the Shiffman conjecture [19].
**Theorem 4.10** ([2]).: _Let \(q\), \(n\) and \(d\) be positive integers such that \(q>M\), where \(M=\binom{n+d}{d}-1\). Let the tropical holomorphic curve \(f:\mathbb{R}\to\mathbb{TP}^{n}\) be tropical algebraically nondegenerate. Assume that tropical hypersurfaces \(V_{P_{0}},\ldots,V_{P_{q}}\) are defined by homogeneous tropical polynomials \(P_{0},\ldots,P_{q}\) with degrees \(d_{0},\ldots,d_{q}\), respectively, such that the least common multiple of \(d_{0},\ldots,d_{q}\) is \(d\). If \(\lambda=\operatorname{ddg}(\{P_{M+1}\circ f,\ldots,P_{q}\circ f\})\) and_
\[\limsup_{r\to\infty}\frac{\log T_{f}(r)}{r}=0,\]
_then_
\[\sum_{j=0}^{q}\delta(V_{P_{j}},f)\leq M+1+\lambda,\text{ and }\sum_{j=M+1}^{q} \delta(V_{P_{j}},f)\leq\lambda.\]
The following defect relation follows directly from Theorem 3.3.
**Theorem 4.11**.: _Let \(f:\mathbb{R}\to\mathbb{TP}^{n}\) be a non-constant tropical holomorphic curve and let \(P:\mathbb{TP}^{n}\to\mathbb{R}\ (\not\equiv 0_{\circ})\) be a tropical homogeneous polynomial of degree \(d\) with tropical meromorphic coefficients. If for all coefficients \(a_{j}\) of \(P\circ f\) we have \(N(r,a_{j})=o(T_{f}(r))\), then_
\[1-\Psi(P,f)\leq\delta(V_{P},f)\leq 1-\psi(P,f). \tag{4.2}\]
We can see that Theorem 4.11 implies Theorem 4.10. By Theorem 4.11 we obtain
\[\sum_{j=0}^{q}\delta(V_{P_{j}},f)\leq q+1-\sum_{j=0}^{q}\psi(P_{j},f)\]
and
\[\sum_{j=M+1}^{q}\delta(V_{P_{j}},f)\leq q-M-\sum_{j=M+1}^{q}\psi(P_{j},f).\]
We have already shown that if \(P_{j}\circ f\) is complete, then \(\psi(P_{j},f)=1\). This means that \(\sum_{j=M+1}^{q}\psi(P_{j},f)\geq q-M-\lambda\). From this immediately follows that
\[q-M-\sum_{j=M+1}^{q}\psi(P_{j},f)\leq\lambda\]
and
\[q+1-\sum_{j=0}^{q}\psi(P_{j},f)\leq q+1-\sum_{j=M+1}^{q}\psi(P_{j},f)\leq M+1+\lambda.\]
Based on the tropical version of the Shiffman conjecture, Cao and Zheng also proposed the tropical version of Griffiths conjecture [8].
**Conjecture 4.12** ([2]).: _Let \(q\), \(n\) and \(d\) be positive integers such that \(q>M\), where \(M=\binom{n+d}{d}-1\). Let the tropical holomorphic curve \(f:\mathbb{R}\to\mathbb{TP}^{n}\) be tropical algebraically nondegenerate. Assume that tropical hypersurfaces \(V_{P_{0}},\ldots,V_{P_{q}}\) are defined by homogeneous tropical polynomials \(P_{0},\ldots,P_{q}\) with degrees \(d_{0},\ldots,d_{q}\), respectively, such that the least common multiple of \(d_{0},\ldots,d_{q}\) is \(d\). If \(\lambda=\operatorname{ddg}(\{P_{M+1}\circ f,\ldots,P_{q}\circ f\})\) and_
\[\limsup_{r\to\infty}\frac{\log T_{f}(r)}{r}=0,\]
_then_
\[\sum_{j=0}^{q}\delta(V_{P_{j}},f)\leq\frac{n+1+\lambda}{d}.\]
When \(d=1\) we have \(M=n\) and in that case the conjecture is true by Theorem 4.10. However, we can see that this conjecture is not true when \(d>1\). By Lemma 4.8 we can find a holomorphic curve \(f\) and a homogeneous tropical polynomial \(P\) such that
\[\Psi(P,f)=\psi(P,f)=0. \tag{4.3}\]
By choosing \(P_{j}=P\) for all \(j=0,\ldots,q\) we obtain
\[q+1=\sum_{j=0}^{q}\delta(V_{P_{j}},f)\]
and \(\lambda=q-M\). In this case it can be seen that
\[\frac{n+1+\lambda}{d}=\frac{n+q+1-M}{d}<q+1, \tag{4.4}\]
whenever
\[q>\frac{n-M}{d-1}-1.\]
Since \(n\leq M\) in general, we can see that (4.4) is actually true for all values of \(q\). This contradicts Conjecture 4.12, because we would have
\[\frac{n+1+\lambda}{d}<q+1=\sum_{j=0}^{q}\delta(V_{P_{j}},f)\leq\frac{n+1+ \lambda}{d}.\]
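Following the argument above, a concrete numerical instance (our own) is \(n=1\), \(d=2\), so that \(M=\binom{3}{2}-1=2\); taking \(q=3\) and \(P_{j}=P\) as in (4.3) for every \(j\) gives \(\lambda=q-M=1\) and

\[\frac{n+1+\lambda}{d}=\frac{1+1+1}{2}=\frac{3}{2}<4=q+1=\sum_{j=0}^{q}\delta(V_{P_{j}},f).\]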
We can also see that Conjecture 4.12 is not true even if one replaces \(\frac{n+1+\lambda}{d}\) with \(\frac{M+1+\lambda}{d}\).
It is good to note that in the simplest case, when \(n=1,d=1\), \(a=[a_{0}:a_{1}]\) and \(P(x_{0},x_{1})=a_{0}\otimes x_{0}\oplus a_{1}\otimes x_{1}\) and the tropical holomorphic curve \(f=[f_{0}:f_{1}]\) is thought of as a tropical meromorphic function \(f=f_{1}\oslash f_{0}\), then as we have seen before
\[N\left(r,\frac{1_{\circ}}{P\circ f}\oslash\right)=N\left(r,\frac{1_{\circ}}{f \oplus a}\oslash\right)+N(r,f)-N(r,f\oplus a)+O(1).\]
Thus in this case the defect \(\delta(V_{P},f)\) cannot be seen as a generalization of \(\delta(a,f)\). In fact, in this simplest case we always have \(\delta(V_{P},f)=0\) by Theorem 2.9.
## 5. Appendix
### Inverse problem in classical Nevanlinna theory
In this section of the appendix we will state the inverse problem in the classical Nevanlinna theory. In order to do that we need to give some definitions, starting with the Nevanlinna functions. The proximity function for a meromorphic function \(f\) in classical Nevanlinna theory is defined as
\[m(r,f):=\frac{1}{2\pi}\int_{0}^{2\pi}\log^{+}|f(re^{i\theta})|\,d\theta,\]
where \(\log^{+}x=\max\{\log x,0\}\). The Nevanlinna counting function is defined as
\[N(r,f):=\int_{0}^{r}\frac{n(t,f)-n(0,f)}{t}\,dt+n(0,f)\log r,\]
where \(n(r,f)\) counts the poles of \(f\) in \(|z|<r\) according to their multiplicities. The Nevanlinna characteristic function is then defined as
\[T(r,f):=m(r,f)+N(r,f).\]
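For example (a standard computation, included here for orientation), for \(f(z)=e^{z}\) one has \(N(r,f)=0\) and

\[m(r,e^{z})=\frac{1}{2\pi}\int_{-\pi/2}^{\pi/2}r\cos\theta\,d\theta=\frac{r}{\pi},\qquad\text{so that}\qquad T(r,e^{z})=\frac{r}{\pi}.\]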
For more details about classical Nevanlinna theory see for example [3, 5, 10].
In order to state the inverse problem, we also need to define the defect and index of multiplicity for meromorphic functions. The defect for a meromorphic function \(f\) is defined as
\[\delta(a,f)=1-\limsup_{r\to\infty}\frac{N\left(r,\frac{1}{f-a}\right)}{T(r,f)}\]
for \(a\in\mathbb{C}\) and
\[\delta(a,f):=1-\limsup_{r\to\infty}\frac{N\left(r,f\right)}{T(r,f)}\]
for \(a=\infty\). The index of multiplicity is defined as
\[\theta(a,f):=\liminf_{r\to\infty}\frac{N\left(r,\frac{1}{f-a}\right)-\overline {N}\left(r,\frac{1}{f-a}\right)}{T(r,f)},\]
for \(a\in\mathbb{C}\) and
\[\theta(a,f):=\liminf_{r\to\infty}\frac{N\left(r,f\right)-\overline{N}\left(r, f\right)}{T(r,f)},\]
for \(a=\infty\). Here \(\overline{N}(r,f)\) is the truncated counting function, which counts the poles of \(f\) without taking multiplicity into account. Using the second main theorem one can prove that
\[0\leq\delta(a,f)+\theta(a,f)\leq 1\]
for any meromorphic function \(f\) and a constant \(a\in\mathbb{C}\cup\{\infty\}\) and
\[\sum_{a\in\mathbb{C}\cup\{\infty\}}\delta(a,f)+\theta(a,f)\leq 2.\]
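This bound is sharp: for the standard example \(f(z)=e^{z}\) (our addition, for orientation), the omitted values \(0\) and \(\infty\) give \(\delta(0,f)=\delta(\infty,f)=1\) and \(\theta(a,f)=0\) for all \(a\), so the total sum equals \(2\).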
The inverse problem in classical Nevanlinna theory is then as follows. For \(1\leq i<N\leq\infty\) let sequences \(\{\delta_{i}\},\ \{\theta_{i}\}\) of non-negative numbers be assigned such that
\[0<\delta_{i}+\theta_{i}\leq 1\]
for all \(1\leq i<N\) and
\[\sum_{i}(\delta_{i}+\theta_{i})\leq 2.\]
Let \(\{a_{i}\},\ 1\leq i<N\) be a sequence of distinct complex numbers. Does there exist a meromorphic function \(f\) such that
\[\delta(a_{i},f)=\delta_{i},\,\theta(a_{i},f)=\theta_{i}\]
for all \(1\leq i<N\) and
\[\delta(a,f)=\theta(a,f)=0\]
for all \(a\not\in\{a_{i}\}\)? Drasin [4] has shown using quasiconformal mappings that such a meromorphic function always exists.
### Tropical Nevanlinna theory
The tropical semiring is defined as \(\mathbb{T}=\mathbb{R}\cup\{-\infty\}\) where addition and multiplication are defined as
\[a\oplus b=\max\{a,b\}\]
and
\[a\otimes b=a+b.\]
Additive and multiplicative neutral elements are \(0_{\circ}=-\infty\) and \(1_{\circ}=0\). Here \(\mathbb{T}\) is a semiring, because not all elements have an additive inverse element. For example there is no \(x\in\mathbb{T}\) such that \(2\oplus x=0_{\circ}\). For this reason subtraction is not defined on
the tropical semiring. Tropical division is defined as \(a\oslash b=a-b\) and exponentiation as \(a^{\otimes\alpha}=\alpha a\) for \(\alpha\in\mathbb{R}\). For more details about tropical geometry see for example [17].
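A few sample evaluations (our own, for orientation):

\[2\oplus 3=\max\{2,3\}=3,\qquad 2\otimes 3=2+3=5,\qquad 3\oslash 2=1,\qquad 3^{\otimes 2}=6,\]

and since \(2\max\{a,b\}=\max\{2a,2b\}\), the tropical analogue of the Freshman's dream, \((a\oplus b)^{\otimes 2}=a^{\otimes 2}\oplus b^{\otimes 2}\), holds for all \(a,b\in\mathbb{T}\).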
Next we will define tropical meromorphic functions.
**Definition 5.1** ([9, 14]).: A continuous piecewise linear function \(f:\mathbb{R}\to\mathbb{R}\) is said to be tropical meromorphic.
We say that \(x\) is a pole of \(f\) if
\[\omega_{f}(x):=\lim_{\varepsilon\to 0^{+}}(f^{\prime}(x+\varepsilon)-f^{\prime}( x-\varepsilon))<0\]
and a root if \(\omega_{f}(x)>0\). The multiplicity of a root or a pole of a tropical meromorphic function \(f\) at \(x\) is \(\tau_{f}(x):=|\omega_{f}(x)|\). A tropical meromorphic function is called a tropical entire function, if it has no poles. A tropical polynomial is a tropical entire function with finitely many roots and a tropical rational function is a tropical meromorphic function that has finitely many roots and poles. Tropical polynomials can be written in the form
\[\bigoplus_{n=1}^{k}c_{n}\otimes x^{\otimes t_{n}}=\max_{n=1}^{k}\{c_{n}+t_{n}x\}\]
and tropical entire functions can be written in the form
\[\bigoplus_{n=1}^{\infty}c_{n}\otimes x^{\otimes t_{n}}=\max_{n=1}^{\infty}\{c_ {n}+t_{n}x\},\]
where \(c_{n}\in\mathbb{T}\) and \(t_{n}\in\mathbb{R}\)[11]. Every tropical meromorphic function \(h\) can be written in the form
\[h=\frac{f}{g}\oslash, \tag{5.1}\]
where \(f\) and \(g\) are tropical entire functions which do not share any roots [12, Proposition 3.3]. If \(f\) and \(g\) in (5.1) are polynomials, then \(h\) is a tropical rational function.
Next we will define the tropical Nevanlinna functions. The tropical proximity function is defined as
\[m(r,f)=\frac{f(r)^{+}+f(-r)^{+}}{2},\]
where \(f(x)^{+}=\max\{f(x),0\}\). The tropical counting function is defined as
\[N(r,f)=\frac{1}{2}\int_{0}^{r}n(t,f)dt=\frac{1}{2}\sum_{|b_{\nu}|<r}\tau_{f}(b _{\nu})(r-|b_{\nu}|),\]
where \(n(r,f)\) counts the poles of \(f\) in \((-r,r)\) according to their multiplicities. The tropical Nevanlinna characteristic function is then defined as
\[T(r,f):=m(r,f)+N(r,f).\]
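As a minimal example (our own): for \(f(x)=-|x|\) there is a single pole at \(0\) with \(\omega_{f}(0)=-2\), hence \(\tau_{f}(0)=2\), and \(f\leq 0\) gives \(m(r,f)=0\), so

\[N(r,f)=\frac{1}{2}\cdot 2\cdot(r-0)=r\qquad\text{and}\qquad T(r,f)=m(r,f)+N(r,f)=r.\]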
The tropical Nevanlinna characteristic is a positive, convex, continuous, non-decreasing piecewise linear function of \(r\)[9, Lemma 3.2].
The order and hyper-order of a tropical meromorphic function \(f\) are defined as
\[\rho(f) =\limsup_{r\to\infty}\frac{\log T(r,f)}{\log r}\] \[\rho_{2}(f) =\limsup_{r\to\infty}\frac{\log\log T(r,f)}{\log r}.\]
As a special case of the tropical Poisson-Jensen formula (Theorem 2.4) we have the tropical Jensen formula when \(x=0\)
\[f(0) =\frac{1}{2}(f(r)+f(-r))-\frac{1}{2}\sum_{|a_{\mu}|<r}\tau_{f}(a_ {\mu})(r-|a_{\mu}|)+\frac{1}{2}\sum_{|b_{\nu}|<r}\tau_{f}(b_{\nu})(r-|b_{\nu}|)\] \[=m(r,f)-m\left(r,\frac{1_{\circ}}{f}\oslash\right)+N(r,f)-N\left(r,\frac{1_{\circ}}{f}\oslash\right)\] \[=T(r,f)-T\left(r,\frac{1_{\circ}}{f}\oslash\right).\]
We can list some basic properties for the Nevanlinna functions [11, Lemma 3.2].
1. If \(f\leq g\), then \(m(r,f)\leq m(r,g)\).
2. Given a positive real number \(\alpha\), then \[m(r,f^{\otimes\alpha}) =\alpha m(r,f),\] \[N(r,f^{\otimes\alpha}) =\alpha N(r,f),\] \[T(r,f^{\otimes\alpha}) =\alpha T(r,f).\]
3. Given tropical meromorphic functions \(f,g\), then \[m(r,f\otimes g) \leq m(r,f)+m(r,g),\] \[N(r,f\otimes g) \leq N(r,f)+N(r,g),\] \[T(r,f\otimes g) \leq T(r,f)+T(r,g),\]
4. and similarly, \[m(r,f\oplus g) \leq m(r,f)+m(r,g),\] \[N(r,f\oplus g) \leq N(r,f)+N(r,g),\] \[T(r,f\oplus g) \leq T(r,f)+T(r,g).\]
There also exists a tropical counterpart to the lemma on the logarithmic derivative. Cao and Zheng proved the following version of the tropical lemma on the logarithmic derivative.
**Theorem 5.2** ([2]).: _Let \(c\in\mathbb{R}\backslash\{0\}\). If \(f\) is a tropical meromorphic function on \(\mathbb{R}\) with_
\[\limsup_{r\to\infty}\frac{\log T(r,f)}{r}=0,\]
_then_
\[m\left(r,\frac{f(x+c)}{f(x)}\oslash\right)=o(T(r,f)),\]
_where \(r\) runs to infinity outside of a set of zero upper density measure \(E\), i.e._
\[\overline{\operatorname{dens}}E=\limsup_{r\to\infty}\frac{1}{r}\int_{E\cap[1,r ]}dt=0.\]
Next we will introduce tropical hyper-exponential functions. Tropical hyper-exponential functions are reminiscent of hyper-exponential functions \(\exp(z^{c})\) over the usual algebra.
**Definition 5.3** ([14]).: Let \(\alpha,\beta\) be real numbers such that \(|\alpha|>1\) and \(|\beta|<1\). Define functions \(e_{\alpha}(x)\) and \(e_{\beta}(x)\) on \(\mathbb{R}\) by
\[e_{\alpha}(x):=\alpha^{[x]}(x-[x])+\sum_{j=-\infty}^{[x]-1}\alpha^{j}=\alpha^ {[x]}\left(x-[x]+\frac{1}{\alpha-1}\right)\]
and
\[e_{\beta}(x):=\sum_{j=[x]}^{\infty}\beta^{j}-\beta^{[x]}(x-[x])=\beta^{[x]} \left(\frac{1}{1-\beta}-x+[x]\right).\]
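To fix ideas, some sample values for \(\alpha=2\) (our own computation) are

\[e_{2}(0)=1,\qquad e_{2}\left(\tfrac{1}{2}\right)=\tfrac{3}{2},\qquad e_{2}(1)=2,\qquad e_{2}(-1)=\tfrac{1}{2},\]

and in general \(e_{2}(x+1)=2e_{2}(x)=e_{2}(x)^{\otimes 2}\), which illustrates the functional equations collected in the next lemma.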
The tropical hyper-exponential functions have the following properties.
**Lemma 5.4** ([14]).: _Let \(\alpha,\beta\) be real numbers such that \(|\alpha|>1\) and \(|\beta|<1\). The function \(e_{\alpha}(x)\) is tropical meromorphic on \(\mathbb{R}\) satisfying_
1. \(e_{\alpha}(m)=\frac{\alpha^{m}}{\alpha-1}\) _for each_ \(m\in\mathbb{Z}\)_;_
2. \(e_{\alpha}(x)=x+\frac{1}{\alpha-1}\) _for any_ \(x\in[0,1)\)_;_
3. _the functional equation_ \(y(x+1)=y(x)^{\otimes\alpha}\) _on the whole_ \(\mathbb{R}\)_._
_Similarly the function \(e_{\beta}(x)\) is tropical meromorphic on \(\mathbb{R}\) satisfying_
1. \(e_{\beta}(m)=\frac{\beta^{m}}{1-\beta}\) _for each_ \(m\in\mathbb{Z}\)_;_
2. \(e_{\beta}(x)=-x+\frac{1}{1-\beta}\) _for any_ \(x\in[0,1)\)_;_
3. _the functional equation_ \(y(x+1)=y(x)^{\otimes\beta}\) _on the whole_ \(\mathbb{R}\)_._
There is a connection between \(e_{\alpha}(x)\) and \(e_{\beta}(x)\). Suppose \(\alpha\neq\pm 1\). Then \(e_{\alpha}(-x)=\frac{1}{\alpha}e_{\frac{1}{\alpha}}(x)\) for all \(x\in\mathbb{R}\).
The tropical hyper-exponential function is of infinite order and hyper-order \(1\)[14, Proposition 8.5].
### Tropical linear algebra
First we will introduce tropical matrices. The operations of addition \(\oplus\) and multiplication \(\otimes\) for the \((n+1)\times(n+1)\) matrices \(A=(a_{ij})\) and \(B=(b_{ij})\) are defined as
\[A\oplus B=(a_{ij}\oplus b_{ij})\]
and
\[A\otimes B=\left(\bigoplus_{k=0}^{n}a_{ik}\otimes b_{kj}\right),\]
respectively. The matrix \(A\) is called regular if it contains at least one element different from \(0_{\circ}\) in each row. The tropical determinant \(|A|_{\circ}\) of \(A\) is defined as [12, 20]
\[|A|_{\circ}=\bigoplus a_{0\pi(0)}\otimes a_{1\pi(1)}\otimes\cdots\otimes a_{ n\pi(n)},\]
where the tropical sum is taken over all permutations \(\{\pi(0),\pi(1),\ldots,\pi(n)\}\) of \(\{0,1,\ldots,n\}\).
Now we can define the tropical Casorati determinant. Let \(g(x)\) be a tropical entire function, \(n\in\mathbb{N}\) and \(c\in\mathbb{R}\setminus\{0\}\). For brevity we denote
\[g(x)\equiv g,\quad g(x+c)\equiv\overline{g},\quad g(x+2c)\equiv\overline{ \overline{g}}\quad\text{and}\quad g(x+nc)\equiv\overline{g}^{[n]}.\]
Now the tropical Casorati determinant of tropical entire functions \(g_{0},\dots,g_{n}\) is defined by
\[C_{\circ}(g_{0},g_{1},\dots,g_{n})=\bigoplus\overline{g}_{0}^{[\pi(0)]} \otimes\overline{g}_{1}^{[\pi(1)]}\otimes\dots\otimes\overline{g}_{n}^{[\pi(n )]},\]
where the tropical sum is taken over all permutations \(\{\pi(0),\pi(1),\dots,\pi(n)\}\) of \(\{0,1,\dots,n\}\). The tropical Casoratian has the following properties.
**Lemma 5.5** ([12]).: _If \(g_{0},g_{1},\dots,g_{n}\) and \(h\) are tropical entire functions, then_
1. \(C_{\circ}(g_{0},g_{1},\dots,g_{i},\dots,g_{j},\dots,g_{n})=C_{\circ}(g_{0},g_ {1},\dots,g_{j},\dots,g_{i},\dots,g_{n})\) _for all_ \(i,j\in\{0,\dots,n\}\) _such that_ \(i\neq j\)_._
2. \(C_{\circ}(1_{\circ},g_{1},\dots,g_{n})\geq C_{\circ}(\overline{g}_{1},\dots, \overline{g}_{n})\)_._
3. \(C_{\circ}(0_{\circ},g_{1},\dots,g_{n})=0_{\circ}\)_._
4. \(C_{\circ}(g_{0}\otimes h,g_{1}\otimes h,\dots,g_{n}\otimes h)=h\otimes \overline{h}\otimes\dots\otimes\overline{h}^{[n]}\otimes C_{\circ}(g_{0},g_{1}, \dots,g_{n})\)_._
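For example (our own computation), take \(n=1\), \(c=1\), \(g_{0}(x)=x\) and \(g_{1}(x)=2x\). Then

\[C_{\circ}(g_{0},g_{1})=(g_{0}\otimes\overline{g}_{1})\oplus(\overline{g}_{0}\otimes g_{1})=\max\{x+2(x+1),\,(x+1)+2x\}=3x+2.\]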
Next we will define tropical linear combinations and linear independence.
**Definition 5.6** ([12]).: If \(g_{0},\dots,g_{n}\) are tropical meromorphic functions and \(a_{0},\dots,a_{n}\in\mathbb{T}\), then
\[f=\bigoplus_{\nu=0}^{n}a_{\nu}\otimes g_{\nu}=\bigoplus_{i=0}^{j}a_{k_{i}} \otimes g_{k_{i}}\]
is called a tropical linear combination of \(g_{0},\dots,g_{n}\) over \(\mathbb{T}\), where the index set \(\{k_{0},\dots,k_{j}\}\subset\{0,\dots,n\}\) is such that \(a_{k_{i}}\in\mathbb{R}\) for all \(i\in\{0,\dots,j\}\), while \(a_{\nu}=0_{\circ}\) if \(\nu\not\in\{k_{0},\dots,k_{j}\}\).
With the tropical linear combinations we can define linear independence in the way of Gondran and Minoux.
**Definition 5.7** ([6, 7, 12]).: Tropical meromorphic functions \(f_{0},\dots,f_{n}\) are linearly dependent (respectively independent) in the Gondran-Minoux sense if there exist (respectively do not exist) two disjoint subsets \(I\) and \(J\) of \(K:=\{0,\dots,n\}\) such that \(I\cup J=K\) and
\[\bigoplus_{i\in I}\alpha_{i}\otimes f_{i}=\bigoplus_{j\in J}\alpha_{j}\otimes f _{j},\]
where the constants \(\alpha_{0},\dots,\alpha_{n}\in\mathbb{T}\) are not all equal to \(0_{\circ}\).
**Example 5.8**.: Let \(g_{0}(x)=x-5,g_{1}(x)=|x|,g_{2}(x)=1-x,g_{3}(x)=2x\). Then \(g_{0},g_{1},g_{2},g_{3}\) are linearly dependent in the Gondran-Minoux sense because
\[5\otimes g_{0}(x)\oplus(-1)\otimes g_{2}(x)=1_{\circ}\otimes g_{1}(x)\oplus 0 _{\circ}\otimes g_{3}(x).\]
If the function \(g_{1}\) is not included, then \(g_{0},g_{2},g_{3}\) are linearly independent in the Gondran-Minoux sense.
Next we will define the notion of degeneracy.
**Definition 5.9** ([12]).: Let \(G=\{g_{0},\dots,g_{n}\}(\neq\{0_{\circ}\})\) be a set of tropical entire functions linearly independent in the Gondran-Minoux sense, and let
\[\mathcal{L}_{G}=\operatorname{span}\langle g_{0},\dots,g_{n}\rangle=\left\{ \bigoplus_{k=0}^{n}a_{k}\otimes g_{k}:(a_{0},\dots,a_{n})\in\mathbb{T}^{n+1}\right\}\]
be their linear span. The collection \(G\) is called the spanning basis of \(\mathcal{L}_{G}\). The shortest length of the representation of \(f\in\mathcal{L}_{G}\setminus\{0_{\circ}\}\) is defined by
\[\ell(f)=\min\left\{j\in\{1,\ldots,n+1\}:f=\bigoplus_{i=1}^{j}a_{k_{i}}\otimes g_ {k_{i}}\right\},\]
where \(a_{k_{i}}\in\mathbb{R}\) with integers \(0\leq k_{1}<k_{2}<\cdots<k_{j}\leq n\), and the dimension of \(\mathcal{L}_{G}\) is
\[\dim(\mathcal{L}_{G})=\max\{\ell(f):f\in\mathcal{L}_{G}\setminus\{0_{\circ}\}\}.\]
**Definition 5.10** ([12]).: Let \(G=\{g_{0},\ldots,g_{n}\}(\neq\{0_{\circ}\})\) be a set of tropical entire functions linearly independent in the Gondran-Minoux sense, and let \(f\) be a tropical linear combination of \(g_{0},\ldots,g_{n}\). If \(\ell(f)=n+1\), then \(f\) is said to be complete.
**Definition 5.11** ([12]).: Let \(G=\{g_{0},\ldots,g_{n}\}\) be a set of tropical entire functions, linearly independent in the Gondran-Minoux sense, and let \(Q\subset\mathcal{L}_{G}\) be a collection of tropical linear combinations of \(G\) over \(\mathbb{T}\). The degree of degeneracy of \(Q\) is defined to be
\[\operatorname{ddg}(Q)=\operatorname{card}(\{f\in Q:\ell(f)<n+1\}).\]
If \(\operatorname{ddg}(Q)=0\), then \(Q\) is called non-degenerate.
**Example 5.12**.: Let \(g_{0}(x)=x,g_{1}(x)=2x,g_{2}(x)=3x\). The tropical polynomials \(g_{0},g_{1},g_{2}\) are linearly independent in the Gondran-Minoux sense so we can ask if their tropical linear combinations are complete or not. Consider the following tropical linear combinations
\[f_{0} =0_{\circ}\otimes g_{0}\oplus 5\otimes g_{1}\oplus 1_{\circ} \otimes g_{2}\] \[f_{1} =1_{\circ}\otimes g_{0}\oplus 5\otimes g_{1}\oplus 1_{\circ} \otimes g_{2}\] \[f_{2} =1_{\circ}\otimes g_{0}\oplus 1_{\circ}\otimes g_{1}\oplus 1_{ \circ}\otimes g_{2}.\]
Any tropical linear combination where at least one coefficient is \(0_{\circ}\) is not complete, which means that \(f_{0}\) is not complete. The tropical linear combination \(f_{2}\) can be written in the form \(f_{2}=1_{\circ}\otimes g_{0}\oplus 1_{\circ}\otimes g_{2}\), which means that it is not complete either. The tropical linear combination \(f_{1}\) is of the form
\[f_{1}(x)=\begin{cases}x,&x<-5,\\ 2x+5,&-5\leq x\leq 5,\\ 3x,&x>5.\end{cases}\]
Therefore \(f_{1}\) is complete and \(\operatorname{ddg}(\{f_{0},f_{1},f_{2}\})=2\).
### Tropical holomorphic curves and hypersurfaces
We will now define the tropical projective space \(\mathbb{TP}^{n}\). Let the equivalence relation \(\sim\) be defined so that
\[(a_{0},a_{1},\ldots,a_{n})\sim(b_{0},b_{1},\ldots,b_{n})\]
if and only if
\[(a_{0},a_{1},\ldots,a_{n})=\lambda\otimes(b_{0},b_{1},\ldots,b_{n}):=(\lambda \otimes b_{0},\lambda\otimes b_{1},\ldots,\lambda\otimes b_{n})\]
for some \(\lambda\in\mathbb{R}\). We denote by \([a_{0}:a_{1}:\cdots:a_{n}]\) the equivalence class of \((a_{0},a_{1},\ldots,a_{n})\). The tropical projective space is now defined as the quotient space
of \(\mathbb{T}^{n+1}\setminus\{\mathbf{0}_{\circ}\}\) by the equivalence relation \(\sim\), where \(\mathbf{0}_{\circ}=(0_{\circ},\ldots,0_{\circ})\) is the zero element of \(\mathbb{T}^{n+1}\). The one dimensional tropical projective space \(\mathbb{TP}^{1}\) can be identified with the completed tropical semiring \(\mathbb{T}\cup\{\infty\}\) by the map
\[[1_{\circ}:a]\mapsto a\oslash 1_{\circ}=a,\quad a\in\mathbb{T},\] \[[0_{\circ}:a]\mapsto a\oslash 0_{\circ}=\infty,\quad a\in\mathbb{R}.\]
We can now define the holomorphic curve.
**Definition 5.13** ([12]).: Let \([a_{0}:\cdots:a_{n}]\in\mathbb{TP}^{n}\) be the equivalence class of \((a_{0},\ldots,a_{n})\in\mathbb{T}^{n+1}\setminus\{\mathbf{0}_{\circ}\}\), and let
\[f=[g_{0}:\cdots:g_{n}]:\mathbb{R}\to\mathbb{TP}^{n}\]
be a tropical holomorphic map where \(g_{0},\ldots,g_{n}\) are tropical entire functions that do not have any roots which are common to all of them.
Denote
\[\mathbf{f}=(g_{0},\ldots,g_{n}):\mathbb{R}\to\mathbb{R}^{n+1}.\]
Then \(\mathbf{f}\) is called a reduced representation of the tropical holomorphic curve \(f\) in \(\mathbb{TP}^{n}\). Next we will define the Cartan characteristic function for tropical holomorphic curves.
**Definition 5.14** ([12]).: If \(f:\mathbb{R}\to\mathbb{TP}^{n}\) is a tropical holomorphic curve with a reduced representation \(\mathbf{f}=(g_{0},\ldots,g_{n})\), then
\[T_{\mathbf{f}}(r)=\frac{1}{2}\left(\|\mathbf{f}(r)\|+\|\mathbf{f}(-r)\|\right),\quad\|\mathbf{f}(x)\|=\max\{g_{0}(x),\ldots,g_{n}(x)\}\]
is said to be the tropical Cartan characteristic function of \(f\).
It has been shown that the tropical Cartan characteristic does not depend on the reduced representation [12, Proposition 4.3] so we will denote the tropical Cartan characteristic function \(T_{\mathbf{f}}(r)\) simply as \(T_{f}(r)\).
Any tropical meromorphic function \(f\) can always be represented as a quotient \(f=h\oslash g\) of two tropical entire functions which do not share any common roots [12, Proposition 3.3]. If \(f\) is now represented also in the form \(f=[g:h]\), then in the case of the one dimensional tropical projective space \(\mathbb{TP}^{1}\) it has been shown that \(T_{f}(r)\) is up to a constant equal to \(T(r,f)\) [12, Proposition 4.4]. The tropical Cartan characteristic then shares many of the properties of the tropical Nevanlinna characteristic.
We will now define tropical hypersurfaces.
**Definition 5.15** ([2]).: Let \(P\) be a homogeneous tropical polynomial in \(n\)-dimensional tropical projective space \(\mathbb{TP}^{n}\). The set of roots of \(P\) is called a tropical hypersurface. Denote the tropical hypersurface by \(V_{P}\).
Set \(M=\binom{n+d}{d}-1\). Now the composition of a tropical holomorphic curve \(f=[f_{0}:f_{1}:\cdots:f_{n}]\) and a tropical homogeneous polynomial in \(\mathbb{TP}^{n}\) of degree \(d\) can be written in the form
\[P\circ f=\bigoplus_{I_{i}\in\mathcal{J}_{d}}c_{I_{i}}\otimes f^{I_{i}}= \bigoplus_{i=0}^{M}c_{I_{i}}\otimes f^{I_{i}},\]
where \(\mathcal{J}_{d}\) is the set of all \(I_{i}=(i_{0},i_{1},\ldots,i_{n})\in\mathbb{N}_{0}^{n+1}\) such that \(i_{0}+i_{1}+\cdots+i_{n}=d\) and \(f^{I_{i}}:=f_{0}^{\otimes i_{0}}\otimes\cdots\otimes f_{n}^{\otimes i_{n}}\). We can see that \(P\circ f\) is a tropical algebraic combination. From this we have the following definition.
**Definition 5.16** ([2]).: Tropical meromorphic functions \(f_{0},\ldots,f_{n}\) are algebraically dependent (respectively independent) in the Gondran-Minoux sense if \(f^{I_{0}},\ldots,f^{I_{M}}\) are linearly dependent (respectively independent) in the Gondran-Minoux sense.
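For example (our own illustration of the expansion above), when \(n=1\) and \(d=2\) we have \(\mathcal{J}_{2}=\{(2,0),(1,1),(0,2)\}\) and \(M=\binom{3}{2}-1=2\), so

\[P\circ f=\left(c_{(2,0)}\otimes f_{0}^{\otimes 2}\right)\oplus\left(c_{(1,1)}\otimes f_{0}\otimes f_{1}\right)\oplus\left(c_{(0,2)}\otimes f_{1}^{\otimes 2}\right)=\max\{c_{(2,0)}+2f_{0},\,c_{(1,1)}+f_{0}+f_{1},\,c_{(0,2)}+2f_{1}\}.\]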
**Definition 5.17** ([2]).: Let \(G=\{f_{0},\ldots,f_{n}\}(\neq\{0_{\circ}\})\) be a set of tropical entire functions algebraically independent in the Gondran-Minoux sense, and let
\[\hat{\mathcal{L}}_{G}=\operatorname{span}\langle f^{I_{0}},f^{I_{1}},\ldots,f^{I_{M}}\rangle=\left\{\bigoplus_{k=0}^{M}a_{k}\otimes f^{I_{k}}:(a_{0},\ldots,a_{M})\in\mathbb{T}^{M+1}\right\}\]
be their linear span. The collection \(G\) is called the spanning basis of \(\hat{\mathcal{L}}_{G}\). The shortest length of the representation of \(f\in\hat{\mathcal{L}}_{G}\setminus\{0_{\circ}\}\) is defined as
\[\hat{\ell}(F)=\min\left\{j\in\{1,\ldots,M+1\}:F=\bigoplus_{i=1}^{j}a_{k_{i}}\otimes f^{I_{k_{i}}}\right\}\]
where \(a_{k_{i}}\in\mathbb{R}\) with integers \(0\leq k_{1}<k_{2}<\cdots<k_{j}\leq M\), and the dimension of \(\hat{\mathcal{L}}_{G}\) is
\[\dim(\hat{\mathcal{L}}_{G})=\max\{\hat{\ell}(F):F\in\hat{\mathcal{L}}_{G} \setminus\{0_{\circ}\}\}.\]
**Definition 5.18** ([2]).: Let \(G=\{f_{0},\ldots,f_{n}\}(\neq\{0_{\circ}\})\) be a set of tropical entire functions algebraically independent in the Gondran-Minoux sense, and let \(F\) be a tropical algebraic combination of \(f_{0},\ldots,f_{n}\). If \(\hat{\ell}(F)=M+1\), then \(F\) is said to be complete.
Algebraic nondegeneracy is defined in an analogous way to classical algebraic geometry.
**Definition 5.19** ([2]).: Let \(f=[f_{0}:f_{1}:\cdots:f_{n}]:\mathbb{R}\to\mathbb{TP}^{n}\) be a tropical holomorphic curve. If for any tropical hypersurface \(V_{P}\) in \(\mathbb{TP}^{n}\) defined by a homogeneous tropical polynomial \(P\) in \(\mathbb{R}^{n+1}\), \(f(\mathbb{R})\) is not a subset of \(V_{P}\) then we say that \(f\) is tropical algebraically nondegenerate.
There is a link between algebraic nondegeneracy and algebraic independence in the Gondran-Minoux sense as follows.
**Lemma 5.20** ([2]).: _A tropical holomorphic curve \(f:\mathbb{R}\to\mathbb{TP}^{n}\) with reduced representation \(f=(f_{0},f_{1},\ldots,f_{n})\) is tropical algebraically nondegenerate if and only if \(f_{0},\ldots,f_{n}\) are algebraically independent in the Gondran-Minoux sense._
|
2303.09437 | Physically Consistent Multiple-Step Data-Driven Predictions Using
Physics-based Filters | (Extended Version) Data-driven control can facilitate the rapid development
of controllers, offering an alternative to conventional approaches. In order to
maintain consistency between any known underlying physical laws and a
data-driven decision-making process, preprocessing of raw data is necessary to
account for measurement noise and any inconsistencies it may introduce. In this
paper, we present a physics-based filter to achieve this and demonstrate its
effectiveness through practical applications, using real-world datasets
collected in a building on the Ecole Polytechnique Federale de Lausanne (EPFL)
campus. Two distinct use cases are explored: indoor temperature control and
demand response bidding. | Yingzhao Lian, Jicheng Shi, Colin N. Jones | 2023-03-16T16:12:46Z | http://arxiv.org/abs/2303.09437v3 | # Physically Consistent Multiple-Step Data-Driven Predictions Using Physics-based Filters
###### Abstract
Data-driven control can facilitate the rapid development of controllers, offering an alternative to conventional approaches. In order to maintain consistency between any known underlying physical laws and a data-driven decision-making process, preprocessing of raw data is necessary to account for measurement noise and any inconsistencies it may introduce. In this paper, we present a physics-based filter to achieve this and demonstrate its effectiveness through practical applications, using real-world datasets collected in a building on the École Polytechnique Fédérale de Lausanne (EPFL) campus. Two distinct use cases are explored: indoor temperature control and demand response bidding.
## I Introduction
Data-driven control can improve the speed and quality of controller design and deployment via an end-to-end solution from I/O data to a functional controller. However, it is often crucial to ensure that data-driven control respects the known physical laws in order to make meaningful decisions. Due to measurement noise present in the data, a direct use of raw data1 may lead to incorrect conclusions or predictions. Such inconsistencies were spotted by [1], where minor perturbations in the input were shown to significantly deteriorate prediction accuracy [2].
Footnote 1: Raw data in this work indicates the data without preprocessing.
The incorporation of physical laws in data-driven and machine learning methods has been an active area of research for decades. In fact, this idea has been used to solve partial differential equations since the 1990s [3]. The idea of incorporating a physical rule in a parametric model is referred to as "physics-guided" or "physics-informed" in the literature [4]. This can involve using the physical rule to define the loss function and to confine the model's parameters to a subset that is consistent with known physical rules. Researchers have applied this idea to various architectures, such as enforcing a positive correlation between indoor temperature and heating power consumption in neural networks [5], and using a similar approach in linear parametric models [6]. While the aforementioned methods are important, preprocessing data can be a more direct approach to improve consistency. The methods falling in this category are highly related to robust optimization, where algorithms similar to scenario approaches have been successfully employed in natural language processing [7] and computer vision [8].
In this work, we propose a physics-based filter that is tailored to data-driven control schemes based on Willems' fundamental lemma [9]. Willems' fundamental lemma offers a direct characterization of the system responses of linear-time-invariant (LTI) systems given an informative historical dataset. Such a characterization has been used in data-driven methods, and has been deployed in output prediction [10, 11], input reconstruction [11, 12], and in controller design [13, 14, 15, 16, 17]. The main contribution lies in showing that some a priori knowledge can be integrated into Willems' fundamental lemma by robust optimization. The proposed scheme remains a non-parametric prediction structure, which differentiates it from other parametric schemes [5, 6].
In order to present the proposed method with a more intuitive exposition, the idea presented in this paper will be motivated and related to building applications. In the following, the Willems' fundamental lemma and its corresponding prediction problem is reviewed in Section II, after which the physics-based filter is investigated in Section III. The efficacy of the proposed scheme is validated on an indoor temperature control problem and a demand response bidding problem, with data collected from a building on the EPFL campus.
**Notation:**\(I_{n}\in\mathbb{R}^{n\times n}\) denotes an \(n\)-by-\(n\) identity matrix; similarly, we denote the zero matrix by \(\mathbf{O}\). \(\mathbf{0}\) and \(\mathbf{1}\) respectively denote a zero vector and a one vector. \(\mathrm{blkdiag}(A_{1},\ldots,A_{n})\) generates a block-diagonal matrix whose diagonal blocks are \(A_{1},\ldots,A_{n}\) accordingly. \(x:=\{x_{i}\}_{i=1}^{T}\) denotes a sequence of size \(T\) indexed by \(i\). \(x_{i}\) denotes the measurement of \(x\) at time \(i\), and \(x_{1:L}:=[x_{1}^{\top},x_{2}^{\top}\ldots x_{L}^{\top}]^{\top}\) denotes a concatenated sequence of \(x_{i}\) ranging from \(x_{1}\) to \(x_{L}\); we drop the index to improve clarity if the intention is clear from the context.
## II Preliminaries
**Definition 1**: _A Hankel matrix of depth \(L\) associated with a vector-valued signal sequence \(s:=\{s_{i}\}_{i=1}^{T}\), \(s_{i}\in\mathbb{R}^{n_{s}}\) is_
\[\mathfrak{H}_{L}(s):=\begin{bmatrix}s_{1}&s_{2}&\ldots&s_{T-L+1}\\ s_{2}&s_{3}&\ldots&s_{T-L+2}\\ \vdots&\vdots&&\vdots\\ s_{L}&s_{L+1}&\ldots&s_{T}\end{bmatrix}.\]
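For instance (our own illustration), for the scalar sequence \(s=(1,2,3,4,5)\) with \(T=5\) and depth \(L=2\),

\[\mathfrak{H}_{2}(s)=\begin{bmatrix}1&2&3&4\\ 2&3&4&5\end{bmatrix},\]

so each column is a window of \(L\) consecutive samples of \(s\).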
A linear time-invariant (LTI) system is defined by \(x_{i+1}=Ax_{i}+Bu_{i}\,\ y_{i}=Cx_{i}+Du_{i}\), dubbed \(\mathfrak{B}(A,B,C,D)\). Its order is \(n_{x}\) with \(n_{u}\), \(n_{y}\) denoting its input and output dimensions respectively. An \(L\)-step trajectory generated by this system is \(\begin{bmatrix}u_{1:L}&y_{1:L}\end{bmatrix}:=\begin{bmatrix}u_{1}^{\top}& \ldots&u_{L}^{\top}&y_{1}^{\top}&\ldots&y_{L}^{\top}\end{bmatrix}^{\top}\). The set of all possible \(L\)-step trajectories generated by \(\mathfrak{B}(A,B,C,D)\) is denoted by \(\mathfrak{B}_{L}(A,B,C,D)\). For the sake |
2303.11278 | Bayesian Pseudo-Coresets via Contrastive Divergence | Bayesian methods provide an elegant framework for estimating parameter
posteriors and quantification of uncertainty associated with probabilistic
models. However, they often suffer from slow inference times. To address this
challenge, Bayesian Pseudo-Coresets (BPC) have emerged as a promising solution.
BPC methods aim to create a small synthetic dataset, known as pseudo-coresets,
that approximates the posterior inference achieved with the original dataset.
This approximation is achieved by optimizing a divergence measure between the
true posterior and the pseudo-coreset posterior. Various divergence measures
have been proposed for constructing pseudo-coresets, with forward
Kullback-Leibler (KL) divergence being the most successful. However, using
forward KL divergence necessitates sampling from the pseudo-coreset posterior,
often accomplished through approximate Gaussian variational distributions.
Alternatively, one could employ Markov Chain Monte Carlo (MCMC) methods for
sampling, but this becomes challenging in high-dimensional parameter spaces due
to slow mixing. In this study, we introduce a novel approach for constructing
pseudo-coresets by utilizing contrastive divergence. Importantly, optimizing
contrastive divergence eliminates the need for approximations in the
pseudo-coreset construction process. Furthermore, it enables the use of
finite-step MCMC methods, alleviating the requirement for extensive mixing to
reach a stationary distribution. To validate our method's effectiveness, we
conduct extensive experiments on multiple datasets, demonstrating its
superiority over existing BPC techniques. | Piyush Tiwary, Kumar Shubham, Vivek V. Kashyap, Prathosh A. P | 2023-03-20T17:13:50Z | http://arxiv.org/abs/2303.11278v2 | # Constructing Bayesian Pseudo-Coresets using Contrastive Divergence
###### Abstract
Bayesian Pseudo-Coreset (BPC) and Dataset Condensation are two parallel streams of work that construct a synthetic set such that, a model trained independently on this synthetic set, yields the same performance as training on the original training set. While dataset condensation methods use non-bayesian, heuristic ways to construct such a synthetic set, BPC methods take a bayesian approach and formulate the problem as divergence minimization between posteriors associated with original data and synthetic data. However, BPC methods generally rely on distributional assumptions on these posteriors which makes them less flexible and hinders their performance. In this work, we propose to solve these issues by modeling the posterior associated with synthetic data by an energy-based distribution. We derive a contrastive-divergence-like loss function to learn the synthetic set and show a simple and efficient way to estimate this loss. Further, we perform rigorous experiments pertaining to the proposed method. Our experiments on multiple datasets show that the proposed method not only outperforms previous BPC methods but also gives performance comparable to dataset condensation counterparts.
## 1 Introduction
Modern deep learning models have shown impressive performance on a variety of applications such as computer vision, natural language processing, and speech processing [1, 18, 29, 38, 1, 11]. Large training datasets and heavy computational infrastructure have played a pivotal role in achieving such performance. Moreover, the availability of large training datasets is critical to improving the performance of these models. This reliance on large datasets not only increases the computational complexity required to train these models but also adds to the total training time required to achieve acceptable accuracy. Typically, training time for larger datasets requires hundreds of GPU hours leading to enormous amounts of carbon emission which has adverse environmental impacts [24].
There have been several attempts by researchers to reduce the reliance on large datasets. A naive approach would be to randomly sample a subset of the original dataset and use it for training. However, such subsets need not reflect the diversity and information captured by the original dataset. Another common approach has been that of Core-Subset or 'Coreset' selection [3, 40, 15, 39, 15, 47], which attempts to sample a small (_informative_) subset of the original data that can produce results comparable to the original dataset. However, finding an optimal solution to such a problem is NP-hard and often results in subpar performance. Further, it has been shown that coreset methods scale poorly with the dimensionality of the data, leading to suboptimal subsets in higher dimensions [36].
To address these issues, [36] proposed the 'Bayesian Pseudo-Coreset' (BPC), a new technique for generating synthetic images that can scale to high-dimensional datasets. The general idea of BPC is to use gradient-based optimization to reduce the divergence between the parameter posterior of the original dataset and the parameter posterior associated with the synthetic dataset.
A parallel and closely related approach with the same motivation is that of dataset condensation [52, 53, 54, 37, 3, 50, 3, 54]. The goal of these methods is the same as that of Bayesian pseudo-coresets. However, these methods take a non-Bayesian approach to generating the synthetic dataset by optimizing heuristic objectives based on features [52, 45], gradients [53], training trajectories [7], and performance matching [37, 54] between synthetic data samples and the original data points.
Recently, [25] tried to bridge the gap between Bayesian pseudo-coreset (BPC) methods and dataset condensation methods by analyzing various divergence metrics. In particular, they showed that under certain assumptions, dataset condensation methods can be seen as special cases of BPC methods with different divergence metrics. However, on high-dimensional datasets such as images, dataset condensation methods often outperform BPC methods.
We argue that this drop in performance is attributable to the stringent form assumed for the parameter posterior of the original dataset in previous methods. In this work, we relax this condition and propose a more flexible framework to work with such posteriors. In particular, we don't assume any form for the parameter posterior associated with the original dataset and use an energy-based distribution to model the posterior associated with the synthetic data. We then derive the contrastive-divergence-like loss function required to minimize the forward KL divergence between these posteriors. Our method allows the flexibility to use various energy functions without worrying about the parameter posterior of the original dataset. We experimentally observe that our method not only outperforms Bayesian pseudo-coreset methods but also gives performance comparable to that of dataset condensation methods. To the best of our knowledge, this is the first work that bridges the performance gap between BPC and dataset condensation methods. Our contributions can be summarized as follows:
* We propose a flexible framework for the construction of Bayesian pseudo-coresets in which we don't assume any form for the parameter posterior associated with the original dataset and use an energy-based distribution to model the posterior associated with the synthetic data.
* Our method allows one to use various energy functions to model the parameter posterior associated with the synthetic data without having to worry about the posterior associated with the real data.
* We derive a contrastive-divergence-like loss function to minimize the forward KL divergence between the posteriors; further, we show a simple and efficient way to estimate this loss and learn the pseudo-coreset.
* We rigorously test our method against state-of-the-art BPC methods as well as dataset condensation methods. We observe that our method not only outperforms BPC methods but also gives performance comparable to that of dataset condensation, hence bridging the performance gap between the two paradigms.
## 2 Related Work
### Coreset
The idea of using a smaller amount of data to achieve performance comparable to that obtained with the entire dataset was first manifested in coresets [3, 6, 15, 40, 47, 39]. The underlying idea of coreset methods is to select a subset of the training dataset that can achieve performance comparable to the original dataset. Herding-based coreset methods [3, 6, 39, 47] select such samples by minimizing the distance between the feature centroid of the coreset and the feature centroid of the complete dataset. [47] proposed a greedy technique for constructing such a coreset by progressively and greedily adding data points that reduce the distance between these centroids. This strategy ensures that the most representative samples of the entire dataset are included in the subset, which is then used to train the model. However, herding-based coresets often fail to sample a diverse group of data points, which impacts the generalization capability of the model. In contrast to herding-based methods, K-center-based coreset techniques [40, 15] pick the most diverse and representative samples by optimizing a minimax facility-location-based submodular function [15]. Such methods have also been explored in generative models to select the most representative samples during the learning process [43]. Contrary to K-center and herding-based coreset selection methods, forgetting-based coresets [44] remove easily forgettable samples from the training dataset. This ensures that the coreset sampling process considers the uncertainty associated with the given model. Apart from the given methods, dataset subset selection has been explored in other fields such as continual learning [5], active learning [27], and hyperparameter optimization [24].
### Dataset Condensation
While coreset methods select a diverse and rich subset of data points from the training set, the basic heuristics used for subset selection often produce suboptimal results. To overcome this barrier, [46] proposed Dataset Distillation to 'learn' a set of informative synthetic samples from a large dataset. Rather than selecting a subset of data points from the training set, these methods create an artificial dataset with lower cardinality that, when used to train a model independently, produces accuracy comparable to training on the original dataset.
In particular, the authors of [46] learn a model using the synthetic set and test the model on the original dataset. The synthetic set is optimized to improve the performance of the model on the original dataset. [26] further extended this idea by generating multiple synthetic datasets for training but with limited storage capacity. Inspired by these works, [53] proposed gradient matching, where the authors learn the synthetic samples by aligning the gradients of models trained using the original dataset and the synthetic dataset. Despite the simplicity of gradient matching, it treats the gradients of each class independently, hence neglecting class-discriminative features. To ensure that efficient gradients are calculated during the training process, some studies [22, 33, 49] have proposed different gradient comparison metrics that penalize the model for overfitting on the small dataset. Further, [50] proposed differentiable siamese augmentation to avoid overfitting and boost the performance of any dataset condensation method in general.
Further, methods like [9, 46] attempt to directly optimize the validation loss over samples of the original dataset using the optimal parameters generated by the synthetic dataset. The validation loss is backpropagated through the unrolled gradient descent steps used for finding the optimal parameters. The overall idea is similar to that of meta-learning [16], where a bi-level optimization is performed to optimize (outer loop) the meta-test loss on the original dataset using a model that has been meta-trained (inner loop) on synthetic data. However, the computation graph for such methods grows with the number of steps in the inner loop, which is computationally expensive and requires large GPU memory. To overcome this bottleneck, there have been methods that try to reduce the computational burden of the inner loop. For example, [37] proposes to approximate the inner-loop optimization by a convex optimization problem which can be solved using kernel ridge regression (KRR), using a neural tangent kernel (NTK) [21] to perform the KRR. Further, to reduce the complexity involved in computing the NTK, [35] uses a neural network Gaussian process kernel instead. Finally, [54] proposes to decompose the network optimized in the inner loop into a feature extractor and a linear classifier; they show that it is enough to update the linear classifier in the inner loop while the feature extractor can be updated on the synthetic set separately.
Contrary to bi-level optimization approaches that focus on short-horizon inner-loop trajectories, [7] proposes to focus on long-horizon trajectories, ensuring that the models learn comparable trajectories during optimization for both the synthetic set and the original dataset. To do this, the parameters of the model generated during training with synthetic data and with the original data are compared at different epochs. As a follow-up work, [34] showed that matching all the parameters has a negative impact on the performance of the model and that performance can be boosted by comparing a pruned set of parameters. Similarly, [12] proposed a regularizer to reduce the accumulated error in long-horizon trajectories. However, even though trajectory-based methods show high performance on several datasets, they suffer from the same issues as meta-learning methods, i.e., the computation graph of such methods can be very large, leading to a high GPU memory requirement.
While the above-mentioned methods resort to bi-level optimization to learn the synthetic set, distribution matching methods [51, 45, 52] try to generate a condensed synthetic set with a feature distribution similar to the original dataset, hence completely avoiding the bi-level optimization. For example, [52] proposed to use maximum mean discrepancy (MMD) to match the distribution of the synthetic dataset with the original dataset using the classifier network without its last linear layer. [45] further proposed CAFE to match feature statistics across different layers and used a discriminator to maximize the probability for a given class.
### Bayesian Pseudo-Coreset
Recently, there has been a surge in methods that try to reinterpret existing deep learning methods from a Bayesian perspective. For example, [23] showed that most machine learning algorithms and practices are special instances of a generic algorithm, namely, the Bayesian Learning Rule. Following this line of thought, [36] proposed a Bayesian perspective on dataset condensation. Particularly, they formulate dataset condensation as a divergence minimization problem between the parameter posteriors associated with the synthetic set and the original dataset. The synthetic set obtained using such methods is termed a 'Bayesian Pseudo-Coreset' (BPC). Compared to coresets, these methods scale more efficiently with data dimensions and achieve a better posterior approximation. Furthermore, a Bayesian formulation of the problem further enhances the understanding of the field.
The main idea proposed in BPC methods is to minimize the divergence between the posteriors associated with the pseudo-coreset and the original training dataset. [36] formalized this problem by minimizing the reverse-KL divergence between the posterior of the real data and the posterior of the synthetic data. Along similar lines, [25] demonstrated that other divergence metrics, such as the Wasserstein distance and forward-KL divergence, can also be used with comparable accuracy. Further, [25] showed that under certain assumptions, dataset condensation methods can be viewed as special instances of BPC methods. For example, MTT [7] can be seen as a BPC method with the Wasserstein distance as the divergence metric; similarly, gradient matching [53] can be seen as a BPC method with reverse-KL divergence as the divergence metric. Further, [25] proposes to use forward-KL divergence as the divergence metric, as it encourages the model to cover the entire target distribution. However, despite the theoretical support for these methods, there are a lot of stringent assumptions involved in the formulation of the posteriors associated with the synthetic set and the original dataset. Moreover, there is still a significant performance gap between BPC methods and dataset condensation methods, which we have attempted to bridge through our work.
### Energy Based Models
Energy Based Models, or EBMs, are a class of density estimation models that assume the desired density has the form of an energy-based distribution. In particular, the desired density \(p(x)\) is approximated by a parametric density of the form \(p_{\theta}(x)=\exp{(-E_{\theta}(x))}/{Z_{\theta}}\), where \(E_{\theta}(\cdot)\) is the negative log of the unnormalized density, also called the energy function, and \(Z_{\theta}=\int_{x}\exp{(-E_{\theta}(x))}dx\) is the normalizing constant, also known as the partition function. Generally, the goal of EBMs is to learn the parameters \(\theta\) of the energy function that minimize the KL divergence between the desired density and \(p_{\theta}(x)\). There have been several lines of work that try to train EBMs efficiently [2, 13, 14, 17]; however, the simplest and most commonly used approach is the one proposed in [14]. In particular, the contrastive-divergence loss used to learn the parameters of the energy function, as shown in [20], is given by:
\[\mathcal{L}=\mathbb{E}_{x^{+}\sim p(x)}[E_{\theta}(x^{+})]- \mathbb{E}_{x^{-}\sim p_{\theta}(x)}[E_{\theta}(x^{-})] \tag{1}\]
The above loss function ensures that the model learns to assign low energy to samples associated with the real data points and high energy to samples obtained from the parametric density. The first expectation in the above expression is approximated using the samples present in the dataset. However, we cannot directly draw samples from \(p_{\theta}(\cdot)\) to approximate the second expectation, as the partition function is intractable and sampling would be very inefficient. To overcome this bottleneck, we resort to gradient-based Markov chain Monte Carlo (MCMC) methods such as Langevin dynamics to sample from \(p_{\theta}(\cdot)\).
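For concreteness, a minimal PyTorch sketch of this standard recipe is shown below; the function names, step counts, and step sizes are illustrative assumptions rather than a reference implementation:

```
import torch

def langevin_sample(energy_fn, x_init, steps=20, alpha=1e-2):
    # Approximate samples from p(x) ∝ exp(-E(x)) via finite-step Langevin dynamics.
    x = x_init.clone().requires_grad_(True)
    for _ in range(steps):
        grad, = torch.autograd.grad(energy_fn(x).sum(), x)
        x = (x - 0.5 * alpha * grad
             + alpha ** 0.5 * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()

def contrastive_divergence_loss(energy_fn, x_pos, x_neg):
    # Eq. (1): push energy down on data samples, up on model samples.
    return energy_fn(x_pos).mean() - energy_fn(x_neg).mean()
```

Here `energy_fn` maps a batch of inputs to per-sample energies; backpropagating `contrastive_divergence_loss` through `energy_fn` (with `x_neg` detached) updates the energy parameters \(\theta\).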
In the following work, we approximate the parameter posterior associated with the synthetic set by an energy-based distribution; however, in our case the goal is not to learn the parameters of the energy function but to learn the synthetic set itself. Hence, we fix the energy function and derive a loss function (similar to contrastive divergence) to learn the synthetic samples instead. The rest of the paper is organized as follows: in Section 3.1, we set up the notation and formalize our problem statement; in Section 3.2, we derive the loss function used to construct the pseudo-coreset and describe the proposed method. In Section 4, we provide the implementation details of the proposed method. Lastly, in Section 5, we present our experimental findings and compare them with previous baselines.
## 3 Proposed Method
### Overview
Consider the training set \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{|\mathcal{D}|}\), where the cardinality of the dataset is \(|\mathcal{D}|\). The goal of a Bayesian pseudo-coreset is to construct a synthetic dataset \(\tilde{\mathcal{D}}=\{\tilde{\mathbf{x}}_{i},\tilde{y}_{i}\}_{i=1}^{|\tilde{\mathcal{D}}|}\) such that \(\{y_{i}\}\) and \(\{\tilde{y}_{i}\}\) share the same label space and \(|\tilde{\mathcal{D}}|\ll|\mathcal{D}|\); further, \(\tilde{\mathcal{D}}\) should provide classification performance comparable to that of the original training set \(\mathcal{D}\). For this, consider the space of parameters of a discriminative/classification model (\(\Theta\)). Now, let \(\mathbf{p}(\theta|\mathcal{D})\) and \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\) be the densities of optimal parameters induced by the original training set (\(\mathcal{D}\)) and the synthetic set (\(\tilde{\mathcal{D}}\)), respectively. In particular, let \(\mathcal{F}\) be the space of all classification loss functions. Now, let
\[\mathcal{M}_{\ell}=\left\{\theta\in\Theta:\underset{\theta}{\arg \min}\frac{1}{|\mathcal{D}|}\sum_{i=1}^{|\mathcal{D}|}\ell(y_{i},f_{\theta}( \mathbf{x}_{i}))\right\} \tag{2}\]
be the set of all parameters that minimize the empirical risk w.r.t. some loss function \(\ell(\cdot)\in\mathcal{F}\) and the classifier \(f_{\theta}(\cdot)\) manifested by \(\theta\). Then, \(\mathbf{p}(\theta|\mathcal{D})\) can be seen as the distribution induced on the union of all such sets, i.e.,
\[\bigcup_{\ell\in\mathcal{F}}\mathcal{M}_{\ell}\sim\mathbf{p}(\theta| \mathcal{D}) \tag{3}\]
Similarly, we can define \(\tilde{\mathcal{M}}_{\ell}\) to be the set of parameters that minimize the empirical risk for the synthetic set \(\tilde{\mathcal{D}}\), and \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\) can be seen as the distribution induced on the union of all such sets:
\[\bigcup_{\ell\in\mathcal{F}}\tilde{\mathcal{M}}_{\ell}\sim\mathbf{q} (\theta|\tilde{\mathcal{D}}) \tag{4}\]
Note that we don't know closed-form expressions for the above densities. Previous methods [25, 36] assume some form for \(\mathbf{p}(\theta|\mathcal{D})\) and \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\); however, this is not desirable, as these densities can be very complex, and assuming a distributional form for them would reduce the flexibility of constructing the pseudo-coreset. Hence, in this work, we don't assume any form for \(\mathbf{p}(\theta|\mathcal{D})\); however, we assume an energy-based distribution for \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\), i.e., \(\mathbf{q}(\theta|\tilde{\mathcal{D}})=\exp{(-E(\theta,\tilde{\mathcal{D}}))}/{Z(\tilde{\mathcal{D}})}\). Note that here the energy function \(E(\theta,\tilde{\mathcal{D}})\) is fixed and not learnable, whereas \(\tilde{\mathcal{D}}\) is a learnable quantity. First, we derive the loss for a generic energy function; later, we analyze and discuss the effect of different choices of energy function.
While dataset condensation methods (cf. Sec. 2.2) resort to heuristics to construct the synthetic set, Bayesian pseudo-coresets explicitly minimize a divergence metric between \(\mathbf{p}(\theta|\mathcal{D})\) and \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\). In our method, we propose to construct the pseudo-coreset by solving the following optimization problem:
\[\tilde{\mathcal{D}}^{*}=\underset{\tilde{\mathcal{D}}}{\arg\min} \leavevmode\nobreak\ \leavevmode\nobreak\ D_{KL}\left(\mathbf{p}(\theta|\mathcal{D})||\mathbf{q}(\theta| \tilde{\mathcal{D}})\right) \tag{5}\]
where, \(D_{KL}\) is the forward-KL divergence, \(\mathbf{p}(\theta|\mathcal{D})\) is parameter posterior associated with original training set and \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\) is the parameter posterior associated with the pseudo-coreset which has a form of generic energy-based distribution.
### Problem Formulation
As mentioned in the previous section, our aim is to minimize the forward-KL divergence between \(\mathbf{p}(\theta|\mathcal{D})\) and \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\) w.r.t. \(\tilde{\mathcal{D}}\), where \(\mathbf{p}(\theta|\mathcal{D})\) can have any distributional form and \(\mathbf{q}(\theta|\tilde{\mathcal{D}})=\exp{(-E(\theta,\tilde{\mathcal{D}}))}/{Z(\tilde{\mathcal{D}})}\). To obtain the synthetic set that minimizes this divergence, we take the gradient of the above expression w.r.t. \(\tilde{\mathcal{D}}\):
\[\nabla_{\tilde{\mathcal{D}}}D_{KL}(\mathbf{p}(\theta|\mathcal{D})\,\|\,\mathbf{q}(\theta|\tilde{\mathcal{D}}))=\nabla_{\tilde{\mathcal{D}}}\underset{\mathbf{p}(\theta|\mathcal{D})}{\mathbb{E}}\left[\log\left(\frac{\mathbf{p}(\theta|\mathcal{D})}{\mathbf{q}(\theta|\tilde{\mathcal{D}})}\right)\right] \tag{6}\]
\[=-\nabla_{\tilde{\mathcal{D}}}\underset{\mathbf{p}(\theta|\mathcal{D})}{\mathbb{E}}\left[\log\left(\frac{\exp{(-E(\theta,\tilde{\mathcal{D}}))}}{Z(\tilde{\mathcal{D}})}\right)\right] \tag{7}\]
\[=-\nabla_{\tilde{\mathcal{D}}}\left[\int\left(-E(\theta,\tilde{\mathcal{D}})-\log Z(\tilde{\mathcal{D}})\right)\mathbf{p}(\theta|\mathcal{D})\,d\theta\right] \tag{8}\]
\[=\int\nabla_{\tilde{\mathcal{D}}}E(\theta,\tilde{\mathcal{D}})\,\mathbf{p}(\theta|\mathcal{D})\,d\theta+\int\nabla_{\tilde{\mathcal{D}}}\log Z(\tilde{\mathcal{D}})\,\mathbf{p}(\theta|\mathcal{D})\,d\theta \tag{9}\]
\[=\underset{\mathbf{p}(\theta|\mathcal{D})}{\mathbb{E}}\left[\nabla_{\tilde{\mathcal{D}}}E(\theta,\tilde{\mathcal{D}})\right]-\underset{\mathbf{q}(\theta|\tilde{\mathcal{D}})}{\mathbb{E}}\left[\nabla_{\tilde{\mathcal{D}}}E(\theta,\tilde{\mathcal{D}})\right] \tag{10}\]
Taking a Monte Carlo approximation of the above gradient, the final loss function comes out to be:
\[\mathcal{L}=\underset{\theta^{+}\sim\mathbf{p}(\theta|\mathcal{D})}{\mathbb{E}}[E(\theta^{+},\tilde{\mathcal{D}})]-\underset{\theta^{-}\sim\mathbf{q}(\theta|\tilde{\mathcal{D}})}{\mathbb{E}}[E(\theta^{-},\tilde{\mathcal{D}})] \tag{11}\]
As can be seen, the above loss function is similar to the contrastive-divergence loss in Eq. (1). However, instead of learning the parameters of an energy-based model [14], we use it to learn the pseudo-coreset.
Now, to estimate the expectation terms in Eq. (11), we need to sample the parameters \(\theta\) from the posteriors \(\mathbf{p}(\theta|\mathcal{D})\) and \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\). As mentioned earlier, we don't assume any distributional form for \(\mathbf{p}(\theta|\mathcal{D})\); hence we use the parameters obtained by training a model on the original training set (\(\mathcal{D}\)) to approximate the first expectation. Specifically, we train a classification model with parameters \(\theta\) on the training set \(\mathcal{D}\) with some classification loss \(\ell_{\mathcal{D}}\in\mathcal{F}\). Next, to evaluate the second expectation, we resort to Langevin dynamics to sample \(\theta\) from \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\). Langevin dynamics is an iterative gradient-based MCMC sampling method given by the following update rule:
\[\theta^{(t+1)}=\theta^{(t)}-\frac{\alpha}{2}\nabla_{\theta^{(t)}}E(\theta^{(t)},\tilde{\mathcal{D}})+\sqrt{\alpha}\,\eta,\quad\eta\sim\mathcal{N}(0,I) \tag{12}\]
From an implementation perspective, the above is equivalent to running noisy gradient descent on the parameters to minimize the chosen energy function.
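A minimal PyTorch sketch of this sampler is shown below; it assumes an `energy_fn(model, data)` that returns a scalar energy (e.g., Eq. (13)), and the step count and step size are illustrative:

```
import math
import torch

def sample_theta_langevin(model, energy_fn, data, steps=10, alpha=1e-4):
    # θ⁻ ~ q(θ|D̃): noisy gradient descent on the energy (Eq. 12), in place.
    # Pass a detached synthetic set so the sampling chain itself carries no gradients.
    for _ in range(steps):
        model.zero_grad()
        energy_fn(model, data).backward()
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is not None:
                    p.add_(-0.5 * alpha * p.grad + math.sqrt(alpha) * torch.randn_like(p))
    return model
```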
### Choice of Energy Function
So far, we have worked with a generic energy function \(E(\cdot)\). In this section, we motivate the choice of energy function. It can be seen from Eq. (12) that the sampled \(\theta\) will be such that it minimizes the chosen energy function \(E(\cdot)\). However, from Eq. (4) we also know that the desired \(\theta\in\Theta\) are those that minimize a class of loss functions. Hence, a logical choice for the energy function is a classification loss function itself! In other words, we can choose the energy function \(E(\cdot)\) to be a classification loss \(\ell_{\tilde{\mathcal{D}}}\in\mathcal{F}\), so that sampling from \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\) is equivalent to training a network with parameters \(\theta\) on \(\tilde{\mathcal{D}}\) with loss function \(\ell_{\tilde{\mathcal{D}}}\). Further, with this choice of energy function, Eq. (11) is nothing but the difference between the average loss (w.r.t. \(\ell_{\tilde{\mathcal{D}}}\)) incurred by parameters obtained from training on the original training set and by parameters obtained from training on the synthetic set.
To this end, we propose to use the categorical cross-entropy as the energy function, i.e,
\[E(\theta,\tilde{\mathcal{D}})=-\frac{1}{|\tilde{\mathcal{D}}|}\sum_{i=1}^{| \tilde{\mathcal{D}}|}\sum_{j=1}^{C}\tilde{y}_{i}^{(j)}\log f_{\theta}(\tilde{ x}_{i})^{(j)} \tag{13}\]
where \(f_{\theta}(\cdot)\) is the classifier manifested by parameters \(\theta\) and \(\tilde{y}_{i}^{(j)}\) is the indicator for the \(j^{th}\) true class. One can also choose other classification losses, such as focal loss and multi-margin loss. We explore the effect of such choices in Section 5.5.
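As a concrete illustration, a minimal PyTorch version of the energy in Eq. (13) is shown below; it assumes the classifier returns logits and that labels are integer-encoded:

```
import torch.nn.functional as F

def cross_entropy_energy(model, data):
    # Eq. (13): mean categorical cross-entropy of the classifier on the synthetic set.
    x_syn, y_syn = data
    return F.cross_entropy(model(x_syn), y_syn)
```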
## 4 Implementation
The implementation of the proposed method is inspired by MTT [7]. Our entire training process can be divided into two parts. First, we generate a set of parameters associated with the posterior defined on the real dataset, \(\mathbf{p}(\theta|\mathcal{D})\). To do this, we train multiple instances of networks with different initializations to minimize the empirical risk w.r.t. the loss \(\ell_{\mathcal{D}}\) on the original dataset (\(\mathcal{D}\)). We save a copy of the optimal parameters after each epoch for a given initialization. A sequence of such parameters is referred to as a trajectory (as in MTT [7]) in our discussion. We store multiple such trajectories in a buffer to estimate the first expectation in Eq. (11).
Secondly, we sample parameters from the posterior associated with the synthetic set, i.e., \(\mathbf{q}(\theta|\tilde{\mathcal{D}})\). For this, we first sample a trajectory from the buffer obtained above, \(\tau_{i}\sim\tau\). Next, we choose an instantiation of parameters at a randomly chosen epoch of \(\tau_{i}\), \(\theta_{k}^{+}\sim\tau_{i}\). Further, we use the \(T\)-step horizon from \(\theta_{k}^{+}\) as a proxy for \(\theta^{+}\) in Eq. (11), i.e., \(\theta^{+}=\theta_{k+T}^{+}\). To sample \(\theta^{-}\), we use \(L\)-step Langevin dynamics updates as shown in Eq. (12), with \(\theta^{(0)}=\theta_{k}^{+}\). After this, we pass the synthetic set (\(\tilde{\mathcal{D}}\)) through the networks manifested by \(\theta^{+}\) and \(\theta^{-}\) to calculate the respective energies, \(E(\theta^{+},\tilde{\mathcal{D}})\) and \(E(\theta^{-},\tilde{\mathcal{D}})\), and then the contrastive divergence (Eq. (11)). The gradients of the contrastive divergence are then backpropagated to update the synthetic set \(\tilde{\mathcal{D}}\). We plan to make our codebase public post-acceptance for better reproducibility.
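The following is a minimal PyTorch sketch of this training loop, reusing `sample_theta_langevin` and the energy interface from the sketches above; `buffer` (a list of expert trajectories, each a list of state dicts with more than \(T\) snapshots), `make_net`, and all hyperparameter values are illustrative assumptions rather than the exact implementation:

```
import random
import torch

def train_pseudocoreset(buffer, make_net, energy_fn, x_syn, y_syn,
                        T=10, L=10, n_iters=1000, lr=0.1, alpha=1e-4):
    x_syn = x_syn.clone().requires_grad_(True)
    opt = torch.optim.SGD([x_syn], lr=lr)
    for _ in range(n_iters):
        traj = random.choice(buffer)                       # τ_i ~ buffer
        k = random.randrange(len(traj) - T)
        pos_net = make_net()
        pos_net.load_state_dict(traj[k + T])               # θ⁺ = θ⁺_{k+T}
        neg_net = make_net()
        neg_net.load_state_dict(traj[k])                   # Langevin chain starts at θ_k
        sample_theta_langevin(neg_net, energy_fn, (x_syn.detach(), y_syn),
                              steps=L, alpha=alpha)        # θ⁻ ~ q(θ|D̃)
        # Contrastive divergence (Eq. 11); gradients flow into the synthetic images.
        loss = (energy_fn(pos_net, (x_syn, y_syn))
                - energy_fn(neg_net, (x_syn, y_syn)))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_syn.detach()
```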
## 5 Experiments
In this section, we present the performance and experimental findings pertaining to the proposed method. We evaluate our method both quantitatively and qualitatively on several BPC-benchmark datasets with different compression ratios, i.e., the number of images generated per class (ipc). In particular, we perform our experiments on six different datasets: CIFAR10 [28], SVHN [41], MNIST [32], and FashionMNIST [48], along with the more difficult CIFAR100 [28] and Tiny ImageNet [31]. We set ipc = 1/10/50 for each of the above datasets. Further, we use a standard ConvNet architecture for all the experiments unless mentioned otherwise.
Next, since our method relies on posteriors over parameters, and the parameters are tied to a particular network architecture, cross-architecture analysis becomes essential. We show the cross-architecture performance of our method and compare it with other state-of-the-art BPC methods. Further, we explore different choices for the energy function used in our method and show the effect of these choices. We also investigate how the per-epoch training time scales as the number of images per class increases for the CIFAR10 dataset, and examine the GPU memory consumption of each method as the synthetic set grows.
### Baselines
Our proposed method primarily falls under the category of Bayesian Pseudo-Coresets due to the formulation shown in Section 3. Hence, we consider previous BPC methods for comparison. In particular, we include the BPC formulations using reverse-KL divergence (BPC-rKL) [36], forward-KL divergence (BPC-fKL) [25], and the Wasserstein metric (BPC-W) [25]. Additionally, to show that our method gives the best, if not second-best, performance relative to other baselines, we include eight dataset condensation baselines for comparison: Dataset Distillation (DD) [46], Flexible Dataset Distillation (LD) [4], Gradient Matching (DC) [53], Differentiable Siamese Augmentation (DSA) [50], Distribution Matching (DM) [52], Neural Ridge Regression (KIP) [37], Condensed data to align features (CAFE) [45], and Matching Training Trajectories (MTT) [7]. In addition, for the sake of completeness, we include the major coreset selection baselines, such as Herding [8], K-Center, and Forgetting [44].
### Performance on Low-Resolution Datasets
Firstly, we present the results on low-resolution datasets: MNIST, FashionMNIST (FMNIST), SVHN, and CIFAR10. Our findings are listed in Table 1. In particular, we observe that the proposed method significantly outperforms all the BPC methods by large margins; for example, we observe gains of \(11.3\%\), \(6.54\%\), and \(21.01\%\) on CIFAR10 with an ipc of 1, 10, and 50, respectively, compared to the best-performing BPC baselines. Similarly, on SVHN we observe gains of \(18.72\%\), \(6.83\%\), and \(8.92\%\), respectively, compared to the BPC counterparts. A similar trend can be seen for MNIST and FMNIST as well. We attribute this boost in performance to the flexible formulation of the proposed method.
Further, we observe that our method not only outperforms BPC methods but also outperforms SoTA dataset condensation methods in most cases. We find that our performance is better than almost all the dataset condensation baselines, with MTT a close second in most cases. For example, we see gains of \(0.79\%\) and \(0.67\%\) on CIFAR10 with ipc of 1 and 50, respectively, compared to MTT, although there is a loss of about \(7.88\%\) with ipc of 10. Further, there are gains of \(9.19\%\), \(3.93\%\), and \(4.68\%\) on the SVHN dataset with ipc of 1, 10, and 50, respectively, compared to the corresponding best performance of dataset condensation methods. We observe similar trends for the MNIST and FMNIST datasets. This shows that our method, although falling under the category of Bayesian pseudo-coresets, achieves performance comparable to that of heuristic dataset condensation.
We present qualitative visualizations for the MNIST, FMNIST, SVHN, and CIFAR10 datasets with 1 and 10 images per class. It can be seen that, for one image per class, our method does an excellent job of condensing the whole dataset and achieves formidable results. The constructed pseudo-coreset is identifiable but inherits some artifacts due to the constraints on the dataset size. As the number of images per class increases, the model can induce more variation across all the classes and thus produce a more diverse pseudo-coreset. This can be observed in Figure 2, where we show the synthetic set generated with 10 images per class.
### Performance on larger datasets
Next, we show the efficacy of the proposed method on larger datasets with relatively high resolution. For this, we choose the standard benchmark datasets CIFAR100 and Tiny ImageNet. CIFAR100 contains images of size \(32\times 32\) across 100 diverse classes, such as aquatic mammals, fish, flowers, food containers, household furniture, insects, etc. Tiny ImageNet is a subset of the famous ImageNet dataset, with each image of size \(64\times 64\), across 200 classes. The large number of classes in these datasets makes the condensation and creation of pseudo-coresets difficult.
Our findings are presented in Table 2. Our observations are in line with those on the low-resolution datasets discussed in the previous section. In particular, our method outperforms previous SoTA BPC methods by a large margin; for example, we see a gain of \(11.78\%\) on CIFAR100 with an ipc of 1. Further, our method achieves performance comparable to that of dataset condensation methods. It can be seen from Table 2 that our method outperforms MTT on both CIFAR100 and Tiny ImageNet with an ipc of 1. However, MTT gives a gain of \(8.54\%\) over our method on CIFAR100 with ipc 10, which ranks us third best amongst the baselines. Further, on Tiny ImageNet with an ipc of 10, MTT gives a boost of \(2.29\%\) relative to our performance, ranking us second best in this setting.
### Cross Architecture Analysis
As discussed in previous sections, BPC methods rely on divergence minimization between the posteriors associated with the original training set and the synthetic set. Hence, a natural concern that arises is the generalization of such synthetic sets across different network architectures. While this aspect is explored in the dataset condensation literature, the BPC literature has not addressed this concern. For this reason, we show the cross-architecture generalization of synthetic sets generated using BPC methods. We construct the pseudo-coreset for CIFAR10 (ipc=10) using the ConvNet architecture mentioned in the previous section, and train different networks such as ResNet [19], VGG [42], and AlexNet [30] on the pseudo-coreset to test its generalization. Our findings are listed in Table 3.
[Table 1: Test accuracy on MNIST, FashionMNIST, SVHN, and CIFAR10 with 1, 10, and 50 images per class (ipc) for coreset selection, BPC, and dataset condensation baselines and our method.]
It can be seen that previous BPC methods fail to generalize across network architectures, whereas our method generalizes well to other architectures. For instance, the performance of BPC-fKL and BPC-rKL drops by \(34.19\%\) and \(24.42\%\), respectively, on ResNet, resulting in random predictions with an accuracy of almost \(10\%\), whereas our method observes a drop of only \(14.74\%\) while achieving an accuracy of \(41.65\%\). The same pattern can be seen with other architectures, demonstrating that our approach generalizes even when other BPC methods fail.
### Effect of choice of Energy function
As discussed in previous sections, our method requires the creation of training trajectories from the original dataset in order to sample from the posterior associated with it. This requires training the network parameters using a certain loss function, call it \(\ell_{\mathcal{D}}(\cdot)\). Next, we use Langevin dynamics to sample from the posterior associated with the synthetic set. As discussed in Section 3.3, this can be seen as learning the network parameters via noisy gradient descent on the energy function, and it is logical to choose a loss function, call it \(\ell_{\tilde{\mathcal{D}}}(\cdot)\), as the energy function. Hence, a natural question is how the choice of \(\ell_{\mathcal{D}}\) and \(\ell_{\tilde{\mathcal{D}}}\) affects the performance of the pseudo-coreset. For this, we perform a 'cross-loss' analysis, where we observe the effect of different choices of \(\ell_{\mathcal{D}}\) and \(\ell_{\tilde{\mathcal{D}}}\) using three candidate classification loss functions: cross-entropy loss, focal loss, and multi-margin classification loss. Our observations for CIFAR10 (ipc=10) are listed in Table 4. We observe that, for a given loss function, our method performs best when \(\ell_{\mathcal{D}}=\ell_{\tilde{\mathcal{D}}}\). This can be attributed to the fact that the posterior estimates are closer when the same loss function is used to obtain the trained parameters. It is also reflected in the observation that when different losses are used, i.e., when \(\ell_{\mathcal{D}}\neq\ell_{\tilde{\mathcal{D}}}\), there is a drop in the performance of the pseudo-coresets. Further, we observe that amongst all the choices, cross-entropy loss provides the best result.
## 6 Conclusion
Dataset condensation and BPC methods address the issue of over-reliance on large training datasets by generating a small synthetic set with performance comparable to the original training dataset. These methods not only reduce the training costs for downstream tasks but also result in lower carbon emissions. There is, however, a significant performance gap between BPC and dataset condensation techniques. This gap can be attributed to the various assumptions made about the form of the posterior distribution in BPC methods. In our work, we address this issue by using an energy-based distribution to model the posterior associated with the synthetic set, without making any assumption about the form of the posterior for the real dataset. We also derive a contrastive-divergence-based loss to minimize the KL divergence between the posteriors associated with the real and synthetic datasets. Our formulation not only outperforms other BPC methods but also bridges the performance gap between BPC and dataset condensation methods. A better understanding of theoretically grounded approaches such as BPC can not only improve performance on standard classification tasks but also pave the way to extend these methods to other resource-intensive tasks. In the future, we intend to investigate BPC and dataset condensation techniques for large generative models.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \(\ell_{\mathcal{D}}(\cdot)\backslash\ell_{\tilde{\mathcal{D}}}(\cdot)\) & **Cross Entropy** & **Focal** & **Margin** \\ \hline
**Cross Entropy** & \(\mathbf{56.39\pm 0.7}\) & \(55.05\pm 0.28\) & \(45.68\pm 0.72\) \\
**Focal** & \(37.79\pm 0.12\) & \(\mathbf{54.24\pm 0.34}\) & \(43.62\pm 0.11\) \\
**Margin** & \(52.09\pm 0.86\) & \(52.44\pm 0.82\) & \(\mathbf{53.91\pm 0.76}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effect of choosing different training losses and energy functions for the construction of the pseudo-coreset.
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**CIFAR100**} & \multicolumn{2}{c}{**TinyImageNet**} \\ \hline
**Img/Cls** & 1 & 10 & 1 & 10 \\
**Ratio/\%** & 0.2 & 2 & 0.2 & 2 \\ \hline
**LD [4]** & \(11.5\pm 0.4\) & - & - & - \\ \hline
**Random** & \(4.2\pm 0.3\) & \(14.6\pm 0.5\) & \(1.4\pm 0.1\) & \(5.0\pm 0.2\) \\
**Forgetting [44]** & \(4.5\pm 0.2\) & \(15.1\pm 0.3\) & \(1.6\pm 0.1\) & \(5.1\pm 0.2\) \\
**K-Center [15, 40]** & \(8.3\pm 0.3\) & \(7.1\pm 0.2\) & \(4.09\pm 0.0\) & \(11.38\pm 0.0\) \\
**Herding [6, 47]** & \(8.4\pm 0.3\) & \(17.3\pm 0.3\) & \(2.8\pm 0.2\) & \(6.3\pm 0.2\) \\ \hline
**BPC-rKL(logheme) [25, 34]** & \(3.56\pm 0.4\) & - & - & - \\
**BPC-fKL(logheme) [25]** & \(1.27\pm 0.16\) & - & - & - \\
**BPC-fKL(logheme) [25]** & \(1.29\pm 0.2\) & - & - & - \\
**BPC-w(logheme) [25]** & \(12.19\pm 0.2\) & - & - & - \\ \hline
**DC [53]** & \(12.65\pm 0.3\) & \(25.28\pm 0.29\) & \(5.27\pm 0.0\) & \(12.83\pm 0.0\) \\
**DSA [50]** & \(13.88\pm 0.29\) & \(32.34\pm 0.4\) & \(5.67\pm 0.0\) & \(16.43\pm 0.0\) \\
**BM [52]** & \(11.38\pm 0.18\) & \(29.38\pm 0.26\) & \(3.82\pm 0.0\) & \(13.51\pm 0.0\) \\
**KIP [37]** & \(12.04\pm 0.0\) & \(29.04\pm 0.0\) & - & - \\
**CAFE [45]** & \(12.9\pm 0.3\) & \(27.8\pm 0.3\) & - & - \\
**CAFE-PCA(54)** & \(14.0\pm 0.3\) & \(31.82\pm 0.2\) & - & - \\
**MTT [7]** & \(26.02\pm 0.63\) & \(34.08\pm 0.16\) & \(8.27\pm 0.0\) & \(20.11\pm 0.0\) \\ \hline
**Ours** & \(29.79\pm 0.11\) & \(28.42\pm 0.24\) & \(8.39\pm 0.07\) & \(17.82\pm 0.39\) \\ \hline
**Whole Dataset** & \(56.2\pm 0.3\) & - & \(39.33\pm 0.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: CIFAR100 and Tiny ImageNet results for 1 and 10 ipc for all the above methods. The best performer for each set of methods is denoted by an underline (\(\underline{x}\pm\underline{s}\)). The best performer across all methods is denoted in bold (\(\mathbf{x}\pm\mathbf{s}\)). For ease of comparison, we color the best performer in green and the second-best performer in orange.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline & **ConvNet** & **ResNet** & **VGG** & **AlexNet** \\ \hline
**Ours** & \(\mathbf{56.39\pm 0.7}\) & \(\mathbf{41.65\pm 1.03}\) & \(\mathbf{47.51\pm 0.89}\) & \(\mathbf{30.58\pm 1.43}\) \\
**BPC-fKL** & \(44.34\pm 1.11\) & \(10.15\pm 0.21\) & \(10.43\pm 0.33\) & \(10.0\pm 0.0\) \\
**BPC-rKL** & \(34.48\pm 0.48\) & \(10.06\pm 0.08\) & \(10.26\pm 0.35\) & \(10.0\pm 0.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of BPC methods on cross-architecture generalization. For this, we construct the pseudo-coreset for CIFAR10 by using ConvNet architecture, and test it on other architectures like ResNet [19], VGG [42] and AlexNet [30]. The best performer for each architecture is denoted in bold symbols. |
2310.11340 | Contextualized Machine Learning | We examine Contextualized Machine Learning (ML), a paradigm for learning
heterogeneous and context-dependent effects. Contextualized ML estimates
heterogeneous functions by applying deep learning to the meta-relationship
between contextual information and context-specific parametric models. This is
a form of varying-coefficient modeling that unifies existing frameworks
including cluster analysis and cohort modeling by introducing two reusable
concepts: a context encoder which translates sample context into model
parameters, and sample-specific model which operates on sample predictors. We
review the process of developing contextualized models, nonparametric inference
from contextualized models, and identifiability conditions of contextualized
models. Finally, we present the open-source PyTorch package ContextualizedML. | Benjamin Lengerich, Caleb N. Ellington, Andrea Rubbi, Manolis Kellis, Eric P. Xing | 2023-10-17T15:23:00Z | http://arxiv.org/abs/2310.11340v1 | # Contextualized Machine Learning
###### Abstract
We examine Contextualized Machine Learning (ML), a paradigm for learning heterogeneous and context-dependent effects. Contextualized ML estimates heterogeneous functions by applying deep learning to the meta-relationship between contextual information and context-specific parametric models. This is a form of varying-coefficient modeling that unifies existing frameworks including cluster analysis and cohort modeling by introducing two reusable concepts: _a context encoder_ which translates sample context into model parameters, and _sample-specific model_ which operates on sample predictors. We review the process of developing contextualized models, nonparametric inference from contextualized models, and identifiability conditions of contextualized models. Finally, we present the open-source PyTorch package ContextualizedML.
## 1 Introduction
Contextualized ML (Figure 1) aims to learn the meta-effects of contextual information on parametric context-specific models by estimating a context encoder which translates sample context into sample-specific models. By embracing the heterogeneity and context-dependence of natural phenomena, contextualized ML provides representational capacity while retaining the glass-box nature of statistical modeling. Contextualized models can be learned by simple end-to-end backpropagation because they are composed of differentiable building blocks. In the following, we study this paradigm, analyze identifiability and nonparametric inference through Contextualized ML, and provide a Python toolkit.
### Motivation
Modern applications of artificial intelligence are often characterized by training unconstrained ML models on large datasets. These datasets are composed of overlapping groups of samples, either explicitly (e.g. the large dataset is created by combining multiple datasets) or implicitly (e.g. the samples belong to latent sub-populations). To be generalizable, population models tend to prefer global patterns over localized effects, a problem when localized effects are critical to understanding complex processes such as in applications to computational biology (e.g. samples comprise latent cell types) and precision medicine (e.g. patients comprise latent disease subtypes). When faced with a localized effect, population-level models can either ignore the effect or encode the localized effect as an interaction of input variables. Neither of these solutions are attractive: ignoring the effect is high-bias while fitting unrestricted interactions is high-variance.
Thus, we propose to use meta-models to generate context-specific parametric models. In this way, we can reason about the context-specific parameters and summarize meta-phenomena as explicit meta-models. This strategy often allows one to tackle ML challenges with more interpretable models, rather than trying to improve results by gathering more data or by opting for more complex models.
Towards Precision MedicinePrecision medicine seeks to understand the patterns of differentiation between patients such that appropriate care can be provided for each individual. However, cohort-level models estimate the same effects for all patients in a cohort, ignoring sub-cohort heterogeneity. Since patients have different histories, environments, and disease sub-types, cohort-level models cannot appropriately model the patient journeys. As [1] found in clinical evaluation of a predictive model: "Some [doctors] voiced strong concerns that using [a ML model] was the same as applying 'populational statistics' to individual patient decision making. They felt this was unethical." Thus, we seek to estimate models which adapt to patient context and drive personalized understanding. By estimating model parameters as functions of sample context, we can make principled sample-specific inferences.
Figure 1: Contextualized paradigm. Rather than using a single population model which operates identically for all contexts, Contextualized ML estimates a locally-optimal model for each context. The heterogeneity of the population is captured by an encoder that translates sample context into sample-specific parameter values. The contextualized model has the advantage of sharing information across the population while reducing model bias.
Towards Intelligible Artificial IntelligenceSome applications of AI are limited due to strict requirements for intelligible and transparent decisions. Large population-level models with many implicit interaction effects can be difficult to interpret, and while post-hoc procedures to approximate the large model with locally-interpretable models [2] can provide approximations of the model, such approximations do not guarantee capturing the exact behavior of the population model. We propose to approach the same endpoint more directly: by learning contextualized models from the beginning, we achieve direct interpretability without requiring post-hoc interpretation of a black-box model.
Motivating ExampleLet us review the motivating example of [3]: understanding election outcomes at the local level. Given candidate representations, we wish to predict and understand the factors driving the candidate's vote proportion in a particular locality (e.g. county, township, district, etc.). One approach would be to partition the dataset into similar localities and then estimate cohort-specific models for each partition. Unfortunately, by building independent models for each county, we would fail to share information between related counties, forcing us to pool together some localities with fewer samples even though they may have distinct characteristics. This simultaneous loss of power and predictive accuracy is typical of modeling large, heterogeneous datasets with homogeneous models.
Alternatively, instead of seeing these localities as discrete groups, we may embrace the data heterogeneity by modeling the \(i\)th county using a regression model \(f(X_{i};\Phi(C_{i}))\), where \(\Phi(\cdot)\) is a parameter-generating function. This _contextualized_ modeling allows us to train accurate models using only a single sample from each county--this is useful in settings where collecting more data may be expensive (e.g. biology and medicine) or impossible (e.g. elections and marketing). By allowing the context to be sample-specific, \(f\) no longer needs to be complex, and simple linear and logistic regression models will suffice, providing useful and interpretable models for each sample.
## 2 Contextualized Machine Learning
Contextualized ML estimates heterogeneous effects as distributions that adapt to context:
\[Y|X\sim\mathbb{P}_{\Phi(C)}. \tag{1}\]
That is, contextual data \(C\) is transformed into a conditional distribution by a learnable function \(\Phi\). For example, in this notation, the linear varying-coefficient model \(Y|X\sim\text{N}(X\beta C^{T},\sigma^{2})\)[4] becomes the contextualized regression model:
\[\Phi(C):=\beta C^{T},\qquad\mathbb{P}_{\Phi}=\text{N}(X\Phi(C),\sigma^{2}),\]
where \(\beta\in\mathbb{R}^{p\times m}\) transforms context \(C^{T}\in\mathbb{R}^{m\times 1}\) into sample-specific parameters. The free parameters are \(\beta\) and \(\sigma^{2}\), values of which can be estimated by backpropagation. This can be extended to accommodate heteroskedastic noise, e.g. by modeling noise as a separate function of context
\[\Phi(C):=(\beta C^{T},\phi C^{T}),\qquad P_{\Phi}=\text{N}(X\Phi(C)_{1},\Phi(C )_{2}),\]
or uncertainty in the sample-specific parameters, e.g. by a simple mixture
\[\Phi(C):=(\beta_{1}C^{T},\dots,\beta_{m}C^{T}),\qquad P_{\Phi}=\sum_{i=1}^{m} \text{N}(X\Phi(C)_{i},\sigma^{2}).\]
In this way, Contextualized ML amplifies the varying-coefficient paradigm by applying the power of deep learning and auto-differentiation.
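As a concrete illustration of this paradigm, the following is a minimal PyTorch sketch (not the ContextualizedML implementation) of a contextualized linear regression trained end-to-end by backpropagation; the architecture and hyperparameters are illustrative:

```
import torch
import torch.nn as nn

class ContextualizedLinear(nn.Module):
    # An MLP context encoder maps context C to sample-specific weights
    # for a linear model over predictors X.
    def __init__(self, c_dim, x_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim))           # Phi(C) in R^{x_dim}

    def forward(self, C, X):
        beta = self.encoder(C)                  # one parameter vector per sample
        return (X * beta).sum(dim=-1)           # y_i = <x_i, Phi(c_i)>

model = ContextualizedLinear(c_dim=4, x_dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
C, X, Y = torch.randn(128, 4), torch.randn(128, 3), torch.randn(128)
for _ in range(100):
    loss = ((model(C, X) - Y) ** 2).mean()      # Gaussian likelihood => MSE
    opt.zero_grad()
    loss.backward()
    opt.step()
```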
### Contextualizing Models
The general approach to designing contextualized versions of cohort-based estimators is summarized in Figure 2. There are two potentially difficult steps in this process: defining a differentiable objective function, and designing a context encoder which operates on a tractable model solution space. While differentiable objective functions are problem-specific, there are a few general tricks which can often be used to improve the learnability of the deep context encoder \(\Phi(C)\).
Restrictive Context EncodersUsing a smaller class of context encoders can improve estimation. In practice, surprisingly simple forms of models can often be effectively used. For example, neural additive models [5], which are differentiable forms of additive models, can be used as context encoders to eliminate interaction effects between contextual features and enable feature-specific interpretability of context-parameter links.
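A minimal sketch of one such restricted encoder is shown below, assuming one small subnetwork per contextual feature so that context-parameter links stay feature-separable, in the spirit of neural additive models:

```
import torch
import torch.nn as nn

class AdditiveContextEncoder(nn.Module):
    # Phi(C) = sum_j g_j(C_j): no interactions between contextual features.
    def __init__(self, c_dim, out_dim, hidden=16):
        super().__init__()
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
            for _ in range(c_dim))

    def forward(self, C):
        return sum(net(C[:, j:j + 1]) for j, net in enumerate(self.nets))
```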
Archetype-Based ModelingArchetype-based modeling can reduce the dimensionality of the output of \(\Phi\). By representing sample-specific models as weightings of \(K\) archetypes, the context encoder only needs to output a vector of size \(K\), rather than the full model parameterization.
\[\Phi(C):=\sum_{k=1}^{K}\phi(C)_{k}A_{k}\]
Figure 2: How to contextualize a cohort-based estimator. **(1)** Define a differentiable objective function for each sample-specific model (red). **(2)** Define a differentiable context encoder to generate sample-specific parameters (blue). **(3)** Re-parameterize the context encoder to reduce solution space (yellow). **(4)** Optimize end-to-end (green).
Furthermore, by restricting the archetype weightings to be non-negative and sum to 1 (e.g. by applying a softmax to \(\phi\)), the sample-specific models are a convex combination of the archetypes and can be interpreted as subtype probabilities with archetypes corresponding to subtype extrema.
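A minimal sketch of an archetype-based encoder with softmax-constrained weightings is shown below; the dimensions and architecture are illustrative:

```
import torch
import torch.nn as nn

class ArchetypeEncoder(nn.Module):
    # Phi(C) = sum_k softmax(phi(C))_k * A_k: sample-specific parameters as a
    # convex combination of K learnable archetypes.
    def __init__(self, c_dim, param_dim, k=5, hidden=32):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(c_dim, hidden), nn.ReLU(), nn.Linear(hidden, k))
        self.archetypes = nn.Parameter(torch.randn(k, param_dim))

    def forward(self, C):
        w = torch.softmax(self.phi(C), dim=-1)   # subtype probabilities
        return w @ self.archetypes               # (batch, param_dim)
```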
Regularizing Toward Population ModelsBy simultaneously modeling all contexts, contextualized models can be encouraged to stay closer to the population model. Let \(t\in T\) index a task-specific distribution \(\mathbb{P}_{t}(Y|X)\). Multitask learning [6; 7] seeks to improve the estimation of each \(\mathbb{P}_{t}(Y|X)\) by sharing power between distinct tasks \(t\). Theorem 2 of [8] shows that the task-specific distribution is the sum of the overall distribution and a task-specific pure interaction effect:
\[Y|X,t=Y|X+\rho(Y|X,t) \tag{2}\]
where \(\rho(Y|X,t)\) is a pure interaction effect [9]. Thus, as sample context is an implicit task specifier, context-specific estimators \(Y|X,C\) also provide estimates of \(Y|X\) and \(\rho(Y|X,C)\); by regularizing against interactions between \(X\) and \(C\) (e.g., with Dropout [10]), we can encourage similarity among the context-specific distributions and push them closer to the population model. Finally, we can use purification [9] to recover the task-specific interaction and the main effects from \(Y|X,C\).
### Related Work
One of the earliest ways to model sample-specific parameters as the output of a learnable function was the linear varying-coefficients (VC) model [4], in which regression parameters are produced by a linear function of covariate values, e.g. \(f(x;z)=\langle x,\theta z\rangle\), with \(\theta\in\mathbb{R}^{P\times K}\) for \(x\in\mathbb{R}^{P}\), \(z\in\mathbb{R}^{K}\). In short, contextualized ML combines the adaptability of the VC model with the power of modern ML architectures by using deep neural networks as context encoders. This combined approach was first proposed to improve the interpretability of deep learning models [11] and has achieved good performance on varied tasks including survival prediction [12] and language modeling [13]. There have also been attempts to provide a nonparametric parameter-generating function by distance-matching regularization [3; 14], which proposes that there is a distance metric on contextual information which approximates the Euclidean distance between sample parameters (i.e. \(\|\theta_{i}-\theta_{j}\|\approx d_{C}(C_{i},C_{j})\) for samples \(i,j\)). While this regularization-based scheme provides extra flexibility by obviating the requirement of a parametric context encoder, it also precludes end-to-end training due to the lack of a differentiable context encoder.
#### 2.2.1 Alternative Approaches
Sample-Specific Models as DeviationsRecent work has also developed sample-specific estimators as independent deviations from a population model [15; 16; 17; 18]. This is particularly useful for structured models in which prior knowledge of the graph structure can enable efficient testing of sample-specific deviations. However, estimating sample-specific models as deviations requires \(\mathcal{O}(n)\) estimation procedures for \(n\) samples and does not share power between the estimators. As a result, these approaches are more applicable to domains with fewer samples and less informative contextual data.
Heterogeneous SamplesStatistical tests [19; 20; 21] can identify whether a cohort contains heterogeneous samples, enabling the identification of partitions that induce accurate group-based models. This perspective is well-suited to situations with a small numbers of groups or pre-defined partitions. However, if there are many groups relative to the number of samples, group-based modeling becomes high-variance and if samples arise from a continuous combination rather than discrete partitions, group-based models become high-bias. In such situations, higher resolution via multitask learning is required.
Post-Hoc Model InterpretationsMethods of post-hoc interpretation often seek to explain complex models by estimating local approximations [22, 23, 24, 25]. For example, Local-Interpretable Model-Agnostic Explanation (LIME) [2] constructs local interpretations for each sample by training a linear model to approximate the outputs of a black-box model in a particular neighborhood. These local models are interpretable and approximate the output of any model, but are constrained to explain only a fixed black-box population model. In contrast, contextualized regression directly estimates local models, enabling dynamic collections of models that retain local interpretability.
### Benefits of Contextualized ML
In the following, we demonstrate a few benefits of contextualized ML. More details and reproducible demos for these perspectives are available in this Jupyter notebook.
Contextualized ML Enables High-Resolution HeterogeneityBy sharing information between all contexts, contextualized learning is able to estimate heterogeneity at fine-grained resolution (Figure 3). Cluster- or cohort-based models treat every partition independently, limiting heterogeneity to the coarse-grained resolution at which cohorts are large enough for independent estimation. For example, this ability was exploited by context-specific Bayesian networks [26] to reconstruct patient-specific gene expression networks, whereas cohort models would require \(\mathcal{O}(p^{2})\) samples in each cohort.
Contextualized ML Interpolates Between Observed ContextsBy learning to translate contextual information into model parameters, contextualized models learn about the meta-distribution of contexts (Figure 4). As a result, contextualized models can adapt to contexts which were never observed in the training data, either interpolating between observed contexts or extrapolating to new contexts for which the meta-relationship between context and local parameters holds as in the training data.
Contextualized ML Enables Analysis of Latent ProcessesCluster or cohort models which are inferred by partitioning samples into groups make assumptions of IID data within each group. This approach works well when contexts are discrete, low-dimensional, and every context-specific population is well observed, but in many complex processes, contexts are continuous, high-dimensional, and sparsely observed. When cluster or cohort approaches are applied in these circumstances, downstream modeling tasks are distorted by mis-specification, where many non-IID samples are funneled into a single model. Consequently, theoretical guarantees about how well a cluster or cohort model can represent IID populations often do not apply in light of real-world heterogeneity. In contrast, contextualized learning provides a way to estimate latent, non-IID models for all samples with minimal assumptions about the grouping or clustering of these samples (Figure 5). Samples can then be grouped on the basis of model parameters and distributional differences to produce clusters in the latent model space underlying each sample. Contextualized ML intuitively recovers latent structures underlying data generation in a way _a priori_ clustering cannot. Allowing downstream models to determine the grouping of samples, rather than upstream contexts, replaces traditional cluster analysis with contextualized cluster analysis.
Figure 3: By sharing power between samples, contextualized ML recovers heterogeneous effects at resolutions which are finer-grained than can be done by partition-based cohort models.
Figure 4: By learning the meta-relationship between context and model parameters, contextualized ML enables interpolation between observed contexts.
Figure 5: By estimating a contextualized model for each sample, contextualized ML uncovers important factors and latent processes in heterogeneous populations.
### Python Package
Contextualized GLMs are implemented in ContextualizedML with easy interfaces. These GLMs take the form
\[\mathbb{E}[Y|X,C]=f\left(X\Phi(C)\right), \tag{3}\]
where \(\Phi(C)\) is a deep context encoder. For example, contextualized linear regression:
\[\mathbb{E}[Y|X,C]=X\Phi(C), \tag{4}\]
is available by the ContextualizedRegressor class:
```
from contextualized.easy import ContextualizedRegressor

model = ContextualizedRegressor()
model.fit(C_train, X_train, Y_train)
```
Similarly, contextualized logistic regression:
\[\text{Pr}(Y=1|X,C)=\sigma(X\Phi(C)) \tag{5}\]
is available by the ContextualizedClassifier class:
```
from contextualized.easy import ContextualizedClassifier

model = ContextualizedClassifier()
model.fit(C_train, X_train, Y_train)
```
Common constructor keywords include (a combined usage sketch follows these lists):
* n_bootstraps: Number of bootstrap resampling trajectories to use.
* encoder_type: mlp, ngam, or linear; the type of model to use as the context encoder. Alternatively, users may pass in their own encoder as a PyTorch module.
* loss_fn: A function to calculate loss.
* alpha: non-negative float, regularization strength.
* mu_ratio: float in range (0.0, 1.0), governs how much the regularization applies to context-specific parameters or context-specific offsets.
* l1_ratio: float in range (0.0, 1.0), governs how much the regularization penalizes \(\ell_{1}\) vs \(\ell_{2}\) parameter norms.
Common fitting keywords include:
* max_epochs: positive number, the maximum number of epochs to fit. Early stopping is turned on by default.
* learning_rate: positive float, default is 1e-3.
* val_split: float in range (0.0, 1.0), how much of the data to use for validation (early stopping).
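Putting the keyword arguments above together, a typical call might look as follows (the hyperparameter values are illustrative, not recommendations):

```
from contextualized.easy import ContextualizedRegressor

model = ContextualizedRegressor(n_bootstraps=3, encoder_type="mlp",
                                alpha=1e-3, l1_ratio=0.5)
model.fit(C_train, X_train, Y_train,
          max_epochs=50, learning_rate=1e-3, val_split=0.2)
```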
## 3 Nonparametric Inference from Contextualized Models
Contextualized ML provides a framework to estimate nonparametric densities by viewing the composite densities as combinations of local parametric distributions. Let us consider a regression \(Y|X\sim p(f(X))\). This regression may be considered nonparametric in two non-exclusive respects:
* the transmission function \(f\) may not be well-represented by a parametric family, or
* the distribution \(p\) may not be well-represented by a parametric family.
Contextualized linear models can be used to recover either of these forms of nonparametric models.
Contextualized Linear Models Represent Nonparametric Transmission FunctionsFirst, contextualized linear models can represent nonparametric transmission functions by allowing coefficients to vary with context. As \(\Phi(C)=\mathbb{E}_{X|C}[\frac{\partial\mathbb{E}[Y|X,C]}{\partial X}]\), we can view \(\Phi\) as a differential expression describing \(\frac{\partial\mathbb{E}[Y|X,C]}{\partial X}\) and reconstruct smooth, differentiable transmission functions by stitching together context-specific linear transmission functions (Figure 6). This approach approximates a conditional mixing distribution \(\Gamma(x)=\sum_{k=1}^{K}\lambda_{k}(x)\gamma_{k}(x)\) of \(K\) true mixtures by fitting an overfitted mixture of \(L\gg K\) atoms and then clustering these \(L\) atoms into \(K\) groups such that each group approximates \(\gamma_{k}\). Based on this clustering, we can define a new mixing measure whose atoms are close to some \(\gamma_{k}\) for each \(k\). This mixing measure will converge to \(\Gamma\) as \(L\rightarrow\infty\), allowing us to approximate \(\Gamma\) to arbitrary precision. This framework is illustrated in Figure 6 and retains identifiability of the nonparametric transmission functions under reasonable assumptions of component separation [27].
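As an illustration of the stitching idea (a sketch only, using locally-weighted least squares rather than the ContextualizedML estimator), the snippet below recovers a smooth nonlinear transmission function from context-specific linear fits, with the context taken to be \(x\) itself:

```
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 500))
y = np.sin(x) + 0.05 * rng.standard_normal(500)   # nonparametric target

centers = np.linspace(-3, 3, 25)   # context "atoms" at which local models live
bandwidth = 0.4
f_hat = np.empty_like(centers)
for i, c in enumerate(centers):
    w = np.exp(-0.5 * ((x - c) / bandwidth) ** 2)    # locality in context
    X = np.stack([np.ones_like(x), x - c], axis=1)   # local linear design
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    f_hat[i] = beta[0]   # local intercept approximates f at context c
print(np.max(np.abs(f_hat - np.sin(centers))))       # stitching error
```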
Contextualized Models Represent Non-Gaussian OutcomesSecond, contextualized models can represent non-Gaussian outcomes by summing context-specific Gaussian distributions. As locally-Gaussian distributions are universal approximators [27], any outcome distribution can be constructed by combining context-specific Gaussian distributions. If \(Y|X,C\) is not well-approximated as a Gaussian distribution, we can _pseudo-sample_ extra noise variables \(Z\) which localize the distribution such that \(Y|X,C,Z\) is well-approximated as a Gaussian (Figure 7). In an extreme case, each value of \(Z\) can identify an individual training sample with corresponding locally-Gaussian outcome distributions that sum to form a meaningful composite distribution. As with many latent variable problems, in test samples we cannot identify which value of \(Z\) would be most correct; by integrating over all pseudo-sampled values of \(Z\) we can reconstruct the nonparametric uncertainty.
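In its simplest form, with each pseudo-sampled \(Z\) indexing one training observation, this reduces to a kernel-density-style construction; a minimal sketch:

```
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Bimodal (non-Gaussian) outcome samples
samples = np.concatenate([rng.normal(-2, 0.5, 1000), rng.normal(2, 0.5, 1000)])
# Each Z picks one observation; the local model is a Gaussian centered on it.
grid = np.linspace(-5, 5, 201)
density = norm.pdf(grid[:, None], loc=samples[None, :], scale=0.3).mean(axis=1)
# Averaging (integrating) over Z recovers the bimodal composite distribution.
```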
Figure 6: **(A)** We observe data arising from a mixture of nonparametric context-specific densities \(\gamma_{1}\) and \(\gamma_{2}\), then fit context-specific regression functions. **(B)** These context-specific atoms can be clustered and **(C)** smoothed into components to produce a nonparametric mixture model. The clustering recovers \(\gamma_{1}\), \(\gamma_{2}\) if the components are well-separated.
## 4 Identifiability of Contextualized Models
When seeking to understand contextualized models, we are interested in questions of identifiability: how many sets of sample-specific models could equivalently recapitulate the observed data? For example, we know that both population and group-level linear models are identifiable under common conditions [28; 29; 30], but sample-specific models without covariates or constraints are not identifiable. Does the process of generating contextualized models from a shared context encoder induce identifiability? Here, we present an informal, graphical view of identifiability of contextualized models which suggests that identifiability is influenced by the flexibility of _both_ the context encoder and the sample-specific models.
NotationLet us consider a sample-specific model class parameterized by \(\theta\in\mathcal{H}\subset\mathbb{R}^{p}\) which induces solution space \(s(x,y)=\{\theta\in\mathcal{H}:h(x;\theta)=y\}\) for sample \((x,y)\). A dataset \(\mathcal{D}=(C,X,Y)=[(C_{1},X_{1},Y_{1}),\ldots,(C_{n},X_{n},Y_{n})]\) induces a list of solution spaces \(S(\mathcal{D})=[s(X_{1},Y_{1}),\ldots,s(X_{n},Y_{n})]\). For context encoders parameterized by \(\phi\in\mathcal{G}\subset\mathbb{R}^{m}\), let \(G(\mathcal{D})=\{\phi\in\mathcal{G}:g(C_{i};\phi)\in s(X_{i},Y_{i})\ \forall\ i\in[1,\ldots,n]\}\) be the set of allowable context encoders for this dataset. When \(|G(\mathcal{D})|\leq 1\), there is at most one context encoder which maps each sample's context observation to its corresponding solution space, and we can say that the contextualized models are identifiable for this dataset.
Identifiability of Population ModelsAs a comparison, let us first consider population models from this perspective. Population models, which share \(\theta\) for all samples, can be seen as constant context encoders: \(g(c;\phi)=\phi\). Thus, the set of allowable context encoders for a dataset is \(G_{\text{pop}}(\mathcal{D})=\{\theta\in\mathcal{H}:h(X_{i};\theta)=Y_{i}\ \forall\ i\ \in[1,\ldots,n]\}=\bigcap_{i=1}^{n}s(X_{i},Y_{i})\), i.e. identifiability of a population model is defined by the size of the intersection of the sample-specific solution spaces. For example, identifiability of a linear regression model is determined by how many sample-specific solution spaces (hyperplanes) coincide and how many intersect: if \(p\) solution spaces intersect in general position, the linear model of \(p\) variables is identifiable. For linear regression with \(p=2\), \(n\geq 2\) is sufficient to provide identifiability (Figure 8A). For linear regression with \(p=3\), the sample-specific solution spaces have \(2\) degrees of freedom and hence 2 samples can only constrain \(G_{\text{pop}}(\mathcal{D})\) to a 1-dimensional subspace (Figure 8B).
Identifiability of Contextualized ModelsFor contextualized models, we are interested in the set of allowable context encoders for each sample: \(\phi^{*}(c,s)=\{\phi\in\mathcal{G}:g(c;\phi)\in s\}\). The intersection of these sample-specific sets of allowable context encoders determines the allowable context encoders for the data: \(G_{\text{contextualized}}(\mathcal{D})=\bigcap_{i=1}^{n}\phi^{*}(C_{i},s(X_{i},Y_{i}))\). The dimension of \(\phi^{*}(c,s)\) is upper-bounded by the product of the dimension of \(s\) and a measure of the redundancy in the context encoder (how many ways each solution can be generated). Ignoring pathological collinearity, this suggests a simple heuristic for contextualized identifiability: \(n>d_{g}d_{s}\), where \(d_{g}\) is the degree of redundancy in the context encoder class and \(d_{s}\) is the number of degrees of freedom in each solution space \(s\).
Figure 7: Pseudo-sampling procedure for representing nonparametric distributions. **(A)** The density \(Y|X,C\) may not be well-approximated by a Gaussian distribution. **(B)** To overcome this, we can extend context by introducing a noise variable \(Z\) to pseudo-sample localized overfitted distributions centered at each sample observation (red vertical tick marks along the horizontal axis). **(C)** By integrating over the introduced noise variable, we approximate the nonparametric distribution. **(D)** The approximation improves as the number of observations increases.
A few examples may make this heuristic more concrete. For population models, \(d_{g}=1\) (only a constant function can return the same value for all inputs), and hence identifiability of population models is determined by the number of degrees of freedom in the solution space. For contextualized linear models, \(d_{s}=p-1\), suggesting that \(n>d_{g}(p-1)\) is a useful criterion for identifiability of contextualized linear models. For linear varying-coefficients models, this criterion becomes \(n>m(p-1)\), which can be compared to traditional identifiability criteria for linear varying-coefficients models [31; 32; 33; 34; 35]. With \(m=1\) and \(p=2\) (Figure 8C), 2 samples are sufficient for identifiability. Note that \(m=1\) means that the context encoder operates on a single contextual variable; this single contextual variable is typically a vector of ones to accommodate offsets. For both \(m=p=2\) (Figure 8D, left) and \(m=1,p=3\) (Figure 8D, right), at least 3 samples are required for identifiability.
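For the linear VC model, the traditional criterion referenced above can also be checked numerically: writing \(y_{i}=x_{i}^{\top}\theta z_{i}=\mathrm{kron}(x_{i},z_{i})\cdot\mathrm{vec}(\theta)\), \(\theta\) is identifiable exactly when the stacked Kronecker rows reach rank \(pK\). A small sketch of this complementary check (not the geometric heuristic itself):

```
import numpy as np

rng = np.random.default_rng(0)
p, K = 3, 2   # feature dim p, context dim K; theta has p*K free parameters

def design_rank(n):
    # Each exact-fit sample gives one linear constraint on vec(theta).
    rows = np.stack([np.kron(rng.standard_normal(p), rng.standard_normal(K))
                     for _ in range(n)])
    return np.linalg.matrix_rank(rows)

for n in [p * K - 1, p * K]:
    print(n, design_rank(n))   # rank reaches p*K (identifiability) at n = p*K
```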
## 5 Discussion
We have examined Contextualized ML, a paradigm for context-specific inference of differentiable models. This framework provides a principled method for sample-specific inference and analysis of heterogeneous effects, and we have presented the package ContextualizedML to make standard tasks of context-specific regression and context-specific network inference accessible to Python users.
Figure 8: Graphical depiction of identifiability. **(A-B)** Population models are defined by the intersection of sample-specific solution spaces. In each pane, we have two solution spaces \(s(X_{1},Y_{1})\) and \(s(X_{2},Y_{2})\) with their intersection marked in yellow. If each sample-specific solution space has \(p\) dimensions of freedom, then identifiability requires \(p\) intersecting solution spaces. **(C-D)** Contextualized models are defined by the intersection of the allowable context encoder spaces, and hence can lose identifiability for either of two reasons: excess flexibility in the context encoder \(\phi\) or excess flexibility in the sample-specific solution spaces \(s\).
Several research directions remain open. While deep learning-based context encoders and auto-differentiation libraries are useful to circumvent requirements of parametric assumptions and analytical solutions, there is no guarantee that this learning scheme is optimal. In addition, these methods rely on contextual data to accurately represent latent phenomena; extending methods to generate sample representations from more diverse data sources (e.g. foundation models) could improve the learned models. Beyond questions of estimation procedures, there are also open questions regarding the analysis of estimated sample-specific models. Once we have estimated sample-specific model parameters, what is the best way to summarize these new representations: should we cluster the estimated parameters, or is it best to present these models to users as sample-specific models? These questions scratch the surface of the wide potential that contextualized ML unlocks for improved methods of data analysis.
## Acknowledgements
We thank Wesley Lo, Jannik Deuschel, Juwayni Lucman, Alyssa Lee, and Aaron Alvarez for their contributions to the development and use of the Python package. We are also very grateful to Bryon Aragam, Maruan Al-Shedivat, Avinava Dubey, Amir Alavi, and Rich Caruana for valuable discussions.
|
2302.01342 | Curriculum-Guided Abstractive Summarization | Recent Transformer-based summarization models have provided a promising
approach to abstractive summarization. They go beyond sentence selection and
extractive strategies to deal with more complicated tasks such as novel word
generation and sentence paraphrasing. Nonetheless, these models have two
shortcomings: (1) they often perform poorly in content selection, and (2) their
training strategy is not quite efficient, which restricts model performance. In
this paper, we explore two orthogonal ways to compensate for these pitfalls.
First, we augment the Transformer network with a sentence cross-attention
module in the decoder, encouraging more abstraction of salient content. Second,
we include a curriculum learning approach to reweight the training samples,
bringing about an efficient learning procedure. Our second approach to enhance
the training strategy of Transformers networks makes stronger gains as compared
to the first approach. We apply our model on extreme summarization dataset of
Reddit TIFU posts. We further look into three cross-domain summarization
datasets (Webis-TLDR-17, CNN/DM, and XSum), measuring the efficacy of
curriculum learning when applied in summarization. Moreover, a human evaluation
is conducted to show the efficacy of the proposed method in terms of
qualitative criteria, namely, fluency, informativeness, and overall quality. | Sajad Sotudeh, Hanieh Deilamsalehy, Franck Dernoncourt, Nazli Goharian | 2023-02-02T11:09:37Z | http://arxiv.org/abs/2302.01342v2 | # Curriculum-Guided Abstractive Summarization
Sajad Sotudeh
Work partially done during the internship at Adobe Research.
Hanieh Deilamsalehy
Work partially done during the internship at Adobe Research.
Franck Dernoncourt
Work partially done during the internship at Adobe Research.
Nazli Goharian
Work partially done during the internship at Adobe Research.
###### Abstract
Recent Transformer-based summarization models have provided a promising approach to abstractive summarization. They go beyond sentence selection and extractive strategies to deal with more complicated tasks such as novel word generation and sentence paraphrasing. Nonetheless, these models have two shortcomings: (1) they often perform poorly in content selection, and (2) their training strategy is not quite efficient, which restricts model performance. In this paper, we explore two orthogonal ways to compensate for these pitfalls. First, we augment the Transformer network with a sentence cross-attention module in the decoder, encouraging more abstraction of salient content. Second, we include a curriculum learning approach to reweight the training samples, bringing about an efficient learning procedure. Our second approach to enhance the training strategy of Transformers networks makes stronger gains as compared to the first approach. We apply our model on _extreme_ summarization dataset of _Reddit TIFU_ posts. We further look into three cross-domain summarization datasets (_Webis-TLDR-17_, _CNN/DM_, and _XSum_), measuring the efficacy of curriculum learning when applied in summarization. Moreover, a human evaluation is conducted to show the efficacy of the proposed method in terms of qualitative criteria, namely, fluency, informativeness, and overall quality.
## 1 Introduction
Text summarization systems aim to condense a piece of text to a shorter form that preserves the major information within the original text. This task is broadly done in two ways: (1) extractive (Nallapati et al., 2017; Xiao and Carenini, 2019; Xu et al., 2020) which assembles the salient sentences directly from the source text, and (2) abstractive (Celikyilmaz et al., 2018; Lebanoff et al., 2018; Liu and Lapata, 2019; Zou et al., 2020; Sotudeh et al., 2021) that involves paraphrasing, and generating novel words that are not present in the source text. Recent efforts have also tackled the task using hybrid models (See et al., 2017; Hsu et al., 2018; Chen et al., 2019; MacAvaney et al., 2019).
Over the last decade, neural summarization models based on RNNs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017) have achieved promising results on text summarization. The recent success of pre-trained language models on a wide variety of downstream tasks, including summarization, has driven the current state-of-the-art to a new level. While such approaches generate fluent summaries, a few studies have recognized _content selection_ as their pitfall (Gehrmann et al., 2018; Narayan et al., 2020), limiting model performance at generating _informative_ summaries. In this research, we aim to extend the decoder by inducing sentential information as a _saliency signal_ that can come in handy when summarizing the source.
Large-scale deep neural models are often hard to train, leaning on intricate heuristic set-ups which can be time-consuming and expensive to tune (Gong et al., 2019; Chen et al., 2021). This is especially the case for Transformers, which have been shown to consistently outperform RNN networks when rigorously tuned (Popel and Bojar, 2018), but also require heuristics such as specialized learning rates and large-batch training (Platanios et al., 2019). In this paper, we attempt to overcome this problem by introducing a _curriculum learning (CL)_ strategy for training the summarization model, leading to improved convergence time and performance. Inspired by humans' teaching style, _curriculum learning_ suggests moving the teaching process from easier samples to more difficult ones, an idea that dates back to the nineties (Elman, 1993). The driving idea behind this approach is that networks can accomplish better task learning when the training instances are exposed to the network in a specific order, from easier samples to more difficult ones (Chang et al., 2021). In the context of neural networks, this process can be thought of as a technique that makes the network robust to getting stuck at local optima, which is more likely in the early stages of the training process. Systems equipped with curriculum learning have been reported to show strong generalization, faster convergence time, and even improved model performance (Platanios et al., 2019).
After identifying the two drawbacks above (i.e., content selection and inefficient training) of Transformer-based networks, we develop our summarization framework to address these shortcomings. To remedy the first drawback, we propose to augment the transformer decoder with a _sentence cross-attention layer_, encouraging the decoder to pay more attention to salient sentences of the source text while generating the abstractive summary. For the second, we supply the summarization model with curriculum learning objectives. We, specifically, utilize the SuperLoss (Castells et al., 2020) function that falls into the family of confidence-aware curriculum learning techniques, introducing a new parameter called confidence (i.e., \(\sigma\)) to the network. While learning this parameter is inefficient, especially given the abundance of training instances in summarization, SuperLoss bridges this limitation by directly using the converged value of confidence at a specific learning state. We validate our model on the _extreme summarization_ task (Narayan et al., 2018), where the aim is to produce a one-sentence summary with extreme compression and high abstraction. To this end, we make use of the _Reddit TIFU_ (Kim et al., 2019) and Webis-TLDR-17 (Volske et al., 2017) datasets containing 42k and 4M instances, respectively, with each pair including a Reddit post along with its Tldr1 summary. To measure our model's cross-domain performance, we further report model performance on the CNN/DM (See et al., 2017) and XSum (Narayan et al., 2018) large-scale news datasets. We show that the inclusion of curriculum learning allows for a remarkable performance of neural Transformer-based summarizers. We further carry out a comprehensive human evaluation to examine the efficacy of our model in terms of three qualitative metrics: fluency, informativeness, and overall quality.
Footnote 1: TLDR is the abbreviation of “Too Long, Didn’t Read”.
## 2 Related Work
**Pre-trained Language Modeling.** During the last few years, self-supervised pre-trained language models have gained increased attention from research community due to their considerable improvements in a variety of NLP tasks. Different variants of such models are pre-trained on a large amount of unlabeled data Devlin et al. (2019); an (2019); Peters et al. (2018), each with various pre-training objectives. While such models are inherently proposed to perform language modeling task, it has been made possible to fine-tune them on a wide range of downstream NLP tasks, summarization being one of them. Liu and Lapata (2019) were the first to fine-tune Bert for summarization task. They specifically proposed three variants of BertSum including BertSumExt for extractive summarization, BertSumAbs for abstractive summarization, and BertSumExtAbs which is a two-stage fine-tuning approach, exploiting extractive and abstractive objectives. Following up this line of research, Zhang et al. (2020) proposed Pegasus with pre-training objectives specific for text summarization and achieved state-of-the-art results on 12 downstream summarization tasks. In a parallel line, Lewis et al. (2020) proposed Bart and showed its efficacy on language generation tasks such as text summarization. Unlike BertSum that uses merely pre-trained Bert encoder, Pegasus and Bart exploit both pre-trained encoder and decoder for language generation.
**Sentence-guided Summarization.** Using sentence representations as extractive signals along with token embeddings in neural sequence-to-sequence models has a recent history. This idea is inspired by the fact that while general encoder-decoder frameworks produce fluent targets, they often fall short in content selection (Sotudeh et al., 2020). A few works have noted that this problem can be addressed by combining extractive and abstractive objectives (Gehrmann et al., 2018). While there have been numerous efforts in combining such objectives in traditional RNN networks (See et al., 2017; Chen et al., 2019; Lebanoff et al., 2018), few studies have explored their efficacy in Transformer-based networks. For instance, Liu and Lapata (2019) proposed BertSumExtAbs to utilize extractive objectives from the BertSumExt model and further incorporate them into the Transformer decoder to perform abstractive summarization. More recently, Akiyama et al. (2021) proposed Hie-Bart, which adds a self-attention layer for incorporating sentence importance in the encoder. While Hie-Bart augments the encoder with sentential information, the incorporation of such a module in the decoder has not been explored in the literature. To the best of our knowledge, we are the first to explore this direction for Transformer-based networks.
**Curriculum Learning.** Curriculum Learning (CL) [10] has gained growing interest from the research communities during the last decade [13; 14; 15], although its teaching approach (i.e., learning from easy instances to more difficult ones), known as _incremental learning_, dates back to the nineties [11]. The underlying idea of this technique is to provide a training strategy that flows the learning process from easy samples to harder ones, which results in improved model performance, decreased training time, and an enhanced generalization ability of the model [1]. Bengio et al. (2015) were the first to apply this strategy in the context of sequence prediction using RNN networks through their _scheduled sampling_ approach, which gently changes the training process from ground truth tokens to model-generated ones during decoding. Platanios et al. (2019) proposed a CL-based NMT framework that decides to visit training samples based on their difficulty and the competence state of the model. A sample's _difficulty_ is a key concept in this scheme as it is used to distinguish easy examples from difficult ones. Researchers have used many textual features as the "difficulty measure", including n-gram frequency [12], word rarity, and sentence length [13]. Recent works [23; 1] have made use of confidence-aware approaches that learn the difficulty of training samples and dynamically reweight samples in the training process.
## 3 Our Approach
In this section, we describe the details of our proposed models, including (1) an extension of the Bart model in which a cross-attention layer is added to the Bart decoder; and (2) our curriculum learning architecture added on top of Bart's Transformer-based framework, which upweights easier training samples, thereby increasing their contribution in the learning stage. Both of these extensions can be added to Bart's Transformer network and trained either jointly or independently.
### Sentence-guided Bart
While Bart has been shown to be promising in producing abstractive summaries by virtue of its powerful pre-trained encoder and decoder, it suffers from a major pitfall that restricts model efficacy in content selection. Figure 1 shows a Reddit post along with Bart's generated and ground truth Tldr, demonstrating such a shortcoming. As observed, while the generated summary appears to be well-written and fluent, it ignores salient source regions and focuses on less important parts of the source. Considering the effectiveness of combining extractive and abstractive objectives, we extend Bart by adding a cross-attention layer to induce sentences' importance at decoding time. To this end, we first define a sequence labelling task, where the goal is to predict sentences' _relative importance_ score (i.e., \(\mathbf{y}\)) such that Bart is fine-tuned to learn sentential saliency. The _relative importance_ score is cast as the normalized mean of the Rouge-2 and Rouge-L scores of source sentences with respect to the Tldr summary:
\[\mathbf{y}=\text{relative importance}(s_{i})=\frac{\text{RG}_{\text{2+L}}(s_{i})}{ \sum\limits_{s_{i}\in R}\text{RG}_{\text{2+L}}(s_{i})} \tag{1}\]
where \(s_{i}\) is the sentence in the \(i\)th position, \(R\) is the set of the post's sentences, and \(\text{RG}_{\text{2+L}}(.)\) is a function that takes in a source sentence and outputs the mean of its Rouge-2 and Rouge-L scores. To adapt the sequence classification task to the sequential sentence tagging problem, we insert </s> tokens (i.e., EOS in the Bart vocabulary) at the end of each input sentence and then feed the result into the Bart network, similar to Liu and Lapata (2019) for BertSum. As Bart encodes each input token through its network, the encodings associated with </s> tokens specifically represent the features of the input sentences preceding them. This is due to the fact that Bart uses the </s> tokens' representations as the classification head (Lewis et al., 2020). After obtaining the representations associated with </s> tokens, we process them through a linear layer with a Sigmoid classifier to output probabilities as the sentences' importance scores. Formally, let \(\mathbf{P}\) be a Reddit post containing sentences \(\mathbf{P}=[sent_{1},sent_{2},...,sent_{i},...,sent_{n}]\) and \(sent_{i}=[x_{i1},x_{i2},...,x_{ij},...,x_{im}]\). We frame the input \(\mathbf{P}\) by adding </s> tokens to the end, as well as <s> to the start, of each sentence. In this sense, the modified input to the Bart network is \(\mathbf{P}^{\prime}=[\)<s>\(sent_{1}\)</s><s>\(sent_{2}\)</s>\(\ldots\)<s>\(sent_{n}\)</s>\(]\), which is then processed through the Bart network. The network is trained to predict the importance score (i.e., \(\mathbf{y}\) in Eq. 1). By training such a sequence tagger network, we aim to inject an inductive bias into the Bart encoder and decoder so that they are infused with the source sentences' importance, which will come in handy when generating abstractive summaries.
Figure 1: Bart's shortcoming in content selection. Yellow: picked by the ground truth, but skipped by Bart; Green: picked by both Bart and the ground truth; Red: picked by Bart, but skipped by the ground truth
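A minimal sketch of Eq. (1), assuming the Google rouge_score package and F-measure (the paper does not pin down the exact Rouge implementation or variant):

```
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge2", "rougeL"], use_stemmer=True)

def relative_importance(sentences, tldr):
    # Mean of Rouge-2 and Rouge-L per sentence, normalized over the post.
    raw = [(s["rouge2"].fmeasure + s["rougeL"].fmeasure) / 2.0
           for s in (scorer.score(tldr, sent) for sent in sentences)]
    total = sum(raw)
    return [r / total for r in raw] if total > 0 else [0.0] * len(sentences)
```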
At the next stage, we design our framework as demonstrated in Figure 2 by extending the decoder with an additional cross-attention layer (i.e., the Sentence Multi-Head Attention inside the decoder). We use a two-stage fine-tuning approach to this end: first, we fine-tune the encoder module, as well as the Sentence Multi-Head Attention, on the sequence tagging problem; second, we further fine-tune the model on the abstractive summarization task. We separate the optimizers of the pre-trained part (i.e., the encoder and Sentence Multi-Head Attention modules) and the decoder. That is, the pre-trained part is fine-tuned with a lower learning rate, so it is trained with more accurate gradients as the decoder becomes stable. We name this model SentSum in our experiments.
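The decoder extension can be pictured as follows: a simplified PyTorch sketch of one decoder layer with the extra Sentence Multi-Head Attention (module names are illustrative, and attention masks are omitted for brevity):

```
import torch.nn as nn

class SentenceAwareDecoderLayer(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.token_cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.sent_cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, x, enc_tokens, enc_sents):
        x = self.norms[0](x + self.self_attn(x, x, x)[0])
        x = self.norms[1](x + self.token_cross(x, enc_tokens, enc_tokens)[0])
        # Extra cross-attention over </s> (sentence-level) representations
        x = self.norms[2](x + self.sent_cross(x, enc_sents, enc_sents)[0])
        return self.norms[3](x + self.ffn(x))
```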
### Curricular Learner for Bart
Curriculum learning (CL) (Bengio et al., 2009) is a training paradigm to improve the performance and generalization of learner models based on the idea that easy samples should be visited before difficult ones during training (Castells et al., 2020). When the model starts off with easier samples in the early stages of training, the risk of getting stuck in local optima is reduced, as most loss functions in deep neural networks are highly non-convex (Chang et al., 2021) and hard to optimize. Considering the applicability of curriculum learning in training large-scale networks, we aim to use it in our summarization framework. Before incorporating the curriculum learning strategy into our model's training stage, we first need to define the _difficulty_ metric to distinguish the hardness of samples.
In practice, estimating a prior difficulty for each sample is a complex task, so we propose to discriminate the samples with progressive signals, such as the respective sample loss at each training iteration, during the training process. In this context, CL is achieved by predicting the difficulty of each sample at each training iteration in the form of a weight, such that difficult samples receive lower weights during the early stages of training and vice versa. To model the curriculum, we propose to use SuperLoss (Castells et al., 2020), which is a generic loss criterion built upon the task loss function. More specifically, SuperLoss is a task-agnostic confidence-aware loss function that takes in two parameters: (1) the task loss \(\mathcal{L}_{i}=\ell(y_{i},\widehat{y}_{i})\), where \(y_{i}\) is the neural network's output (i.e., Bart's generated summary) and \(\widehat{y}_{i}\) is the gold label (i.e., the ground-truth summary); and (2) \(\sigma_{i}\), the confidence parameter of the \(i\)th sample. SuperLoss is framed as \(\text{L}_{\lambda}(\mathcal{L}_{i},\sigma_{i})\) and computed as follows,
\[\text{L}_{\lambda}(\mathcal{L}_{i},\sigma_{i})=(\mathcal{L}_{i}-\tau)\sigma_{i }+\lambda(\log\sigma_{i})^{2} \tag{2}\]
in which \(\lambda\) is the regularization parameter, and \(\tau\) is the running or static average of the task loss (i.e., \(\mathcal{L}\)) during training. While SuperLoss provides a well-defined approach to the curriculum learning strategy, learning the \(\sigma\) parameter is not tractable for tasks with abundant training instances such as text summarization. To circumvent this issue and avoid introducing new learnable parameters, SuperLoss suggests using the converged value of \(\sigma_{i}\) at the limit,
Figure 2: Overview of our summarization model
\[\sigma^{*}_{\lambda}(\ell_{i})=\operatorname*{arg\,min}_{\sigma_{i}}\text{L}_{\lambda}(\ell_{i},\sigma_{i}),\qquad\text{SL}_{\lambda}(\ell_{i})=\text{L}_{\lambda}(\ell_{i},\sigma^{*}_{\lambda}(\ell_{i}))=\min_{\sigma_{i}}\text{L}_{\lambda}(\ell_{i},\sigma_{i}), \tag{3}\]
Using this technique, the confidence parameters are not required to be learned during the training. Castells et al. (2020) found out that \(\sigma^{*}_{\lambda}(\ell_{i})\) has a closed-form solution, computed as follows,
\[\sigma^{*}_{\lambda}(\ell_{i})=e^{-W\left(\frac{1}{2}\max\left(-\frac{2}{e},\beta\right)\right)},\qquad\beta=\frac{\ell_{i}-\tau}{\lambda} \tag{4}\]
in which \(W\) is the Lambert W function. With this in mind, SuperLoss upweights easier samples dynamically during training, hence providing a curriculum learning approach to summarization. We call this model CurrSum. We also experiment with a combination of the SentSum and CurrSum models and name it CurrSentSum throughout our experiments.
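A minimal PyTorch sketch of Eqs. (2)-(4), assuming a per-sample task loss (e.g., cross-entropy with reduction="none") and a running average \(\tau\) maintained by the training loop:

```
import numpy as np
import torch
from scipy.special import lambertw

def superloss(task_loss, tau, lam=1.0):
    # task_loss: 1-D tensor of per-sample losses; tau: running loss average.
    beta = (task_loss.detach().cpu().numpy() - tau) / lam
    # Closed-form optimal confidence from Eq. (4); lambertw returns a complex
    # array that is real-valued on this domain.
    sigma = np.exp(-np.real(lambertw(0.5 * np.maximum(-2.0 / np.e, beta))))
    sigma = torch.as_tensor(sigma, dtype=task_loss.dtype, device=task_loss.device)
    # Eq. (2) evaluated at sigma*; sigma carries no gradient, so easy
    # (low-loss) samples are upweighted and hard ones downweighted.
    return ((task_loss - tau) * sigma + lam * torch.log(sigma) ** 2).mean()
```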
## 4 Experimental Setup
### Datasets
We use two Reddit summarization datasets, Reddit TIFU (Kim et al., 2019) and Webis-TLDR-17 (Volske et al., 2017), as well as two well-known news summarization datasets, CNN/DM (See et al., 2017) and XSum (Narayan et al., 2018), throughout our experiments. The Reddit datasets are gathered from Reddit discussion forums and contain 42k (Reddit TIFU) and 4M (Webis-TLDR-17) instances, each with a source (i.e., the post's text) and a human-written Tldr summary. We use 80% (33,705)-10% (4,214)-10% (4,214), and 99% (3,771,432)-1% (38,483)-1% (38,484) random train-val-test splits for Reddit TIFU and Webis-TLDR-17, respectively. For the news summarization datasets, we use the splits suggested by their original papers.
### Comparison
We compare our model against various extractive and abstractive state-of-the-art baselines. The description of each baseline is outlined below.
**BertSumExt**Liu and Lapata (2019): the extractive variant of BertSum that inserts external [CLS] tokens between sentences. The representations are further used with a Sigmoid classifier to compute sentence extraction probabilities.
**BertSumAbs**Liu and Lapata (2019): the abstractive variant of BertSum that uses a Transformer-based encoder-decoder architecture. The encoder is Bert, but decoder is pre-trained from scratch.
**BertSumExtAbs**Liu and Lapata (2019): a two-stage fine-tuning approach, in which the encoder is first fine-tuned on extractive summarization and then fine-tuned along with the decoder for the abstractive summarization task.
**MatchSum**Zhong et al. (2020): an extractive framework which matches source text with candidate summaries in a semantic space. Unlike commonly used extractive summarizers, MatchSum does not rely on extracting sentences individually, instead it selects a set of sentences (i.e., candidate summary) that has the maximum semantic similarity with the ground-truth summary.
**Pegasus**Zhang et al. (2020): an abstractive model that defines a new pre-training task as Gap Sentence Generation (GSG), in which key source sentences are masked out, and the network learns to generate the missing sentences.
**Bart**Lewis et al. (2020): an abstractive model that uses a pre-trained encoder-decoder architecture, unlike Bert that only utilizes a pre-trained encoder.
**NeuTopicSumm**Nguyen et al. (2021): an abstractive baseline that incorporates a neural topic model into the summarization framework.
**BART+R3F**Aghajanyan et al. (2021): a summarization method that proposes a new fine-tuning technique; R3F method replaces previously proposed adversarial objectives with parametric noise, and discourages representation change during fine-tuning without degrading the performance.
**BART+MUPPET**Aghajanyan et al. (2021): an architecture that introduces an additional large-scale learning stage (i.e., pre-finetuning) between the pre-training and fine-tuning stages, using a large-scale collection of datasets proposed on different tasks. MUPPET is designed to encourage representation learning for increasing the generalization of language models.
### Implementation details
We extend Huggingface's Transformers library 2 (Wolf et al., 2020) to implement our models, and make the code publicly available to expedite future research 3. We train all of our models for 8 epochs (Reddit TIFU) and 5 epochs (Webis-TLDR-17, CNN/DM, and XSum) and use the checkpoint that achieves the best Rouge-L score on the validation set for inference. The AdamW optimizer (Loshchilov and Hutter, 2019), initialized with a learning rate of \(3e-5\), \((\beta_{1},\beta_{2})=(0.9,0.98)\), and a weight decay of 0.01, is used for all of our summarization models, as well as for Bart. Specifically, to train SentSum, we use a lower learning rate of \(1e-5\) for the pre-trained part. Cross-entropy loss is used for all models, except for pre-training SentSum, where we use the Mean Squared Error (MSE) loss function. For the BertSum variants, we use the main codebase 4 and the default hyper-parameters suggested by the original paper (Liu and Lapata, 2019). To keep track of the learning process, we use the Weights & Biases (Biewald, 2020) toolkit.
Footnote 3: _HTTP_
Footnote 4: [https://github.com/nlpyang/PreSumm](https://github.com/nlpyang/PreSumm)
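The separate-optimizer setup for SentSum can be sketched as parameter groups in AdamW, using the learning rates above (the module-name prefixes here are hypothetical and depend on the actual implementation):

```
import torch

pretrained, rest = [], []
for name, param in model.named_parameters():   # model: a SentSum-style network
    (pretrained if name.startswith(("encoder", "sent_attn")) else rest).append(param)

optimizer = torch.optim.AdamW(
    [{"params": pretrained, "lr": 1e-5},   # encoder + Sentence Multi-Head Attention
     {"params": rest, "lr": 3e-5}],        # decoder at the base learning rate
    betas=(0.9, 0.98), weight_decay=0.01)
```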
## 5 Results
**Automatic Evaluation.** Table 1 reports the performance of the baseline models along with that of our models in terms of Rouge score variants (Lin, 2004) on the Reddit TIFU dataset. As indicated, our best model is CurrSum, which uses SuperLoss directly on top of the Bart model and is a clear improvement over most of the baselines across all metrics. Specifically, CurrSum outperforms its ground baseline without curriculum (i.e., Bart) by relative improvements of 5.2%, 11.22%, and 5.4% for Rouge-1, Rouge-2, and Rouge-L, respectively, on Reddit TIFU. While CurrSum achieves performance competitive with BART+R3F and BART+MUPPET in terms of Rouge-1, it lags behind them on Rouge-2 and Rouge-L. These differences may be explained by the fact that BART+R3F is computationally more intensive than our curriculum-based model 5, and BART+MUPPET performs the best presumably due to its additional large-scale learning stage (i.e., pre-finetuning) that is not considered in Bart and CurrSum 6. Interestingly, while augmenting the decoder with the sentence cross-attention module (i.e., SentSum) is marginally better than the Bart baseline, training it with the curriculum learning SuperLoss function (i.e., CurrSentSum) further improves the performance. This finding provides compelling evidence for the usefulness of the curricular training strategy applied to the summarization task. Comparing abstractive vs. extractive models, we observe a noticeable performance gap, indicating that Reddit TIFU summaries are abstractive rather than extractive.
Footnote 5: BART+R3F adds an additional forward pass (FP) to compute symmetric KL divergence term for measuring parametric noise.
We further investigate the validation performance of the summarization models in the presence and absence of the curriculum learning strategy. As shown in Figure 3, models trained with the curriculum strategy (i.e., CurrSum and CurrSentSum) tend to converge faster and perform better in comparison with their respective baselines without curriculum learning (i.e., Bart and SentSum). Looking at Figure 3, the efficiency of the curriculum strategy is quite remarkable considering the scores and convergence steps (i.e., vertical red lines) of the curriculum-equipped models.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & RG-1 & RG-2 & RG-L \\ \hline BertSumExt (2019) & 20.32 & 4.81 & 13.77 \\ BertSumAbs (2019) & 21.92 & 4.99 & 14.21 \\ BertSumExtAbs (2019) & 22.14 & 6.01 & 14.66 \\ MatchSum (2020) & 25.09 & 6.17 & 20.13 \\ Pegasus (2020a) & 26.63 & 9.01 & 21.60 \\ Bart (2020) & 28.80 & 9.02 & 23.02 \\ NeuTopicSumm (2021) & 27.96 & 9.43 & 23.08 \\ BART+R3F (2021b) & **30.31** & 10.98 & 24.74 \\ BART+MUPPET (2021a) & **30.30** & **11.25** & **24.94** \\ SentSum (Ours) & 29.09 & 9.14 & 23.39 \\ CurrSum (Ours) & **30.32** & 10.16 & 24.27 \\ CurrSentSum (Ours) & 29.57 & 9.81 & 23.61 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Rouge results on the test set of the Reddit TIFU dataset. We bold the best numbers and the numbers within 0.15 of the best.
Figure 3: Plots demonstrating the validation performance of various models with increasing training steps on the Reddit TIFU dataset. Blue lines show the performance when no curriculum is used. Red lines represent the performance when the curriculum is added. Vertical lines indicate the step where the models achieve the Rouge-L score that the baselines attain at convergence.
**What is the effect of curriculum learning on cross-domain datasets?** To measure the cross-domain performance of models supplied with the curriculum learning strategy, we compare Bart's performance (as a Transformer-based state-of-the-art summarizer) against a variant of Bart with the curriculum strategy (i.e., our CurrSum model) on three summarization datasets: Webis-TLDR-17, CNN/DM, and XSum. The results, summarized in Table 2, are a strong indication of the effectiveness of curriculum learning on datasets from various domains (i.e., social media and news), as we observe a consistent improvement over the Bart baseline.
**Human evaluation.** A few studies have recognized the limitations of the widely adopted Rouge metric, as it is biased toward surface lexical similarities (Ng and Abrecht, 2015; Cohan and Goharian, 2016). To get insights into the qualities of our proposed models, we performed a human evaluation over a random set of system-generated summaries. To this end, we randomly sampled 200 cases from Reddit TIFU's test set, each consisting of the post's source text along with blinded Tldrs from (1) author-written references; (2) the Bart baseline; and (3) our CurrSentSum model. The choice of CurrSentSum lies in the fact that we want to evaluate the effect of both sentence attention and curriculum learning in our human evaluation process. To prevent potential bias, we randomly shuffled the ordering of Tldrs provided to evaluators. Following prior work (Grusky et al., 2018; Zhang et al., 2020; Cho et al., 2021), we define three metrics: (1) **Fluency:** is the Tldr well-written and easy to understand? (2) **Informativeness:** does the Tldr provide useful information about the source? (3) **Overall quality:** overall, is the Tldr of good quality in terms of content (both fluency and informativeness) and correctness? We then had five human evaluators score each of the provided examples on a scale of [1-5] (worst to best) in terms of the criteria mentioned above. The evaluators were familiar with data science and annotation and were hired through the Upwork 7 freelancing platform.
Footnote 7: [https://www.upwork.com](https://www.upwork.com)
Table 3 shows the average score gained by each system in terms of the aforementioned qualitative criteria. Comparing our model against the other systems, we find that: (1) in terms of fluency, our model and Bart are quite comparable, with human summaries being the best; (2) we calculated the average Tldr length (in tokens) over the test set and obtained 22.9 (Human), 23.6 (Bart), and 19.8 (CurrSentSum). Interestingly, despite generating shorter summaries, our model outperforms both Bart and human-written Tldrs on informativeness, which shows that it efficiently selects the useful information from the original text and presents it in comparably shorter form. Looking into the cases won on informativeness, we noticed that the annotators tend to give a higher score to a summary if it provides the most important information within a concise text (i.e., providing only to-the-point information, akin to the definition of a Tldr), which is generally achieved by our model. (3) Our model is also more preferable in terms of overall quality compared to Bart and human summaries, with a relatively large gap. Overall, it is interesting that summarization models are becoming comparable to (sometimes even more preferable than) human Tldrs on informativeness and overall quality, substantiating the recent success of pre-trained language models, as also shown by Fabbri et al. (2021). We further computed the Fleiss' Kappa (Fleiss, 1971) inter-rater agreement for the qualitative metrics, and obtained 10%, 27%, and 22% correlation scores for fluency, informativeness, and overall quality, respectively. These correlations are considered "slight" for fluency and "fair" for informativeness and overall quality with regard to the Fleiss' range interpretation (Landis and Koch, 1977). Table 4 shows the system-wise Fleiss' agreement over the three metrics, showing that agreement rates on our system's summaries are stronger than on the others.
Table 2: Results when applying curriculum learning for Bart on cross-domain datasets.
Table 3: Results of the human evaluation comparing three systems in terms of Fluency, Informativeness, and Overall quality. Winning scores are shown in bold.
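For reference, Fleiss' kappa over such ratings can be computed as in the sketch below (synthetic scores stand in for the actual 200-item, 5-rater annotations):

```
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(200, 5))   # (items, raters), scores in 1-5
table, _ = aggregate_raters(ratings)          # per-item counts for each category
print(fleiss_kappa(table, method="fleiss"))
```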
Inspired by MacAvaney et al. (2019) and Sotudeh et al. (2020), we plot histograms and arrow plots for our human evaluation in Figure 4. The histograms show the score distributions gained by each model, and the arrow plots demonstrate the score transitions (i.e., how scores changed) on the provided samples. The head of each arrow shows our system's score, and its tail shows the score gained by the other system (either human or Bart). The count of the samples that have made a specific score transition is shown next to the arrows' tail. As indicated, our system makes a strong gain in the informativeness and overall quality metrics over the other two systems while staying competitive in terms of fluency. The improvement is especially considerable in enhancing scores from 4 to 5 in the informativeness and overall quality metrics.
**Qualitative analysis.** In order to provide insights into the qualities of our model vs. the baselines, we performed further evaluation over the annotated samples. We found that (1) our model performs better at collecting the key information from the source when there is an overload of important information in the source text, while most gold Tldrs contain just a few sentences (fewer than 3) that only include the most important information. This behaviour of our model enables it to score well on the informativeness and overall quality metrics. (2) Human-written Tldrs receive a relatively lower score compared to our model's in terms of informativeness and overall quality when the human-written Tldr contains an entailment/conclusion from the source, even though it might not be present in the source text. (3) Interestingly, as the system Tldrs become lengthy, the annotators tend to give a lower score in terms of the informativeness and overall quality metrics. This might be due to the fact that longer summaries encompass a high proportion of source information regardless of its saliency.
## 6 Conclusion
While neural Transformer-based summarization models have been shown to be promising, they suffer from two shortcomings, namely _content selection_ and an _inefficient training process_. In this paper, we explore two approaches to address these issues. Firstly, we propose to tackle the content selection problem by augmenting the decoder with a sentence cross-attention layer such that the decoder becomes aware of sentence saliency. Secondly, we incorporate a confidence-aware curriculum learning approach into the summarization framework in the hope of increasing the model's generalization, achieving faster convergence, and ultimately improving model performance. Our automatic evaluations over various data collections from different domains, together with human evaluations, show the effectiveness of our model.
\begin{table}
\begin{tabular}{l c c c} \hline \hline System & Fluency & Info. & Overall \\ \hline Bart & 12\% & 26\% & 21\% \\ CurrSentSum & **14\%** & **33\%** & **25\%** \\ Human & 11\% & 24\% & 20\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: System-wise Fleiss’ kappa agreement
Figure 4: Histograms and arrow plots depicting score transitions between 200 manually-scored Tldr summaries. (a–c) and (d–f) show our system's comparison with the human and Bart baseline systems, respectively. Although comparable to the baseline and human-written summaries in terms of the fluency metric, our model makes strong gains in improving summaries' informativeness and overall quality. |
2304.03803 | Cosmology in $R^2$-gravity: Effects of a Higher Derivative Scalar
Condensate Background | A well known extension of Einstein General Relativity is the addition of an
$R^2$-term, which is free of ghost excitations and in the linearized framework,
reduces Einstein General Relativity and an additional higher derivative scalar.
According to \cite{Chakraborty:2020ktp}, the above scalar sector can sustain a
Time Crystal-like minimum energy state, with non-trivial time dependence.
Exploiting previous result that the scalar can sustain modes with periodic time
dependence in its lowest energy, we consider this condensate as a source and
study the Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) cosmology in this
background. The effect of the $R^2$-term is interpreted as a back reaction. A
remarkable consequence of the condensate is that, irrespective of open or close
geometry of the Universe, for an appropriate choice of parameter window, the
condensate can induce a decelerating phase before the accelerated expansion
starts and again, in some cases, it can help to avoid the singularity in the
deceleration parameter (that is present in conventional FLRW Cosmology). | Raj Kumar Das, Aurindam Mondal, Subir Ghosh, Supriya Pan | 2023-04-07T18:12:37Z | http://arxiv.org/abs/2304.03803v2 | # Cosmology in a Time-Crystal Background
###### Abstract
We investigate the effects of a Time Crystal-like Condensate on cosmological dynamics. It is well known that quadratic gravity reduces to Einstein gravity along with a decoupled higher derivative dynamical scalar [1]. According to [2], the above scalar sector can sustain a Time Crystal-like minimum energy state, with non-trivial time dependence. In the present work we treat the Time Crystal-like state as the background (replacing the classical Minkowski vacuum) and study cosmic evolution on this "dynamic" ground state. In the first part we re-derive [2], in a covariant and more systematic way, the frequencies that characterize the oscillator-like Time Crystalline condensate and interpret it as a background energy-momentum tensor simulating a matter-like effect. Importantly, no external matter is introduced here; the condensate consists of a combination of the metric field \(g_{\mu\nu}\) and is generated by the \(R^{2}\)-term (\(R\) is the Ricci scalar) in quadratic gravity [1]. In a way, the spurious degrees of freedom of \(R^{2}\)-gravity turn into a useful component. The second part comprises new effects, where the cosmology of the Friedmann-Lemaitre-Robertson-Walker (FLRW) universe is studied in the presence of the energy-momentum tensor characterizing the Time Crystal Condensate. Under certain approximations, the scale factor of the FLRW universe is analytically obtained for any spatial geometry. We also find that the Time Crystal Condensate contributes as a new matter candidate having radiation-like behavior in the universe. Additionally, irrespective of the spatial geometry of the universe, the Time Crystal Condensate generates a decelerating phase before the early acceleration starts. This is an indication of a contracting phase of the universe before its accelerated expansion.
## I Introduction
Although the idea of Time Crystal (TC) is only a decade old, it has managed to create a fair amount of interest in the physics community. Wilczek [3] in a quantum framework, Shapere and Wilczek [4] in a classical framework, and, in a parallel alternative formulation, Ghosh [5], one of the present authors, showed that it might be possible to conjure up dynamical systems where the lowest energy state (or, the ground state) can have a non-trivial spacetime dependence. For a generic Hamiltonian system, this sounds impossible since minimization of the energy (or, equivalently, the Hamiltonian \(H(q,p)\) with \(q,p\) being the coordinate and momentum, respectively) requires \(\partial H/\partial q=0,\ \partial H/\partial p=0\). On the other hand, Hamilton's equations of motion demand that \(\partial H/\partial q=-\dot{p},\ \partial H/\partial p=\dot{q}\), so that, combining with the minimization condition, the minimum energy state should satisfy \(\dot{q}=0,\ \dot{p}=0\), and hence can not have non-trivial space or time dependence. However, the caveat is that the unusual nature of the dynamical models of [3; 4] allows cusps in the relations where the Hamiltonian equations of motion are not valid, thereby giving rise to TC ground states. Interestingly, with some essential modifications of the original idea presented in [3], physical quantum models have been constructed where TC-like behaviour has been demonstrated experimentally [6]. In the classical context, some of us have shown [7] that in General Relativity, generalized to non-commutative space, it is possible to obtain a TC-like ground state. Related works in TC cosmology can be found in Refs. [8; 9; 10; 11; 12; 13; 14; 15; 16].
However, the framework developed in [5] is less exotic and requires higher time derivatives in a quadratic action that exhibits Spontaneous Symmetry Breaking (SSB) in _momentum space_, inducing a non-zero Fourier mode for the lowest energy state. This happens due to the simultaneous presence of \(|K|^{4}\) and \(|K|^{2}\) terms in energy and momentum space, \(|K|\) being the momentum or energy. In fact, the phenomenon is quite similar to conventional SSB in coordinate space, where quartic and quadratic potentials can generate a condensate with lower energy.
An interesting playground for the above ideas to be implemented is Quadratic Gravity (QG), the simplest example of \(f(R)\) gravity [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Having no ghosts and supporting inflation (among other nice features), QG is a popular extension of Einstein gravity [1]. Following [5] we have already shown [2] that the lowest energy state in QG can be a spacetime with ripples, that is, one having a non-trivial spacetime dependence, thereby introducing a new dimensional scale. This leads to a spacetime dependent ground state \(-\) a Time Crystal Condensate (TCC). In Ref. [1],
it has been shown that linearized QG in a de Sitter (dS) or anti de Sitter (AdS) background decouples into a conventional spin-2 graviton and a _higher derivative scalar mode_, with the latter inducing the TCC [1].
In the present work we follow up the natural next step: direct effects of this TCC in cosmology. We compute the energy-momentum tensor for the TCC and treat it as a source for Einstein's General Relativity (GR), with the \(R^{2}\)-term giving rise to a back reaction. How will this TC condensate impact the cosmological scenario? To answer this question we set up the Friedmann-Lemaitre-Robertson-Walker (FLRW) equations in the presence of this TCC source. Indeed, we expect and reveal qualitatively new phenomena, since the TCC is dynamical in time with an explicit expression for the frequency as derived here.
The article is structured as follows. In section II we introduce the gravitational action for quadratic gravity and the emergence of the TC condensate. In section III we discuss the minimization of the energy-momentum tensor arising from quadratic gravity and the TC ground states. In section IV we present the Friedmann equations in a homogeneous and isotropic background for the current theoretical framework and present the analytical solutions for the cosmological parameters. Then, in section V, we describe the cosmological dynamics in the TC background in terms of the evolution of the key cosmological parameters and the implications of the results. Finally, in section VI, we close the present article with a brief summary of the entire work, major conclusions and future directions.
## II Quadratic gravity and time crystal condensate
Before proceeding towards the QG model, let us consider a generic form of a higher derivative action for a scalar field \(\phi\) in a curved background (see for example [32])
\[S=-\frac{1}{2}\int d^{4}x\ \sqrt{-g}\bigg{[}(\partial^{2}\phi)^{2}+(m_{1}^{2 }+m_{2}^{2})g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi+m_{1}^{2}m_{2}^{2 }\phi^{2}\bigg{]}. \tag{1}\]
The variational principle
\[\delta S=-\frac{1}{2}\int d^{4}x\ \sqrt{-g}\bigg{[}2(\partial^{2}\phi) \partial^{2}\delta\phi+2\left(m_{1}^{2}+m_{2}^{2}\right)g^{\mu\nu}\partial_{ \mu}\phi\partial_{\nu}\delta\phi+2m_{1}^{2}m_{2}^{2}\phi\delta\phi\bigg{]}=0, \tag{2}\]
yields the equation of motion
\[\left((\partial^{2})^{2}\phi-(m_{1}^{2}+m_{2}^{2})g^{\mu\nu} \partial_{\mu}\partial_{\nu}\phi+m_{1}^{2}m_{2}^{2}\phi\right)\delta\phi=0 \rightarrow(\partial^{2}-m_{1}^{2})(\partial^{2}-m_{2}^{2})\phi=0. \tag{3}\]
The energy-momentum tensor corresponding to (1) is given by
\[T_{\mu\nu}=-(\partial_{\mu}\partial^{2}\phi)\partial_{\nu}\phi- (\partial_{\nu}\partial^{2}\phi)\partial_{\mu}\phi+g_{\mu\nu}g^{\alpha\beta}( \partial_{\alpha}\partial^{2}\phi)\partial_{\beta}\phi+\] \[\frac{1}{2}g_{\mu\nu}(\partial^{2}\phi)^{2}+(m_{1}^{2}+m_{2}^{2}) (\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}g_{\mu\nu}g^{\alpha\beta} \partial_{\alpha}\phi\partial_{\beta}\phi)-\frac{m_{1}^{2}m_{2}^{2}}{2}g_{\mu \nu}\phi^{2}. \tag{4}\]
This expression will play an essential role in the subsequent analyses. The QG action we focus on is given by [1]
\[A=\frac{c^{4}}{16\pi G}\int d^{4}x\ \sqrt{-g}\bigg{[}R+\alpha R^{2}-2 \Lambda\bigg{]}. \tag{5}\]
In order to avoid tachyonic excitations, conventionally \(\alpha\) is taken to be positive. In Ref. [1], the authors have shown that this QG action decouples into a conventional gravity theory (with a spin 2 graviton) along with a higher derivative scalar sector (modulo surface terms). Very interestingly, the higher derivative nature of the initial QG action becomes confined to the decoupled scalar sector given below
\[A/\Sigma=-\frac{1}{2}\int d^{4}x\ \sqrt{-\widetilde{g}}\bigg{[}(\widetilde{ \partial}^{2}\phi)^{2}+\left(\frac{\widetilde{R}}{3}-\frac{1}{6\alpha}\right) \phi\widetilde{\partial}^{2}\phi-\frac{\widetilde{R}}{18\alpha}\phi^{2}\bigg{]} \tag{6}\]
where \(\Sigma=-\frac{9\alpha c^{4}}{8\cdot 16\pi G}\), \(\widetilde{g}_{\mu\nu}\) is the arbitrary background metric (although in our case we restrict to dS or AdS), \(\widetilde{\partial^{2}}\) is defined accordingly and \(\widetilde{R}\) is the constant curvature corresponding to \(\widetilde{g}_{\mu\nu}\). Exploiting (4) the energy-momentum tensor for (6) becomes
\[T_{\mu\nu}/\Sigma=-(\partial_{\mu}\partial^{2}\phi)\partial_{\nu }\phi-(\partial_{\nu}\partial^{2}\phi)\partial_{\mu}\phi+g_{\mu\nu}g^{\alpha \beta}(\partial_{\alpha}\partial^{2}\phi)\partial_{\beta}\phi+\] \[\frac{1}{2}g_{\mu\nu}(\partial^{2}\phi)^{2}+\left(-\frac{ \widetilde{R}}{3}+\frac{1}{6\alpha}\right)\left(\partial_{\mu}\phi\partial_{ \nu}\phi-\frac{1}{2}g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\phi\partial_{ \beta}\phi\right)+\left(\frac{1}{2}\right)\frac{\widetilde{R}}{18\alpha}g_{\mu \nu}\phi^{2}, \tag{7}\]
where we identify the parameters as
\[(m_{1}^{2}+m_{2}^{2})=\frac{1}{6\alpha}-\frac{\widetilde{R}}{3};\quad m_{1}^{2}m _{2}^{2}=-\frac{\widetilde{R}}{18\alpha}. \tag{8}\]
It is straightforward to check the on-shell conservation law \(\nabla_{\mu}T^{\mu\nu}=0\). For convenience, let us note the unit system used: \(l\) (length), \(m\) (mass), \(t\) (time); with \([A]\) denoting the dimension of \(A\), we find
\[[G]=l^{3}/(mt^{2}),\quad[R]=[\Lambda]=1/l^{2},\quad[\alpha]=l^{2},\quad[\alpha \widetilde{R}]=\mbox{dimensionless},\]
\[[T_{\mu\nu}]=m/(lt^{2})=(ml^{2}/t^{2})(1/l^{3})=\mbox{energy density}.\]
In order to have real values for \(m_{1},m_{2}\), and for \(\alpha>0\), we impose \(\widetilde{R}=-|R|\) which represents an AdS background, and we obtain
\[m_{1}=\sqrt{\frac{1}{6\alpha}},\quad m_{2}=\sqrt{\frac{|R|}{3}}. \tag{9}\]
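As a quick consistency check (ours, not part of the original derivation), one may substitute \(\widetilde{R}=-|R|\) into (8) and verify that the values in (9) reproduce both identities:

\[m_{1}^{2}+m_{2}^{2}=\frac{1}{6\alpha}+\frac{|R|}{3}=\frac{1}{6\alpha}-\frac{\widetilde{R}}{3},\qquad m_{1}^{2}m_{2}^{2}=\frac{1}{6\alpha}\cdot\frac{|R|}{3}=\frac{|R|}{18\alpha}=-\frac{\widetilde{R}}{18\alpha}.\]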
Since \(\phi\) is decoupled from the Einstein gravity sector, we propose to interpret \(T_{\mu\nu}(\phi)\) as a source, and, using Einstein's gravitational equations, cosmology in the presence of \(T_{\mu\nu}(\phi)\) can be explored. The main novelty of our scheme is that this "matter" sector is not introduced from outside but is actually generated by the \(\alpha R^{2}\) term in the QG action. Thus, Einstein's equations for this set-up read
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\frac{8\pi G}{c^{4}}T_{\mu\nu} \tag{10}\]
where the left hand side is provided by Einstein's GR action \(\sim\int\sqrt{-g}R\), while \(T_{\mu\nu}(\phi)\) on the right hand side will be given explicitly by using the specific TC solution for \(\phi\) in the next section.
## III Minimization of the energy-momentum tensor and TC ground states
We quickly recapitulate the previous work [2] (involving one of the present authors) to find the lowest energy TC condensate solutions. In particular, this means that _the vacuum is replaced by the TC condensate, with the latter acting as a stage for Einstein gravity_. The explicit form of the TC parameters will be obtained by minimizing the energy \(\widetilde{T}_{00}\) in momentum space. Let us consider a superposition of plane waves of frequency \(\omega\) and wave number \(K\),
\[\widetilde{\phi}(K)=\int\frac{d^{4}x}{(2\pi)^{4}}\exp\left[i(-\omega\eta+K\cdot x)\right]\phi(x). \tag{11}\]
Using the above, \(T_{00}\) from eqn. (7) in the \(K\)-space is given by
\[\frac{\widetilde{T}_{00}}{\Sigma}=f(\omega,K)|\widetilde{\phi}(K)|^{2}, \tag{12}\]
where \(f(\omega,K)\) takes the following form
\[f(\omega,K)=-\alpha\bigg{[}\frac{(-3\omega^{4}+2c^{2}K^{2}\omega^{2}+c^{4}K^ {4})}{2c^{2}a^{2}}+\frac{(m_{1}^{2}+m_{2}^{2})(\omega^{2}+c^{2}K^{2})}{2}+ \frac{c^{2}a^{2}(m_{1}m_{2})^{2}}{2}\bigg{]}. \tag{13}\]
In the following we impose two dispersion relations.
* _Dispersion relation I:_ From eqns. (3) and (9), the first dispersion relation is given by \[\omega^{2}=c^{2}\bigg{(}K^{2}+\frac{|R|a^{2}}{3}\bigg{)}.\] (14) Using this dispersion relation, the expression \(f_{I}(K)\) becomes \[f_{I}(K)=-\alpha\bigg{[}K^{2}\bigg{(}\frac{1}{6\alpha}-\frac{|R|}{3}\bigg{)}+ \frac{|R|a^{2}}{6}\bigg{(}\frac{1}{3\alpha}-\frac{2|R|}{3}\bigg{)}\bigg{]},\] (15)
and alternatively \(f_{I}(\omega)\) is given by \[f_{I}(\omega)=-\alpha\bigg{[}\bigg{(}\omega^{2}-\frac{|R|a^{2}}{3}\bigg{)}\bigg{(} \frac{1}{6\alpha}-\frac{|R|}{3}\bigg{)}+\frac{|R|a^{2}}{6}\bigg{(}\frac{1}{3 \alpha}-\frac{2|R|}{3}\bigg{)}\bigg{]}.\] (16) Minimization conditions for \(\widetilde{T}_{00}\) are \[\frac{d}{dK}\left(\frac{\widetilde{T}_{00}}{\Sigma}\right)=0,\quad\frac{d^{2}} {dK^{2}}\left(\frac{\widetilde{T}_{00}}{\Sigma}\right)>0,\] (17) which lead to the following TC parameters \[K_{0}=0,\quad\omega_{0}^{2}=\frac{c^{2}|R|a^{2}}{3},\] (18) subject to the constraints \(\alpha>0,|R|>0,(1-2\alpha|R|)<0\). Substituting the above results, the minimum energy density for the condensate appears as \[\frac{\widetilde{T}_{00}|_{min}}{\Sigma}=-\alpha\bigg{[}K_{0}^{2}\left(\frac{ 1}{6\alpha}-\frac{|R|}{3}\right)+\frac{|R|a^{2}}{6}\left(\frac{1}{3\alpha}- \frac{2|R|}{3}\right)\bigg{]}|\widetilde{\phi}|^{2}=-\frac{|R|a^{2}}{18}\left( 1-2\alpha|R|\right)|\widetilde{\phi}|^{2},\] (19) which is positive since \((1-2\alpha|R|)\) is negative. Thus, we reveal the important result that in quadratic gravity, the \(R\) and \(R^{2}\) terms can conspire to generate a _stable lowest energy condensate state_\(-\) a TC condensate. A numerical cross-check of this minimization is sketched immediately after this list.
* _Dispersion relation II:_ In a similar way, dispersion relation II yields the following results \[\omega^{2}=c^{2}\bigg{(}K^{2}+\frac{a^{2}}{6\alpha}\bigg{)},\quad f_{II}(K)=c ^{2}\alpha\left(K^{2}+\frac{a^{2}}{6\alpha}\right)\left(\frac{1}{6\alpha}- \frac{|R|}{3}\right),\quad f_{II}(\omega)=\alpha\omega^{2}\left(\frac{1}{6\alpha }-\frac{|R|}{3}\right).\] (20) The minimization condition for \(\widetilde{T}_{00}\) gives \(K_{0}=0,\omega_{0}^{2}=\frac{c^{2}a^{2}}{6\alpha}\) with the parameters satisfying \(\alpha>0,|R|>0,(1-2\alpha|R|)>0\). Once again we recover a minimum positive energy condensate \[\frac{\widetilde{T}_{00}|_{min}}{\Sigma}=\alpha\omega_{0}^{2}\left(\frac{1}{6 \alpha}-\frac{|R|}{3}\right)|\widetilde{\phi}|^{2}=\frac{c^{2}a^{2}}{36\alpha}\left( 1-2\alpha|R|\right)|\widetilde{\phi}|^{2}.\] (21)
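As a sanity check on the minimization in dispersion relation I, the following sketch (with illustrative values for \(\alpha\), \(|R|\), \(a\) and \(c\) chosen by us so that \((1-2\alpha|R|)<0\); they are not taken from the text) evaluates \(f_{I}(K)\) of eqn. (15) on a grid and confirms a positive minimum at \(K_{0}=0\) matching eqn. (19):

```python
import numpy as np

# Illustrative parameter values (hypothetical; chosen so that 1 - 2*alpha*absR < 0)
c, a = 1.0, 1.0
alpha, absR = 1.0, 1.0                     # 1 - 2*alpha*absR = -1 < 0

def f_I(K):
    # eqn. (15): f_I(K) = -alpha*[K^2*(1/(6 alpha) - |R|/3) + (|R| a^2/6)*(1/(3 alpha) - 2|R|/3)]
    return -alpha * (K**2 * (1 / (6 * alpha) - absR / 3)
                     + (absR * a**2 / 6) * (1 / (3 * alpha) - 2 * absR / 3))

K = np.linspace(-5, 5, 1001)
vals = f_I(K)

print(K[np.argmin(vals)])                  # ~0: the minimum sits at K_0 = 0
f_min_expected = -(absR * a**2 / 18) * (1 - 2 * alpha * absR)   # eqn. (19), |phi|^2 = 1
print(np.isclose(vals.min(), f_min_expected))   # True: matches eqn. (19)
print(f_min_expected > 0)                  # True: positive minimum energy density
```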
## IV Friedmann equations in TC background
In this section we discuss the non-perturbative cosmic dynamics in the TC background. To begin with, we consider the homogeneous and isotropic universe characterized by the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric written in terms of the co-moving coordinates \((t,r,\theta,\phi)\) as \(ds^{2}=-c^{2}dt^{2}+a^{2}(t)\left[dr^{2}/(1-kr^{2})+r^{2}(d\theta^{2}+\sin^{2} \theta d\phi^{2})\right]\), where \(a(t)\) is the expansion scale factor of the FLRW universe and \(k\) describes the spatial geometry of the universe, which may take three distinct values representing three distinct geometries, namely, \(k=0\) (flat universe), \(k=-1\) (open universe) and \(k=+1\) (closed universe). In terms of the conformal time \(\eta\), the FLRW metric turns out to be
\[ds^{2}=a^{2}(\eta)\bigg{[}-c^{2}d\eta^{2}+\frac{dr^{2}}{1-kr^{2}}+r^{2}(d \theta^{2}+\sin^{2}\theta d\phi^{2})\bigg{]}. \tag{22}\]
Now plugging the metric of eqn. (22) into the Einstein's gravitational equations (10), one obtains the Friedmann equations as follows
\[\frac{{a^{\prime}}^{2}}{a^{2}} = -\frac{8\pi Ga^{2}}{3}T_{0}^{0}+\frac{c^{2}\Lambda}{3}a^{2}-kc^{2}, \tag{23}\] \[\frac{{a^{\prime\prime}}}{a}-\left(\frac{{a^{\prime}}}{a}\right) ^{2} = \frac{4\pi Ga^{2}}{3}(T_{0}^{0}-3T_{i}^{i})+\frac{a^{2}c^{2}\Lambda }{3}. \tag{24}\]
where a prime denotes a derivative with respect to the conformal time, e.g. \(\phi^{\prime}\equiv d\phi/d\eta\). Components of the energy-momentum tensor appearing in eqns. (23, 24) are
\[\widetilde{T}_{0}^{0}=\Sigma\Bigg{[}\frac{1}{c^{6}a^{8}}\left(\frac{\phi^{\prime \prime 2}}{2}+4\phi^{\prime}\phi^{\prime\prime}\frac{a^{\prime}}{a}+\frac{11\phi^{ \prime 2}}{2}\frac{a^{\prime 2}}{a^{2}}-\phi^{\prime 2}\frac{a^{\prime\prime}}{a}- \phi^{\prime}\phi^{\prime\prime\prime}\right)-\left(\frac{m_{1}^{2}+m_{2}^{2}} {2c^{4}a^{4}}\right)\phi^{\prime 2}-\left(\frac{m_{1}^{2}m_{2}^{2}}{2c^{2}}\right)\phi^{2} \Bigg{]}, \tag{25}\]
\[\widetilde{T}_{i}^{i}=\Sigma\Bigg{[}\frac{1}{c^{6}a^{8}}\left(\frac{\phi^{ \prime\prime 2}}{2}-2\phi^{\prime}\phi^{\prime\prime}\frac{a^{\prime}}{a}- \frac{9\phi^{\prime 2}}{2}\frac{a^{\prime 2}}{a^{2}}+\phi^{\prime 2}\frac{a^{ \prime\prime}}{a}+\phi^{\prime}\phi^{\prime\prime\prime}\right)+\left(\frac{m _{1}^{2}+m_{2}^{2}}{2c^{4}a^{4}}\right)\phi^{\prime 2}-\left(\frac{m_{1}^{2}m_{2}^{2}}{2c^{2}} \right)\phi^{2}\Bigg{]}. \tag{26}\]
We follow the standard procedure and arrange eqns. (23) and (24) into a quadratic equation
\[\frac{a^{\prime 2}}{a^{2}}-\frac{3\alpha\phi^{\prime}\phi^{\prime\prime}}{4c^{2 }a^{6}}\left(\frac{a^{\prime}}{a}\right)-\Gamma=0 \tag{27}\]
where \(\Gamma\) is given by
\[\Gamma=\bigg{(}\frac{c^{2}\Lambda a^{2}}{3}-\frac{\phi^{\prime 2}}{64a^{2}}+ \frac{c^{2}a^{2}R\phi^{2}}{192}-kc^{2}\bigg{)}G(\alpha), \tag{28}\]
in which
\[G(\alpha)=\Bigg{[}1-\frac{3\alpha}{16c^{2}a^{6}}\ \left(\frac{\frac{13\phi^{ \prime 4}}{128}+\frac{\phi^{\prime 2}}{2}\left(9kc^{2}-Rc^{2}a^{4}-\frac{7c^{2} \Lambda a^{2}}{3}\right)-\frac{7c^{2}a^{2}R\phi^{2}\phi^{\prime 2}}{384}- \frac{\phi^{\prime\prime 2}}{2}-\phi^{\prime}\phi^{\prime\prime\prime}}{ \frac{c^{2}\Lambda a^{2}}{3}-\frac{\phi^{\prime 2}}{64a^{2}}+\frac{c^{2}a^{2}R \phi^{2}}{192}-kc^{2}}\right)\Bigg{]}. \tag{29}\]
Clearly one can notice that in the absence of the \(\alpha R^{2}\) term, the TC condensate \(\phi\) is not generated and we recover the Friedmann equations with curvature and the cosmological constant.
In order to proceed further we consider some approximations: (I) we assume small \(\alpha\) so that the effects of the \(\alpha R^{2}\) term can be treated perturbatively, i.e. \(O(\alpha^{2})\approx 0\), and we get
\[\frac{a^{\prime}}{a}=\frac{3\alpha}{8c^{2}a^{6}}\phi^{\prime}\phi^{\prime\prime }\pm\sqrt{\Gamma}. \tag{30}\]
Note that due to the small \(\alpha\) approximation, it will not be appropriate to use \(m_{2}\) in (9) and we will use \(m_{1}\) only. (II) Putting back eq. (18) in eq. (11) and considering the real part yields \(\phi\sim\cos(\omega\eta)\), where \(\omega=ca\sqrt{\frac{|R|}{3}}\). We make a somewhat naive small \(\eta\) approximation such that \(\sin(\omega\eta)\sim 0;\cos(\omega\eta)\sim 1\), and we recover a tractable result for the Hubble parameter in conformal time as
\[\frac{a^{\prime}}{a}\approx\pm\sqrt{\left(\frac{c^{2}\Lambda a^{2}}{3}+\frac{ c^{2}a^{2}|R|}{192}-kc^{2}\right)}\Bigg{[}1+\left(\frac{\alpha|R|^{2}c^{2}}{192a^{2}} \right)\left(\frac{1}{\frac{c^{2}\Lambda a^{2}}{3}+\frac{c^{2}a^{2}|R|}{192}- kc^{2}}\right)\Bigg{]}. \tag{31}\]
Now let us introduce the deceleration parameter as (in conformal time)
\[q\left(a(\eta)\right)=-\frac{a^{\prime\prime}/a-(a^{\prime}/a)^{2}}{(a^{ \prime}/a)^{2}}. \tag{32}\]
Then from the above equation (32), it is straightforward to compute the deceleration parameter
\[q\left(a(\eta)\right)=-\left(\frac{\frac{c^{2}\Lambda a^{2}}{3}+\frac{c^{2}|R| a^{2}}{192}}{\frac{c^{2}\Lambda a^{2}}{3}+\frac{c^{2}|R|a^{2}}{192}-kc^{2}} \right)\Bigg{[}1-\left(\frac{\alpha c^{2}|R|^{2}}{96a^{2}}\right)\times\left( \frac{(\frac{c^{2}\Lambda a^{2}}{2}+\frac{c^{2}|R|a^{2}}{128}-kc^{2})}{(\frac{ c^{2}\Lambda a^{2}}{3}+\frac{c^{2}|R|a^{2}}{192})(\frac{c^{2}\Lambda a^{2}}{3}+ \frac{c^{2}|R|a^{2}}{192}-kc^{2})}\right)\Bigg{]}. \tag{33}\]
In cosmic time \(t\), the Hubble parameter is given by
\[H=\frac{\dot{a}}{a}\approx\pm\sqrt{\left(\frac{c^{2}\Lambda}{3}+\frac{c^{2}|R|}{ 192}-\frac{kc^{2}}{a^{2}}\right)}\ \Bigg{[}1+\left(\frac{\alpha|R|^{2}c^{2}}{192a^{4}}\right)\left(\frac{1}{ \frac{c^{2}\Lambda}{3}+\frac{c^{2}|R|}{192}-\frac{kc^{2}}{a^{2}}}\right)\Bigg{]}. \tag{34}\]
Defining an effective cosmological constant, \(\Lambda_{\rm eff}=\Lambda+\frac{|R|}{64}\), the Hubble equation can be rewritten as
\[H^{2}=\frac{\dot{a}^{2}}{a^{2}}\approx\frac{c^{2}\Lambda_{\rm eff}}{3}-\frac{ kc^{2}}{a^{2}}+\alpha\frac{|R|^{2}c^{2}}{96a^{4}}. \tag{35}\]
An interesting observation is that, in our crude approximation, the TC condensate correction behaves like a radiation contribution, generated solely from the metric. One can find an analytic solution of the scale factor as
\[a(t)=\sqrt{\frac{3k}{2\Lambda_{\rm eff}}+\frac{\Omega}{2}e^{2c(t-t_{*})\sqrt{ \frac{\Lambda_{\rm eff}}{3}}}+\frac{9k^{2}}{8\Omega\Lambda_{\rm eff}^{2}}e^{-2 c(t-t_{*})\sqrt{\frac{\Lambda_{\rm eff}}{3}}}-\frac{\alpha|R|^{2}}{64 \Lambda_{\rm eff}\Omega}e^{-2c(t-t_{*})\sqrt{\frac{\Lambda_{\rm eff}}{3}}}}, \tag{36}\]
where we have used the boundary condition \(a_{*}=a(t=t_{*})\), and \(\Omega\) is defined as
\[\Omega=\bigg{(}a_{*}^{2}-\frac{3k}{2\Lambda_{\rm eff}}\bigg{)}+\sqrt{\bigg{(} a_{*}^{2}-\frac{3k}{2\Lambda_{\rm eff}}\bigg{)}^{2}+\bigg{(}\alpha\frac{|R|^{2}}{ 32\Lambda_{\rm eff}}-\frac{9k^{2}}{4\Lambda_{\rm eff}^{2}}\bigg{)}}. \tag{37}\]
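The closed-form solution (36)-(37) can be verified against the Hubble equation (35) numerically. The sketch below does so for a flat universe with illustrative parameter values and \(c=1\) (all our assumptions, not values from the text), by differentiating \(a(t)\) and checking the residual of (35):

```python
import numpy as np

# Illustrative values (hypothetical): c = 1, flat universe k = 0
c, k = 1.0, 0.0
alpha, absR, Lam = 0.05, 2.0, 1.0
Lam_eff = Lam + absR / 64.0                        # effective cosmological constant

a_star, t_star = 1.0, 0.0                          # boundary condition a(t_*) = a_*
P = a_star**2 - 3 * k / (2 * Lam_eff)
Omega = P + np.sqrt(P**2 + alpha * absR**2 / (32 * Lam_eff)
                    - 9 * k**2 / (4 * Lam_eff**2))               # eqn. (37)

def a_of_t(t):                                     # eqn. (36)
    e = np.exp(2 * c * (t - t_star) * np.sqrt(Lam_eff / 3))
    return np.sqrt(3 * k / (2 * Lam_eff) + 0.5 * Omega * e
                   + (9 * k**2 / (8 * Omega * Lam_eff**2)
                      - alpha * absR**2 / (64 * Lam_eff * Omega)) / e)

t = np.linspace(0.0, 2.0, 4001)
a = a_of_t(t)
adot = np.gradient(a, t)                           # numerical time derivative

lhs = (adot / a)**2                                # H^2
rhs = c**2 * Lam_eff / 3 - k * c**2 / a**2 + alpha * absR**2 * c**2 / (96 * a**4)
print(np.max(np.abs(lhs - rhs)[5:-5]))             # ~0 up to finite-difference error
```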
Moreover, the TC correction to the age of the universe can be computed as
\[t_{0}=\sqrt{\frac{3}{4c^{2}\Lambda_{\rm eff}}}\ln\frac{(a_{0}^{2}-\frac{3k}{2 \Lambda_{\rm eff}})+\sqrt{(a_{0}^{2}-\frac{3k}{2\Lambda_{\rm eff}})^{2}+( \alpha\frac{|R|^{2}}{32\Lambda_{\rm eff}}-\frac{9k^{2}}{4\Lambda_{\rm eff}^{2 }})}}{(a_{in}^{2}-\frac{3k}{2\Lambda_{\rm eff}})+\sqrt{(a_{in}^{2}-\frac{3k}{2 \Lambda_{\rm eff}})^{2}+(\alpha\frac{|R|^{2}}{32\Lambda_{\rm eff}}-\frac{9k^{2} }{4\Lambda_{\rm eff}^{2}})}} \tag{38}\]
where \(a_{0}=a(t=t_{0})\) and \(a_{in}=a(t=0)\) with \(t_{0}\) representing the present time. In a convenient parametrization, let us write
\[\frac{H(a)}{H_{0}}=\sqrt{\Omega_{\Lambda}+\Omega_{k}\left(\frac{a}{a_{0}} \right)^{-2}+\Omega_{\alpha}\left(\frac{a}{a_{0}}\right)^{-4}} \tag{39}\]
where eqn. (35) is used at \(t=t_{0}\) to define
\[H_{0}^{2}=\frac{c^{2}\Lambda_{eff}}{3}-\frac{kc^{2}}{a_{0}^{2}}+\frac{\alpha c ^{2}|R|^{2}}{96a_{0}^{4}}. \tag{40}\]
Here, \(\Omega_{\Lambda},\;\Omega_{k},\;\Omega_{\alpha}\) are the dimensionless density parameters of the associated fluid components, defined as
\[\Omega_{\Lambda}=\frac{c^{2}\Lambda_{eff}}{3H_{0}^{2}},\;\;\Omega_{k}=-\frac{ kc^{2}}{H_{0}^{2}a_{0}^{2}},\;\;\Omega_{\alpha}=\frac{\alpha c^{2}|R|^{2}}{96H_{0}^{ 2}a_{0}^{4}}. \tag{41}\]
From eq. (39), we get the constraint relation \(\Omega_{\Lambda}+\Omega_{k}+\Omega_{\alpha}=1\) by considering the equation at \(t=t_{0}\). Furthermore, the scale factor \(a(t)\) can be expressed in terms of the density parameters as
\[a(t)=\sqrt{\frac{\chi}{2}e^{2H_{0}\sqrt{\Omega_{\Lambda}}(t-t_{*})}+\left( \frac{\Omega_{k}^{2}}{8\Omega_{\Lambda}^{2}}-\frac{\Omega_{\alpha}}{2\Omega_{ \Lambda}}\right)e^{-2H_{0}\sqrt{\Omega_{\Lambda}}(t-t_{*})}-\frac{\Omega_{k}} {2\Omega_{\Lambda}}}, \tag{42}\]
where
\[\chi(a_{*})=\bigg{(}a_{*}^{2}+\frac{\Omega_{k}}{2\Omega_{\Lambda}}\bigg{)}+ \sqrt{\bigg{(}a_{*}^{2}+\frac{\Omega_{k}}{2\Omega_{\Lambda}}\bigg{)}^{2}+ \bigg{(}\frac{\Omega_{\alpha}}{\Omega_{\Lambda}}-\frac{\Omega_{k}^{2}}{4\Omega _{\Lambda}^{2}}\bigg{)}}. \tag{43}\]
The other branch of solutions for the scale factor \(a(t)\) reads
\[a(t)=\sqrt{-\frac{\chi}{2}e^{2H_{0}\sqrt{\Omega_{\Lambda}}(t-t_{*})}-\left( \frac{\Omega_{k}^{2}}{8\Omega_{\Lambda}^{2}}-\frac{\Omega_{\alpha}}{2\Omega _{\Lambda}}\right)e^{-2H_{0}\sqrt{\Omega_{\Lambda}}(t-t_{*})}-\frac{\Omega_{k} }{2\Omega_{\Lambda}}}. \tag{44}\]
The deceleration parameter, defined equivalently in terms of the cosmic time as \(q\equiv-1-\dot{H}/H^{2}=-1-(a/H)\,dH/da\), turns out to be
\[q(a)=\bigg{[}-1+\bigg{(}\frac{\Omega_{k}(\frac{a}{a_{0}})^{-2}+2\Omega_{ \alpha}(\frac{a}{a_{0}})^{-4}}{\Omega_{\Lambda}+\Omega_{k}(\frac{a}{a_{0}})^{- 2}+\Omega_{\alpha}(\frac{a}{a_{0}})^{-4}}\bigg{)}\bigg{]}. \tag{45}\]
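To make the sign change in \(q(a)\) concrete, a short sketch of eqn. (45) (using \(\Omega_{\alpha}=0.3\) as in Fig. 3; the flat-universe split \(\Omega_{\Lambda}=0.7\), \(\Omega_{k}=0\) is our own illustrative choice) locates the deceleration-to-acceleration transition:

```python
import numpy as np

# Density parameters: Omega_alpha = 0.3 as in Fig. 3; flat universe assumed (our choice)
O_lam, O_k, O_alpha = 0.7, 0.0, 0.3       # constraint: O_lam + O_k + O_alpha = 1

def q(x):
    # eqn. (45) with x = a/a0
    num = O_k * x**-2 + 2 * O_alpha * x**-4
    den = O_lam + O_k * x**-2 + O_alpha * x**-4
    return -1.0 + num / den

x = np.linspace(0.2, 3.0, 2801)
qx = q(x)
sign_flip = x[np.argmax(qx < 0)]          # first x where q turns negative
print(sign_flip)                          # ~ (O_alpha/O_lam)**0.25 ~= 0.81 for a flat universe
print(q(0.3) > 0, q(2.0) < 0)             # early deceleration, late acceleration
```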
## V Results and their implications
Let us finally discuss the major consequences the novel Time Crystal-like condensate can have on the evolution of the universe. As described in section IV, the TC background provides an expression for the scale factor \(a(t)\) in terms of the density parameters \(\Omega_{j}\) (\(j=\Lambda,\alpha,k\)). This leads to two of the well-known cosmological parameters, namely, the Hubble parameter \(H(a)\) and the deceleration parameter \(q(a)\), which offer a clear picture of the dynamics of our universe. In order to understand the dynamics of the universe, in Figs. 1, 2, 3, 4, we provide graphical descriptions of \(H(a)\) and \(q(a)\) for various curvature scenarios and strengths of the TC condensate through \(\alpha\) (generated by the \(\alpha R^{2}\)-term in gravity). In all the figures, \(\Omega_{k}\) can take a positive, negative or zero value corresponding to an open (negative
Figure 1: The evolution of the Hubble parameter \(H\) with respect to the cosmic scale factor \(a\) for a fixed value of \(\Omega_{\alpha}\) (\(=0.3\)) has been depicted for three spatial geometries of the universe, namely, open universe, flat universe, and closed universe.
Figure 2: We display the evolution of the Hubble function for different values of \(\Omega_{\alpha}\) considering three spatial geometries of the universe, namely, open universe (upper left graph), flat universe (upper right graph) and closed universe (lower graph).
curvature), closed (positive curvature) or flat (zero curvature) universe, respectively.
In Fig. 1, for a fixed value of \(\Omega_{\alpha}=0.3\), the evolution of the dimensionless Hubble parameter \(H/H_{0}\) is displayed against \(a/a_{0}\) for positive, negative and vanishing \(\Omega_{k}\). From eqn. (41) it is clear that \(\Omega_{\Lambda},\Omega_{\alpha}\) are always positive, while \(\Omega_{k}\) can change sign since \(k\) can take the three distinct values \((0,+1,-1)\). Only for negative \(\Omega_{k}\) (i.e. positive curvature, a closed universe) does the curvature term contribute with the opposite sign, opening the possibility of a stationary point (in the present case a minimum) in the \(H(a)\ vs.\ a\) profile.
To establish the effects of TC condensate clearly, in Fig. 2, we plot the evolution of the Hubble parameter in terms of \(a/a_{0}\) for three distinct spatial geometries of the universe, namely, \(\Omega_{k}=+0.1\) (upper left graph of Fig. 2), \(\Omega_{k}=0\)
Figure 4: We present the evolution of the deceleration parameter for different \(\Omega_{\alpha}\) considering three spatial geometries of the universe, namely, open universe (upper left graph), flat universe (upper right graph) and closed universe (lower graph).
Figure 3: The evolution of the deceleration parameter for a fixed value of \(\Omega_{\alpha}\) (\(=0.3\)) has been shown for different spatial geometries of the universe, namely, open universe, flat universe, and closed universe.
(upper right graph of Fig. 2) and \(\Omega_{k}=-0.1\) (lower graph of Fig. 2), respectively. For each case we take three distinct values of \(\Omega_{\alpha}\), namely \(\Omega_{\alpha}=0,0.1,0.2\), keeping in mind that it is positive and small, and fixing \(\Omega_{\Lambda}\) accordingly. Notice that for \(\Omega_{k}>0\) (open universe) the profiles are qualitatively similar (see the upper left graph of Fig. 2), with only quantitative differences as \(\Omega_{\alpha}\) increases. However, the graphs change significantly for \(\Omega_{k}=0\) (flat universe), where \(\Omega_{\alpha}=0\) yields the well-known constant value of \(H(a)\) but non-zero \(\Omega_{\alpha}\) shows a distinct variation of \(H(a)\) with respect to \(a\) (see the upper right graph of Fig. 2). Once again, \(\Omega_{k}<0\) (closed universe; see the lower graph of Fig. 2) is the most interesting scenario, where _non-zero values of \(\Omega_{\alpha}\) generate completely different profiles compared to \(\Omega_{\alpha}=0\) (no TC condensate); the former ones have a minimum before saturating for large \(a\)_. It is also easy to see from eqn. (39) that all the curves for different \(\Omega_{\alpha}\) cross at \(a=a_{0}\).
Now we study the behavior of the deceleration parameter \(q(a)\) for the same sets of parameters as considered earlier. Fig. 3 demonstrates a very interesting fact: for any universe (with \(\Omega_{k}\) being positive, negative or zero), a non-trivial \(\Omega_{\alpha}\) induces a _change in the sign of \(q(a)\) from positive to negative as \(a\) increases from 0_. This indicates that the TC condensate generates a decelerating phase before the acceleration starts. Clearly, this indicates that after the initial contracting phase (deceleration) the universe changes to an expanding phase (acceleration).
Once again we analyse the strength of \(\Omega_{\alpha}\) for the three different spatial structures of the universe. In Fig. 4 we have summarized the evolution of the deceleration parameter for \(\Omega_{k}\) being positive (upper left graph of Fig. 4), zero (upper right graph of Fig. 4) and negative (lower graph of Fig. 4), respectively, taking three distinct values of \(\Omega_{\alpha}\) characterizing the strength of the TC background, namely, \(\Omega_{\alpha}=0,0.1,0.2\). The upper left graph of Fig. 4 (corresponding to \(\Omega_{k}>0\)) and the upper right graph of Fig. 4 (\(\Omega_{k}=0\)) show the emergence of the decelerating phase in the early universe before the accelerating phase ensues for all three values of \(\Omega_{\alpha}\). However, from the lower graph of Fig. 4 (corresponding to \(\Omega_{k}<0\)), we find that the condensate can ameliorate the singularity in \(q(a)\) that appears in the conventional case without the \(\alpha R^{2}\) term. This is clear from eqn. (45), where with \(\Omega_{\alpha}=0\), \(q\) becomes singular at \(a/a_{0}=\sqrt{-\Omega_{k}/\Omega_{\Lambda}}\) for negative \(\Omega_{k}\).
Let us recall that in conventional cosmology, a decelerating phase can appear only if matter is introduced from outside. However, in our case, no external matter is introduced and this contracting phase is generated solely by the TC condensate (coming from the \(\alpha R^{2}\) term). We speculate that the TC condensate might be identified as a new kind of matter candidate having radiation-like behaviour. The upper left and right plots of Fig. 4 show how the strength of the TC condensate affects the behavior of \(q\) through different curves that merge with the \(\Omega_{\alpha}=0\) curve asymptotically. However, the situation is very different for \(\Omega_{k}=-0.1\), as depicted in the lower graph of Fig. 4. In the small-\(a\) sector, there appears a discontinuity in \(q\) for \(\Omega_{\alpha}=0\) (standard GR, no \(R^{2}\)-term hence no condensate) that is smoothed out for non-zero \(\Omega_{\alpha}\) (\(R^{2}\)-term with condensate effect), and finally all curves asymptotically saturate to a negative \(q\) at large \(a\).
## VI Summary and conclusions
The idea of the TC is a fascinating concept in physics and, due to its attractive physical insights, it created a significant amount of interest in the physics community within a couple of years of its theoretical proposal [3] (also see [4; 5; 6]). As argued by several investigators, the TC could have some effects on cosmological dynamics [8; 9; 10; 11; 12; 13; 14; 15; 16] and this should be further explored. However, an important qualitative distinction between the TC as introduced in cosmology and the TCC considered here, induced by the quadratic \(R^{2}\) term, needs to be emphasized. Whereas in previous works the TC feature was incorporated in an externally introduced matter sector, in the present case the TCC is generated internally from combinations of the metric tensor components _iff_ the \(R^{2}\)-term in the gravity action exists.
Following this, in the present article we have investigated the effects of the TCC in cosmology, aiming to understand whether the TCC could offer some interesting features about the intrinsic nature of our mysterious universe. Historically, the physics of both the early and the late evolution of our universe has raised several unresolved questions about its nature and origin; many of these fundamental questions still need to be answered.
As has been mentioned throughout, we exploit the property of \(f(R)\sim R+\alpha R^{2}\) that it gives rise to a decoupled system of conventional gravity and a higher derivative scalar sector, and propose a model where the former evolves in a background of the latter, which enjoys a TC phase. Thus, the energy-momentum tensor for the corresponding TCC acts as a source for cosmological dynamics, characterized by the FLRW line element. We found that the scale factor of the FLRW universe can be analytically solved under some approximations, and hence so can the other cosmological parameters. The behaviour of the cosmological parameters, namely, the Hubble rate \(H(a)\) and the deceleration parameter \(q(a)\), has been graphically presented (see Figs. 1, 2, 3, 4) for different spatial geometries of the universe and for different strengths of the TC condensate through \(\alpha\), generated by the \(\alpha R^{2}\)-term in the gravitational action. Our observations clearly show that the TC condensate can significantly affect the cosmological
dynamics. From the evolution of the Hubble rate for a closed universe (see the lower graph of Fig. 2) we notice that non-zero values of \(\Omega_{\alpha}\) generate completely different profiles compared to \(\Omega_{\alpha}=0\) (no TC condensate); the former ones have a minimum before saturating for large \(a\). On the other hand, from Fig. 3, we notice that irrespective of the spatial geometry of the universe, the TC condensate generates a decelerating phase before the acceleration starts. This indicates that after the initial contracting phase (deceleration) of the universe, it enters an expanding phase with acceleration. As the non-trivial \(\Omega_{\alpha}\) offers some interesting results independent of the curvature of the universe, in Fig. 4 we further investigated how the evolution of the deceleration parameter depends on various strengths of the TC condensate quantified through \(\Omega_{\alpha}\) for different spatial geometries. For \(\Omega_{k}>0\) (upper left plot of Fig. 4) and \(\Omega_{k}=0\) (upper right plot of Fig. 4), the universe enters a decelerating phase before the early accelerating phase, and this remains true for all three values of \(\Omega_{\alpha}\). However, for the closed universe (lower graph of Fig. 4), we find that the TC condensate can avoid the singularity in \(q(a)\) that appears in the conventional case without the \(\alpha R^{2}\) term.
The final take-home message of our analysis is the following: the generic ghost problem of higher order gravity theories is absent in \(R^{2}\)-gravity, which, however, is still plagued by the (relatively harmless) additional (_spurious_) scalar degree of freedom. We have shown that, in the Time Crystal framework, this extra scalar can act as a condensate that replaces the vacuum and forms a stable background for conventional gravity, leading to possible improvements with explicit predictions.
Following the existing results and the present outcomes in the context of the late and early universe, we anticipate that the physics of the TC condensate needs considerable attention in cosmological dynamics. In particular, the existence of a radiation-like fluid extracted (purely) out of the geometrical sector strongly highlights this fact. One may naturally wonder whether the TC condensate may lead to some geometrical dark energy in the early universe (an early dark energy fluid) [33] that could offer some new insights into the cosmological tensions [34; 35]. One can further investigate whether the finite time future singularities appearing in cosmological theories can be avoided in this context [36; 37]. There is no doubt that, being an emerging field, understanding the nature and the effects of the TC condensate could open new windows in cosmology and astrophysics. It will be interesting to explore further the effects of the TC condensate in alternative gravitational theories other than the quadratic \(f(R)\) gravity. We hope to investigate some of them in the near future.
## VII Acknowledgments
RKD acknowledges Naresh Saha and Joydeep Majhi for helpful discussions. SP acknowledges the financial support from the Department of Science and Technology (DST), Govt. of India under the Scheme "Fund for Improvement of S&T Infrastructure (FIST)" (File No. SR/FST/MS-I/2019/41).
|
2309.02539 | A Generalized Bandsplit Neural Network for Cinematic Audio Source
Separation | Cinematic audio source separation is a relatively new subtask of audio source
separation, with the aim of extracting the dialogue, music, and effects stems
from their mixture. In this work, we developed a model generalizing the
Bandsplit RNN for any complete or overcomplete partitions of the frequency
axis. Psychoacoustically motivated frequency scales were used to inform the
band definitions which are now defined with redundancy for more reliable
feature extraction. A loss function motivated by the signal-to-noise ratio and
the sparsity-promoting property of the 1-norm was proposed. We additionally
exploit the information-sharing property of a common-encoder setup to reduce
computational complexity during both training and inference, improve separation
performance for hard-to-generalize classes of sounds, and allow flexibility
during inference time with detachable decoders. Our best model sets the state
of the art on the Divide and Remaster dataset with performance above the ideal
ratio mask for the dialogue stem. | Karn N. Watcharasupat, Chih-Wei Wu, Yiwei Ding, Iroro Orife, Aaron J. Hipple, Phillip A. Williams, Scott Kramer, Alexander Lerch, William Wolcott | 2023-09-05T19:19:22Z | http://arxiv.org/abs/2309.02539v3 | # A Generalized Bandsplit Neural Network for Cimematic Audio Source Separation
###### Abstract
Cinematic audio source separation is a relatively new subtask of audio source separation, with the aim of extracting the dialogue, music, and effects stems from their mixture. In this work, we developed a model generalizing the Bandsplit RNN for any complete or overcomplete partitions of the frequency axis. Psychoacoustically motivated frequency scales were used to inform the band definitions which are now defined with redundancy for more reliable feature extraction. A loss function motivated by the signal-to-noise ratio and the sparsity-promoting property of the 1-norm was proposed. We additionally exploit the information-sharing property of a common-encoder setup to reduce computational complexity during both training and inference, improve separation performance for hard-to-generalize classes of sounds, and allow flexibility during inference time with detachable decoders. Our best model sets the state of the art on the Divide and Remaster dataset with performance above the ideal ratio mask for the dialogue stem.
Deep learning, psychoacoustical frequency scale, source separation, cinematic audio
## I Introduction
Audio source separation refers to the task of separating an audio mixture into one or more of its constituent components. More formally, consider a set of source signals \(\mathfrak{U}=\{\mathbf{u}_{i}\colon\mathbf{u}_{i}[n]\in\mathbb{R}^{D_{i}},\ n \in[\![0,M_{i}]\!]\}\), where \(i\) is the source index, \(D_{i}\) is the number of channels in the \(i\)th source, \(n\) is the sample index, \(M_{i}\) is the number of samples in the \(i\)th source, and \([\![a,b]\!]=\mathbb{Z}\cap[a,b]\). Not all of \(\mathfrak{U}\) may be necessarily 'desired'. The desired subset \(\mathfrak{T}\subseteq\mathfrak{U}\) is often referred to as the set of 'target' sources or stems, while the undesired subset \(\mathfrak{N}=\mathfrak{U}\backslash\mathfrak{T}\) is often referred to as the set of 'noise' sources. An input signal to a source separation (SS) system can usually be modeled as a mixing process
\[\mathbf{x}=\sum_{i}\mathcal{T}_{i}(\mathbf{u}_{i})\in\mathbb{R}^{C\times N}, \tag{1}\]
where \(C\) is the number of channels in the mixture, \(N\) is the number of samples in the mixture, and \(\mathcal{T}_{i}\colon\mathbb{R}^{D_{i}\times M_{i}}\mapsto\mathbb{R}^{C\times N}\) is an audio signal transformation on the \(i\)th source. Some common operations represented by \(\mathcal{T}_{i}\) are the identity transformation, which produces an instantaneous mixture often seen in synthetic data; a convolution, which produces a convolutive mixture often used to model a linear time-invariant (LTI) process; and a nonlinear transformation, often seen in music mixing process. The goal of an SS system is then to recover one, some, all, or composites of the elements of \(\mathfrak{T}\), up to some allowable deformation [1, 2]. Note, however, that (1) does not take into account global nonlinear operations such as dynamic compression.
Composite targets are also often encountered in tasks such as music (e.g. the 'accompaniment' stem) or cinematic SS (e.g. the 'effects' stem), where the true number of component stems a composite target may contain can be fairly large. For simplicity concerning composite targets and multichannel sources, we will denote \(\mathfrak{S}=\{\mathbf{s}_{i}\colon\mathbf{s}_{i}=\sum_{j}\mathcal{T}_{j}( \mathbf{u}_{j}),\ \mathbf{u}_{j}\in\mathfrak{T},\ \mathbf{s}_{i}[n]\in\mathbb{R}^{C},\ n \in[\![0,N]\!]\}\) as the set of 'computational targets' of the algorithms. 'Targets' in this manuscript will refer to \(\mathfrak{S}\), as opposed to \(\mathfrak{T}\).
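For concreteness, a small sketch of the mixing model (1) with an instantaneous \(\mathcal{T}_{i}\) and a composite music target (the stem choices, panning transform, and random placeholder sources are ours, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
C, N = 2, 44100                                   # stereo mixture, 1 s at 44.1 kHz

# Hypothetical mono sources u_i (placeholders for dialogue/music/effects content)
dialogue = rng.standard_normal(N)
music_a, music_b = rng.standard_normal(N), rng.standard_normal(N)
fx = rng.standard_normal(N)

def T(u, pan=0.5):
    """Instantaneous T_i: mono -> stereo via constant-power panning."""
    return np.stack([np.cos(pan * np.pi / 2) * u, np.sin(pan * np.pi / 2) * u])

# Composite computational targets s_i (music stem = sum of two component sources)
s_dialogue = T(dialogue, pan=0.5)
s_music = T(music_a, pan=0.2) + T(music_b, pan=0.8)
s_fx = T(fx, pan=0.6)

x = s_dialogue + s_music + s_fx                   # mixture per (1)
assert x.shape == (C, N)
```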
Cinematic audio source separation (CASS) is a relatively new subtask of audio SS, most commonly concerned with extracting the dialogue, music, and effects stems from their mixture. Research traction in this new subtask can be credited to Petermann et al. [3, 4] and the Cinematic Sound Demixing track of the Sound Demixing Challenge [5], introduced in 2023. While the setup of the task can be easily generalized from standard SS setups, the nature of cinematic audio poses a unique problem not commonly seen in speech or music SS. Specifically, CASS is closely related to universal audio SS, in which nearly all ontological categories of audio (speech, music, sounds of things, and environmental sounds) must be retrieved with equal or similar importance. Moreover, the "music" and "effects" stems can be very non-homogeneous. Music can consist of sounds made by a very wide variety of acoustic, electronic, and synthetic musical instruments. More challengingly, the effects stem consists of anything that is _not_ speech or music, but sometimes also includes sounds made by musical instruments in a non-musical context.
In this work, we adapted the Bandsplit RNN (BSRNN) [6] from the music SS task to the CASS task. In particular, we generalized the BSRNN architecture to potentially overlapping band definitions, introduced a loss function based on a combination of the 1-norm and the SNR loss, and modified the BSRNN from a set of single-stem models to a common-encoder system that can support any number of decoders. We further provide empirical results to demonstrate that the common-encoder setup provides superior results for hard-to-learn stems and allows generalization to previously untrained targets without the need for retraining the entire model. To the best of our knowledge, our proposed method1 is currently the state of the art on the Divide and Remaster (DnR) dataset [3].
Footnote 1: Replication code is available at github.com/karnwatcharasupat/bandit.
## II Related Work
Most early audio SS research was originally focused on a mixture of speech signals, particularly due to the reliance on statistical signal processing and latent variable models [7], which do not work well with more complex audio signals such as music or environmental sounds. Specifically, most early systems [8, 9, 10] assume an LTI mixing process, allowing for retrieval of target stems by means of filtering [11], matrix (pseudo-)inversion for (over)determined systems \(C\geq D_{i}\)[12], or other similarity-based methods for underdetermined systems [13]. These methods, however, often require fairly strong assumptions on the source signals such as statistical independence, stationarity, and/or sparsity.
As computational hardware became more powerful, more computationally complex methods also became viable. This allowed for the relaxation of many statistical requirements placed on the signals in pursuit of more data-driven methods and the possibility of performing SS on nonlinear mixtures of highly correlated stems. Time-frequency (TF) masking, in particular, became the dominant method of source extraction in deep SS [14]. While this has led to major improvements in extracted audio quality, it came at the sacrifice of the interpretability once enjoyed in latent variable models.
Denote \(\mathbf{X}\in\mathbb{C}^{C\times F\times T}\) as the STFT of \(\mathbf{x}\), where \(F\) is the number of non-redundant frequency bins and \(T\) is the number of time frames. Similarly, denote \(\mathbf{S}_{i}\) as the STFT of the \(i\)th target source. Most masking SS systems use some form of \(\hat{\mathbf{S}}_{i}=\mathbf{X}\circ\mathbf{M}\), where \(\hat{\mathbf{S}}_{i}\) is the estimate of \(\mathbf{S}_{i}\), \(\circ\) is elementwise multiplication with broadcasting, and \(\mathbf{M}\) is the TF mask. Depending on the method, \(\mathbf{M}\) may be binary, real-valued, or complex-valued, and has the same TF shape as \(\mathbf{X}\), but may or may not be predicted separately for each channel. Although some works have generalized the masking operation to include additive components [15] or more complex operations [16], direct masking still remains the most common method of source extraction, particularly due to its direct connection with time-variant convolution in the time domain. Many deep architectures have been proposed to predict the TF masks: Open-Unmix [17] used bidirectional LSTM (BiLSTM) to obtain a magnitude mask; SepFormer [18] applied a transformer to predict masks for speech separation, improving the performance while allowing parallel computing; (Conv-)TasNet [19, 20] used masks on real-valued basis projections to allow real-time separation.
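A minimal sketch of the direct masking operation \(\hat{\mathbf{S}}_{i}=\mathbf{X}\circ\mathbf{M}\) described above, using PyTorch STFT utilities (the random mask is a placeholder standing in for a network prediction):

```python
import torch

n_fft = 2048
win = torch.hann_window(n_fft)
x = torch.randn(2, 44100)                              # stereo mixture waveform

X = torch.stft(x, n_fft=n_fft, window=win, return_complex=True)   # (C, F, T)
# Placeholder complex-valued TF mask M; in practice, predicted by the separator
M = torch.randn(X.shape, dtype=torch.complex64)
S_hat = X * M                                          # elementwise TF masking
s_hat = torch.istft(S_hat, n_fft=n_fft, window=win, length=x.shape[-1])
```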
Despite the popularity of mask-based methods, several works have explored mask-free architectures. Wave-U-Net [21] applies the U-Net structure to directly modify the mixture waveform. Built on Wave-U-Net, Demucs [22] incorporates a BiLSTM at the bottleneck. Hybrid Demucs [23] extends the idea of combining time and frequency domains by applying two separated U-Nets for each domain with a shared bottleneck BiLSTM for cross-domain information fusion. Hybrid Transformer Demucs [24] further improves the performance by replacing the BiLSTM bottleneck with a transformer bottleneck. KUIELab-MDX-Net [25] combines Demucs with a frequency-domain, U-Net-based architecture and uses a weighted average as the final output.
Under the definition in (1), a number of non-generative audio enhancement tasks can also be considered special cases of audio SS, despite often not being actively thought of as one. Most non-generative implementations of noise suppression [26, 27], audio restoration [28], and dereverberation [29, 30] can be considered as an SS task with a noisy (and/or wet) mixture as input, and clean (and/or dry) target source as output. Dialogue enhancement often requires SS to extract the constituent stems before loudness adjustment is applied [31]. Extraction of the dialogue stem in CASS, in particular, can be seen as closely related to the task of speech enhancement, while that of the music-and-effects (M&E) stem can be seen as a speech suppression task.
Among deep learning-based SS models, several common meta-architectures exist. Models such as Open-Unmix [17] and BSRNN [6] have one fully independent model for each stem, with no shared learnable layer. While this is
very simple to train, fine-tune, and perform inference with, the model suffers from the lack of information sharing between the stem-specific models. Adding additional stems to this system involves creating a completely separate network.
Some systems, such as Demucs [23, 24] and Conv-TasNet [20], use one shared model for all stems. This means that training and inference must happen for all stems at the same time. This setup is perhaps the most beneficial in terms of information sharing, but it is also difficult to understand the flow of information within the system, as all intermediate representations are entangled up until the last layer. It can also be very difficult to add an additional stem to the model, as it is not trivial to decide which part of the model parameters may be safe to freeze or unfreeze.
## III Proposed Method
Our proposed method builds upon the BSRNN model proposed in [6]. BSRNN itself is related to works that split the frequency bands into several different groups [32, 33], and those that apply multi-path recurrent networks to deal with long sequences [34, 35]. The original BSRNN is very similar in structure to our proposed model in Fig. 1, but with a separate model per stem. Each BSRNN model consists of a bandsplitting module, a TF modeling module, and a mask estimator. The bandsplitting module in [6] partitions an input spectrogram along its frequency axis into \(B\) disjoint "bands", then, in parallel, performs a normalization and an affine transformation for each band. Each affine transformation has the same number, \(D\), of output neurons. The TF module consists of a stack of bidirectional RNNs operating alternatingly along the time and band axes of the feature map. In [6], this consists of a stack of 12 pairs of residually-connected BiLSTMs. Finally, the mask estimation module consists of \(B\) parallel feedforward modules which produce \(B\) bandwise complex-valued masks.
The overview of the proposed model is shown in Fig. 1. For clarity, BSRNN will only refer to the original model in [6]. Our proposed model will be referred to as "BandIt"2.
Footnote 2: From **bandsplit**, and a reference to the multi-armed bandit problem.
### _Common Encoder_
In this work, we propose to use a common-encoder multiple-decoder system. Treating multi-stem SS as a multi-task problem, this is akin to hard parameter sharing. This system allows information sharing to occur freely in the encoder section, but not in the decoder. It is likely that this can improve the information efficiency and generalizability of the model [36, 37]. A downside of this system is that adding a new decoder may or may not require the encoder to be retrained, depending on the generalizability of the feature maps after the initial training with the original set of stems.
In addition to the potential information-theoretic benefits, the common-encoder structure offers a more practical benefit in terms of computational requirements. Training using the common-encoder system can considerably reduce the number of parameters needed, and thus reduce memory and hardware requirements. Additionally, in the case where not all decoders can be trained concurrently, simultaneous training can still be approximated by only attaching a subset of the decoders at each optimization step and alternating over them. Finally, this allows an arbitrary number of decoders to be attached and detached as needed during inference.
As seen in Fig. 1, BSRNN can be modified into a common-encoder BandIt by sharing all modules up to and including the TF modeling module and only splitting into stem-specific modules at the mask estimator section. Of course, many other possible points of splitting exist; we chose to split only after the TF modeling module in order to force it to learn a common representation that will work for all three stems.
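A wiring-level sketch of the common-encoder meta-architecture (the module internals below are simple stand-ins, not the actual bandsplit, TF modeling, or mask estimation modules); decoders can be attached, detached, or selected per forward pass:

```python
import torch
import torch.nn as nn

class CommonEncoderSeparator(nn.Module):
    """Shared encoder with one detachable decoder per stem (wiring sketch only)."""

    def __init__(self, stems=("speech", "music", "effects"), d=128):
        super().__init__()
        # Stand-in for the shared trunk (bandsplit + TF modeling in the real model)
        self.encoder = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, d))
        # One decoder (mask estimator) per stem; new stems just add an entry here
        self.decoders = nn.ModuleDict({s: nn.Linear(d, d) for s in stems})

    def forward(self, features, stems=None):
        z = self.encoder(features)                     # shared representation
        stems = stems or list(self.decoders.keys())    # subset selection at inference
        return {s: self.decoders[s](z) for s in stems}

model = CommonEncoderSeparator()
feats = torch.randn(4, 100, 128)                       # (batch, time, feature)
out = model(feats, stems=["speech"])                   # run only one decoder
```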
### _Bandsplit Module_
The original definition of the bands in BSRNN has two clear attributes: (A1) the bandwidth in Hz generally increases with its constituent frequencies, and (A2) the number of bands is high in regions where the sources of a stem are typically most active. From a data compression perspective, this translates to the assumption that (B1) information content per Hz decreases with increasing frequency, and (B2) information content is positively correlated to source activity. Both "priors" may seem trivial. However, the implementation can be tricky, as we will discuss below.
In [6], band definitions were mostly handcrafted for each stem. This potentially limits the generalizability of the model and makes architecture design difficult when dealing with stems with unpredictable, non-homogeneous content such as the "other" stem in MUSDB18 [38] and the effects stem in cinematic audio. In other words, the model is prone to prior mismatch when dealing with very diverse content. Moreover, the band definitions in [6] are all disjoint, i.e., each frequency bin is allocated to only one band. From a system reliability perspective, this means that the very first layer of BSRNN already has no redundancy provisioned; any loss of information occurring during the first affine transformation cannot
Figure 1: Overview of the proposed model architecture, Bandit.
be recovered by other parallel affine modules. This also disproportionately affects semantic structures (i.e. the "blobs" in a spectrogram) that are located around the band edges, since they will be broken up into two disjoint bands, with the result that neither band can encode their information well.
To deal with these issues, we limit the prior assumption to only (B1), turning to psychoacoustically motivated band definitions in lieu of handcrafting. Additionally, we propose to add redundancy to the bandsplitting process in an attempt to reduce the amount of early information loss. Specifically, we will investigate five different band definitions based on four frequency scales with psychoacoustic motivations, namely, the mel scale, the equivalent rectangular band (ERB) scale, the Bark scale, and the 12-tone equal temperament (12-TET) Western musical scale. Note that we do not directly use the bandwidths associated with the ERB and the Bark scale, but rather take the scale value as a rough approximation of the number of critical bands below it.
For all scale-filterbank combinations, the proposed splitting process is as follows. The minimum scale value \(z^{\text{min}}\) and the maximum \(z^{\text{max}}\) are computed first. For all scales, \(z^{\text{max}}\) is given by \(z(0.5f_{\text{s}})\), where \(z\colon\mathbb{R}_{0}^{+}\mapsto\mathbb{R}\) is the mapping function from Hz to the scale's unit, and \(f_{\text{s}}\) is the sampling rate in Hz. For the mel, ERB, and Bark scales, \(z^{\text{min}}=0\). The value of \(z^{\text{min}}\) for the musical scale will be detailed later. For \(B\) bands, the center frequencies, in each respective scale, are given by
\[\zeta_{n}=z(0.5f_{\text{s}})\cdot(n+1)\,/(B+2). \tag{2}\]
The frequency weights \(\mathbf{W}\in[0,1]^{B\times F}\) are then computed using a filterbank of choice, and its weights normalized so that \(\sum_{b}\mathbf{W}[b,f]=1,\forall f\in[\![0,F]\!)\). Using the filterbank values, the band definitions are then created using a simple binarization criterion

\[\mathfrak{F}_{b}=\{f\in[\![0,F]\!)\colon\mathbf{W}[b,f]>0\},\ \forall b\in[\![0,B]\!). \tag{3}\]

We then define a subband \(\mathbf{X}_{b}\in\mathbb{C}^{C\times F_{b}\times T}\) of \(\mathbf{X}\) such that

\[\mathbf{X}_{b}=\mathbf{X}[\!:_{c},\mathfrak{F}_{b},\!:_{t}],\ \forall b\in[\![0,B]\!). \tag{4}\]
The scales and the filterbanks used are detailed as follows, and visualized in Fig. 2.
#### 2.2.1 Mel Scale
The mel scale is one of the most used scales for the calculation of input features, such as the (log-)mel spectrogram and the mel-frequency cepstrum coefficients, for many audio tasks in machine learning and information retrieval. It is a measure of _tone height_[39]. In this work, we use the mel scale given in [40, p.128], where
\[z_{\text{mel}}(f)=2595\log_{10}\left(1+f/700\right). \tag{5}\]
The filterbank used comprises triangular-shaped filters, with the \(b\)th filter having band edges \(\zeta_{b-1}\) and \(\zeta_{b+1}\), similar to the implementations in librosa [41] and PyTorch [42].
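A sketch of the band-definition procedure of eqns. (2)-(3) using a triangular mel filterbank, here built with librosa for convenience (librosa's mel filter placement approximates, but does not exactly follow, the center frequencies of eqn. (2)):

```python
import numpy as np
import librosa

fs, n_fft, B = 44100, 2048, 64
F = n_fft // 2 + 1

W = librosa.filters.mel(sr=fs, n_fft=n_fft, n_mels=B)       # (B, F) triangular weights
W = W / np.clip(W.sum(axis=0, keepdims=True), 1e-12, None)  # normalize: sum_b W[b, f] = 1

# eqn. (3): band b contains every bin with nonzero weight (bands overlap)
bands = [np.flatnonzero(W[b] > 0) for b in range(B)]
print(bands[10])                           # contiguous, overlapping run of FFT bin indices
print(sum(len(fb) for fb in bands) > F)    # True: overcomplete partition of the F bins
```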
#### 2.2.2 Bark scale
The Bark scale [43] "relates acoustical frequency to perceptual frequency resolution, in which one Bark covers one critical bandwidth [40, p.128]". Also known as the _critical band rate_, the Bark scale is constructed from the bandwidth of measured frequency groups [39]. Unlike the mel scale, the Bark scale is more concerned with the widths of the critical bands than the center frequencies themselves. In this work, we use the approximation [44] given by
\[z_{\text{bark}}(f)=6\sinh^{-1}\left(f/600\right). \tag{6}\]
For the Bark scale, we experimented with two filterbanks. One is a Bark filterbank implementation provided by Spafe [45], and another is a simple triangular filterbank similar to the mel and ERB scales. The former will be referred to as the "Bark" bands, and the latter as "TriBark".
#### 2.2.3 Equivalent Rectangular Bandwidth Scale
The equivalent rectangular bandwidth (ERB) was designed with a similar motivation to the Bark scale. The ERB is an approximation of the bandwidth of the human auditory filter at a given frequency. The ERB scale is a related scale that computes the number of ERBs below a certain frequency. The ERB scale can be modeled as [46]
\[z_{\text{erb}}(f)=\ln\left(1+4.37\times 10^{-3}f\right)/(24.7\cdot 4.37\times 1 0^{-3}). \tag{7}\]
The filterbank is computed similarly to that of the mel scale.
#### 2.2.4 12-TET Western Musical Scale
The 12-TET scale is the most common form of Western musical scale used today. Using a reference frequency of \(f_{\text{ref}}=440\,\mathrm{Hz}\), the unrounded MIDI note number of a
Figure 2: Frequency ranges of each band, by band type, for a 64-band setup with a sampling rate of 44.1 kHz and an FFT size of 2048 samples.
particular pitch can be represented by
\[\tilde{z}_{\text{mus}}(f)=69+12\log_{2}\left(f/f_{\text{ref}}\right). \tag{8}\]
Crucially, scaling a frequency by a factor of \(k\) always leads to a constant change of \(12\log_{2}k\) in this scale, i.e.,
\[\tilde{z}_{\text{mus}}(kf)=\tilde{z}_{\text{mus}}(f)+12\log_{2}k. \tag{9}\]
This ensures that the \(k\)th harmonic of a sound is always \(12\log_{2}k\) note numbers away from its fundamental, regardless of the fundamental pitch -- a property that the mel, ERB, and Bark scales do not enjoy. In practice, since \(\tilde{z}_{\text{mus}}(f\to 0^{+})\to-\infty\), we instead set the scale value as
\[z_{\text{mus}}(f)=\max\left[z_{\text{mus}}^{\text{min}},\tilde{z}_{\text{mus }}\left(f\right)\right], \tag{10}\]
where \(z_{\text{mus}}^{\text{min}}=\tilde{z}_{\text{mus}}\left(f_{\text{s}}/N_{ \text{FFT}}\right)\), and \(N_{\text{FFT}}\) is the FFT size.
In this work, the filterbank for the musical scale is implemented using rectangular filters with the \(b\)th filter having band edges \(\zeta_{b-1}\) and \(\zeta_{b+1}\). All filters, except for the lowest and highest bands, have the same bandwidth in cents, before being discretized to match FFT bins. For brevity, we will refer to this band type simply as "musical". A comparison of the five proposed band definitions is shown in Fig. 2.
### Bandwise Feature Embedding
After splitting, each of the subbands is viewed as a real-valued tensor in \(\mathbb{R}^{2CF_{b}\times T}\) by collapsing the channel and frequency axes and then concatenating its real and imaginary parts. As with BSRNN [6, Fig. 1b], each band is passed through a layer normalization and an affine transformation with \(D=128\) output units along the pseudo-frequency axis. The feature embedding process is denoted by \(\mathcal{P}_{b}\colon\mathbb{C}^{C\times F_{b}\times T}\mapsto\mathbb{R}^{D \times T}\). The bandwise feature tensors are then stacked to obtain the full-band feature tensor \(\mathbf{V}\in\mathbb{R}^{D\times B\times T}\) such that \(\mathbf{V}[:,b,:]=\mathcal{P}_{b}(\mathbf{X}_{b})\), \(\forall b\in[\![0,B]\!)\). Except for the Bark model, the feature embedding module accounts for approximately 600 k parameters in a 64-band setup.
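A sketch of the bandwise embedding \(\mathcal{P}_{b}\) (per-band normalization plus affine projection), stacking into \(\mathbf{V}\); the normalization placement follows our reading of the text and may differ in detail from the reference implementation:

```python
import torch
import torch.nn as nn

class BandEmbedding(nn.Module):
    """Per-band LayerNorm + Linear, stacked into V of shape (batch, D, B, T)."""

    def __init__(self, bands, C=2, D=128):
        super().__init__()
        self.bands = bands                              # list of FFT-bin index tensors
        self.embed = nn.ModuleList(
            nn.Sequential(nn.LayerNorm(2 * C * len(fb)), nn.Linear(2 * C * len(fb), D))
            for fb in bands
        )

    def forward(self, X):                               # X: (batch, C, F, T), complex
        vs = []
        for fb, emb in zip(self.bands, self.embed):
            Xb = X[:, :, fb, :]                         # subband, eqn. (4)
            Xb = torch.cat([Xb.real, Xb.imag], dim=1)   # (batch, 2C, F_b, T)
            Xb = Xb.flatten(1, 2).transpose(1, 2)       # (batch, T, 2C*F_b)
            vs.append(emb(Xb).transpose(1, 2))          # (batch, D, T)
        return torch.stack(vs, dim=2)                   # V: (batch, D, B, T)

bands = [torch.arange(0, 40), torch.arange(30, 80)]     # toy overlapping bands
emb = BandEmbedding(bands, C=2, D=128)
X = torch.randn(4, 2, 1025, 100, dtype=torch.complex64)
V = emb(X)                                              # (4, 128, 2, 100)
```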
### Time Frequency Modeling
As with BSRNN [6, Fig. 1c], the feature tensor \(\mathbf{V}\) is passed through a series of residual recurrent neural networks (RNNs) with affine projection, alternating its operation between the time and frequency axes. In this work, we reduced the number of residual RNN pairs from 12 to 8 and also opted to use Gated Recurrent Units (GRUs) instead of Long-Short Term Memory (LSTM) units as the RNN backbone. As with [6], each RNN has \(2D\) hidden units. The overall operation of this module is represented by the transformation \(\mathcal{R}\colon\mathbb{R}^{D\times B\times T}\mapsto\mathbb{R}^{D\times B \times T}\) to obtain the output \(\mathbf{\Lambda}=\mathcal{R}\left(\mathbf{V}\right)\in\mathbb{R}^{D\times B \times T}\). TF modeling with 8 residual GRU pairs accounts for 10.5 M trainable parameters3.
Footnote 3: Due to the computational complexity of backpropagation through time with long sequences, we experimented with replacing the RNNs with transformer encoders or convolutional layers. With similar numbers of parameters and all else being equal, these were not able to match the performance of an RNN-based module.
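The alternating time/band scan can be sketched as follows (a simplified PyTorch illustration; directionality and normalization details are assumptions on our part):

```python
import torch
import torch.nn as nn

class ResidualRNN(nn.Module):
    """One residual GRU + affine projection over one axis (a sketch)."""
    def __init__(self, d_model=128):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        # 2D hidden units per the text; bidirectionality is assumed here
        self.rnn = nn.GRU(d_model, 2 * d_model, batch_first=True,
                          bidirectional=True)
        self.proj = nn.Linear(4 * d_model, d_model)

    def forward(self, v):                     # (batch * other_axis, L, D)
        h, _ = self.rnn(self.norm(v))
        return v + self.proj(h)

def tf_modeling(v, time_rnns, band_rnns):
    """Alternate pairs of time-axis and band-axis passes over V (D, B, T)."""
    batch, d, b, t = v.shape
    for t_rnn, b_rnn in zip(time_rnns, band_rnns):
        x = v.permute(0, 2, 3, 1).reshape(batch * b, t, d)   # scan over time
        v = t_rnn(x).reshape(batch, b, t, d).permute(0, 3, 1, 2)
        x = v.permute(0, 3, 2, 1).reshape(batch * t, b, d)   # scan over bands
        v = b_rnn(x).reshape(batch, t, b, d).permute(0, 3, 2, 1)
    return v                                   # (batch, D, B, T)
```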
### Overlapping Mask Estimation and Recombination
At this stage, the shared feature \(\mathbf{\Lambda}\) is passed to a separate mask estimator for each stem. The internal implementation of the mask estimation module is identical to that of the original BSRNN. The overall operation of this module is represented by \(\mathcal{Q}_{b,\text{re}}^{(i)},\mathcal{Q}_{b,\text{im}}^{(i)}\colon\mathbb{R }^{D\times B\times T}\mapsto\mathbb{R}^{C\times F_{b}\times T}\) to obtain the bandwise mask
\[\mathbf{M}_{b}^{(i)}=\mathcal{Q}_{b,\text{re}}^{(i)}(\mathbf{\Lambda}_{b})+ \jmath\mathcal{Q}_{b,\text{im}}^{(i)}(\mathbf{\Lambda}_{b})\in\mathbb{C}^{C \times F_{b}\times T}. \tag{11}\]
With overlapping bands, however, the full-band mask can no longer be trivially obtained using stacking. We used weighted recombination to obtain \(\mathbf{M}^{(i)}\in\mathbb{C}^{C\times F\times T}\), such that
\[\mathbf{M}^{(i)}[c,f,t]=\sum_{b}\mathbf{W}_{b}[f]\cdot\mathbf{M}_{b}^{(i)}[c,f-\min\mathfrak{F}_{b},t]. \tag{12}\]
A simplified illustration with two bands is shown in Fig. 3. Note that although \(\mathbf{W}_{b}\) is used as the recombination weight, it could be omitted: \(\mathbf{W}_{b}\), or more appropriate weights, can be learned by the model and absorbed into \(\mathbf{M}_{b}^{(i)}\). In other words, the role of \(\mathbf{W}_{b}\) in the mask estimation module is more of an initialization than a fixed parameter. Except for the Bark model, whose very wide bands entail a higher number of parameters, the mask estimation module accounts for roughly 25 M parameters in a 64-band setup.4
Footnote 4: We have also attempted a combination of multiplicative and additive masks in this work. However, we found that the inclusion of the additive mask did not lead to any appreciable improvement. We hypothesize that the channel capacity of the model is simply insufficient to reconstruct a sufficiently good full-resolution additive spectrogram, as a non-zero additive term will only lead to more artifacts.
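Eq. (12) amounts to a weighted overlap-add across bands, sketched below (NumPy; the function names and the `(lo, hi)` band-slice convention are ours):

```python
import numpy as np

def recombine_masks(band_masks, band_slices, weights, n_freq):
    """Weighted overlap-add of bandwise masks, per Eq. (12).

    band_masks[b]: complex array (C, F_b, T); band_slices[b]: (lo, hi) bin
    range of band b; weights[b]: (F,) recombination weight for band b.
    """
    c, _, t = band_masks[0].shape
    full = np.zeros((c, n_freq, t), dtype=complex)
    for m_b, (lo, hi), w in zip(band_masks, band_slices, weights):
        full[:, lo:hi, :] += w[lo:hi, None] * m_b   # shift by min F_b and add
    return full
```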
### Loss Function
We initially experimented with the loss function originally used in [6, 47], whose stem-wise contribution is given by
\[\mathcal{L}_{p}^{(i)}=\|\hat{\mathbf{s}}_{i}-\mathbf{s}_{i}\|_{p}+\|\Re[\hat{ \mathbf{S}}_{i}-\mathbf{S}_{i}]\|_{p}+\|\Im[\hat{\mathbf{S}}_{i}-\mathbf{S}_{ i}]\|_{p}, \tag{13}\]
and \(p=1\). While calculating the loss for the real and imaginary parts separately may seem like a somewhat inelegant approximation, there is a desirable gradient behavior that justifies doing so over calculating a norm of complex differences. Consider \(\mathbf{y}=\mathbf{u}+\jmath\mathbf{v}\) and \(\hat{\mathbf{y}}=\hat{\mathbf{u}}+\jmath\hat{\mathbf{v}}\). The gradient of the 1-norm of a complex difference vector gives
\[\partial\|\hat{\mathbf{y}}-\mathbf{y}\|_{1}=\sum\nolimits_{i}\frac{(\hat{u}_{ i}-u_{i})\partial\hat{u}_{i}+(\hat{v}_{i}-v_{i})\partial\hat{v}_{i}}{\sqrt{(\hat{u}_{ i}-u_{i})^{2}+(\hat{v}_{i}-v_{i})^{2}}}. \tag{14}\]
Figure 3: **A simplified illustration of overlapping mask recombination.**
This indicates that the gradient \(\partial\hat{u}_{i}\) will be scaled down if the error on \(\hat{v}_{i}\) is high and vice versa, diluting the sparseness-encouraging property of a \(1\)-norm. On the other hand, treating the real and imaginary parts separately yields
\[\partial\left(\|\hat{\mathbf{u}}-\mathbf{u}\|_{1}+\|\hat{\mathbf{v}} -\mathbf{v}\|_{1}\right)\\ =\sum\nolimits_{i}\mathrm{sgn}(\hat{u}_{i}-u_{i})\partial\hat{u} _{i}+\mathrm{sgn}(\hat{v}_{i}-v_{i})\partial\hat{v}_{i}, \tag{15}\]
which enjoys the same sparsity benefit of a \(1\)-norm for real-valued differences.
Both acoustically and perceptually, however, the magnitudes of both the time-domain signal and the STFT follow a logarithmic scale. Each of the stems can also have very different energies due to foreground (e.g., dialogue) sources conventionally being mixed louder than background (e.g., music and effects) sources. Inspired by the success of negative signal-to-noise ratio (SNR) as a loss function, we experimented with a generalization to a \(p\)-norm that tackles both of these issues, i.e.,
\[\mathcal{D}_{p}(\hat{\mathbf{y}};\mathbf{y})=10\log_{10}\left[(\|\hat{\mathbf{ y}}-\mathbf{y}\|_{p}^{p}+\epsilon)/(\|\mathbf{y}\|_{p}^{p}+\epsilon)\right], \tag{16}\]
where \(\epsilon\) is a stabilizing constant, setting the minimum of the distance to \(-10\log_{10}(\epsilon^{-1}\|\mathbf{y}\|_{p}^{p}+1)\), which is numerically stable for \(\epsilon\not\ll\|\mathbf{y}\|_{p}^{p}\). In this work, we set \(\epsilon=10^{-3}\). Analyzing the differential of \(\mathcal{D}_{p}\) gives
\[\partial\mathcal{D}_{p}=\log_{10}(e^{10})\cdot(\|\hat{\mathbf{y}}-\mathbf{y} \|_{p}^{p}+\epsilon)^{-1}\cdot\partial\|\hat{\mathbf{y}}-\mathbf{y}\|_{p}^{p} \tag{17}\]
which allows the model to take smaller updates when it is less confident, and larger updates once it is more confident. Gradient explosion is prevented by \(\epsilon\) since the magnitude of the gradients cannot rapidly increase once \(\|\hat{\mathbf{y}}-\mathbf{y}\|_{p}^{p}\ll\epsilon\). Note also the importance of \(p\) on the differential, since
\[\partial\mathcal{D}_{1}(\hat{\mathbf{y}};\mathbf{y}) =\frac{\log_{10}(e^{10})}{\|\hat{\mathbf{y}}-\mathbf{y}\|_{1}+ \epsilon}\sum_{i}\mathrm{sgn}(\hat{y}_{i}-y_{i})\cdot\partial\hat{y}_{i}, \tag{18}\] \[\partial\mathcal{D}_{2}(\hat{\mathbf{y}};\mathbf{y}) =\frac{2\log_{10}(e^{10})}{\|\hat{\mathbf{y}}-\mathbf{y}\|_{2}^{ 2}+\epsilon}\sum_{i}(\hat{y}_{i}-y_{i})\cdot\partial\hat{y}_{i}. \tag{19}\]
While both differentials are globally modulated by the inverse norm of the error, \(\partial\mathcal{D}_{2}\) is more prone to outliers in the early stage of training and to the vanishing gradient problem in the later stage, due to the elementwise multiplier of \(\partial\hat{y}_{i}\) being dependent on the elementwise error magnitude. On the other hand, the elementwise multiplier in \(\partial\mathcal{D}_{1}\) only depends on the sign of the error and thus does not suffer from either problem. Combining \(\mathcal{D}_{1}\) with the original loss function gives
\[\mathcal{L}_{\text{proposed}}=\mathcal{D}_{1}(\hat{\mathbf{s}};\mathbf{s})+ \mathcal{D}_{1}(\Re\hat{\mathbf{S}};\Re\mathbf{S})+\mathcal{D}_{1}(\Im\hat{ \mathbf{S}};\Im\mathbf{S}), \tag{20}\]
which we will refer to as the proposed "L1SNR" loss. In practice, care must be taken to ensure that the DFT used in the STFT is normalized such that all loss terms are on a similar scale, or appropriate weightings should be used.
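A compact PyTorch rendering of Eqs. (16) and (20) is given below (a sketch; the reduction over batch elements and any per-term weighting are assumptions):

```python
import torch

def d_p(y_hat, y, p=1, eps=1e-3):
    """Normed log-ratio distance of Eq. (16)."""
    num = (y_hat - y).abs().pow(p).sum() + eps
    den = y.abs().pow(p).sum() + eps
    return 10 * torch.log10(num / den)

def l1snr_loss(s_hat, s, spec_hat, spec):
    """Proposed L1SNR loss of Eq. (20): time-domain signal plus the real
    and imaginary parts of the (normalized) STFT."""
    return (d_p(s_hat, s)
            + d_p(spec_hat.real, spec.real)
            + d_p(spec_hat.imag, spec.imag))
```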
## IV Experimental Setup
### Dataset
Most of the experiments in this work will focus on the Divide and Remaster (DnR) dataset [3]. The DnR dataset is a three-stem dataset consisting of the dialogue, music, and effects stems. Each track is 60 s long, single-channel, and provided at two sample rates of 16 kHz and 44.1 kHz. In this work, we will only focus on the high-fidelity sample rate.
The dialogue data were obtained from LibriVox, a collection of English-language audiobook readings. Music data were taken from the Free Music Archive (FMA). Foreground and background effects data were taken from FSD50k. As mentioned in CDX [5], the dialogue data is not as diverse as real motion picture audio, due to the lack of emotional and linguistic diversity. Dialogue data diversity is particularly an issue when seeking high-fidelity speech sampled at 44.1 kHz and above; our own initial attempt to augment the DnR dataset with more languages and emotions required unexpectedly significant effort and was deferred to future work.
### Chunking
Since each track of the DnR dataset is relatively long, the tracks were chunked during training and inference. During training, random 6 s chunks of the tracks are drawn on the fly. During validation, chunks were drawn exhaustively with a length of 6 s and a hop size of 1 s. During testing, we chunk the full signal into 6 s chunks with a hop size of 0.5 s. Inference is performed independently on each chunk before they are recombined with Hann-windowed overlap-add. The 6 s chunk size was originally chosen for compatibility with the original BSRNN implementation. It was also the largest chunk size we could fit into an NVIDIA A10G GPU with a per-GPU batch size of at least two, as a per-GPU batch size of one caused significant instability during backpropagation.
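The inference-time chunking can be sketched as follows (a simplified single-channel illustration; `separate_fn` stands for one model forward pass, and tail handling is omitted for brevity):

```python
import numpy as np

def chunked_inference(signal, separate_fn, fs=44100, chunk_s=6.0, hop_s=0.5):
    """Run separation on 6 s chunks and recombine with Hann overlap-add."""
    chunk, hop = int(chunk_s * fs), int(hop_s * fs)
    window = np.hanning(chunk)
    out = np.zeros_like(signal, dtype=float)
    norm = np.zeros_like(signal, dtype=float)
    for start in range(0, max(len(signal) - chunk, 0) + 1, hop):
        seg = signal[start:start + chunk]
        est = separate_fn(seg)                       # model inference per chunk
        out[start:start + chunk] += window[:len(seg)] * est
        norm[start:start + chunk] += window[:len(seg)]
    return out / np.maximum(norm, 1e-8)              # guard near chunk edges
```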
### Training
Unless otherwise stated, all models were trained using an Adam optimizer for 100 epochs. The learning rate is initialized to \(10^{-3}\) with a decay factor of 0.98 every two epochs. Norm-based gradient clipping was additionally enabled with a threshold of 5. Each training epoch is set to 20 k samples regardless of the dataset size.
As additional points of comparison, we trained our adaptation of the Hybrid Demucs [23] and Open-Unmix (umxhqt-like) [17] for the 3-stem problem. The loss function for each model follows that of the respective original paper, while the data processing is identical to our proposed method. BandIt, BSRNN, and Demucs models were trained on a g5.48xlarge Amazon EC2 instance with 8 NVIDIA A10G GPUs (24 GB each). Training was done with PyTorch Lightning using a distributed data-parallel strategy with a batch size of 2 per GPU. The Open-Unmix model was trained on a g4dn.4xlarge Amazon EC2 instance with a single NVIDIA T4 GPU (16 GB) with a batch size of 16. BandIt models each took roughly 1.5 days to complete 100 epochs of training.
### Metrics
In this work, we report the signal-to-noise ratio (SNR) and scale-invariant SNR (SI-SNR) [2]. Note that the commonly reported signal-to-distortion ratio (SDR) and its scale-invariant counterpart (SI-SDR) are mathematically identical to SNR and SI-SNR, respectively, when the appropriate version of SDR is used [2]. To avoid ambiguity, we will simply report the "SNR" and the "SI-SNR".
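For reference, the two metrics can be computed as below (a NumPy sketch following the definitions in [2]; function names are ours):

```python
import numpy as np

def snr_db(est, ref, eps=1e-12):
    """Signal-to-noise ratio in dB."""
    return 10 * np.log10((ref ** 2).sum() / (((est - ref) ** 2).sum() + eps))

def si_snr_db(est, ref, eps=1e-12):
    """Scale-invariant SNR: rescale the reference to the least-squares
    projection of the estimate before computing the SNR."""
    est, ref = est - est.mean(), ref - ref.mean()
    alpha = (est * ref).sum() / ((ref ** 2).sum() + eps)
    return snr_db(est, alpha * ref, eps)
```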
## V Results and Discussion
The main experimental results (Sections V-A through V-D) are presented in Table 1. In addition to our proposed method, we trained and evaluated our own baselines with Open-Unmix [17] and Hybrid Demucs (a.k.a. Demucs v3) [23] on DnR. Results for the MRX and MRX-C models are reproduced as-is from [4] and are marked with \(\triangle\) to indicate so. We also provide oracle results based on the mixture, the ideal ratio mask, and the phase-sensitive filter [48].
### _Reducing Time-Frequency Modeling Complexity_
The first modification made to the original BSRNN (BSRNN-LSTM12) was to reduce the complexity of the time-frequency modeling module. Switching from LSTM to GRU and cutting the stack size down from 12 pairs to 8 pairs (BSRNN-GRU8) showed nearly no changes to the performance on average. While the GRU-based model performed slightly worse for dialogue, it performed better with effects than the LSTM-based modules. This switch allowed us to significantly cut down the parameters by almost 40 %, while also reducing the considerable memory footprint during backpropagation. For this experiment, we used the Vocals V7 band definition from the original paper, which was used for both the "vocals" and "other" stem in MUSDB18, hence making it the most appropriate multi-purpose band definition for this analysis.
### _Common Encoder_
The next modification was to merge the encoder section, that is, all modules up to and including the TF modeling module, into a shared system for all stems. This further cut the parameters down by 45 % from BSRNN-GRU8. Again, the performance of this common-encoder model (BandIt) is still very similar to either BSRNN system on average. More interestingly, the performance in the effects stem increased by about 1 dB compared to BSRNN-LSTM12, but this is also accompanied by a drop of about 1 dB in dialogue stem performance. This seems to indicate that there is a slight competition in dynamically allocating information from three stems into the shared embedding. Qualitatively, however, speech is known to be easier to detect and semantically segment than effects due to the former being less acoustically diverse and more bandlimited on average. As such, since the speech performance at around 13 dB is closer to the oracle performance, we consider the improvement in the effects stem performance of higher importance.
### _Loss Function_
The next experiment is concerned with choosing the most appropriate loss function for the system. We experimented with 4 loss functions: the L1 loss, the mean squared error (MSE) loss, the proposed L1SNR loss, and the 2-norm ablation (L2SNR) of L1SNR. All loss functions were applied in the time domain, the real part of the spectrogram, and the imaginary part of the spectrogram, as in (20). Note that the distance function used in L2SNR is practically identical to the commonly used negative SNR loss.
Training on the L1SNR loss achieved the highest performance, with at least 0.7 dB higher performance compared to the L1 and L2SNR losses across all stems; the latter two performed similarly across all stems. The MSE loss performed worst as expected, given that it has the weakest sparsity-encouraging property of the four losses. The order of the performance corroborates our analyses in Section III-F, but more thorough experiments will be needed in a separate work to fully verify our hypothesis.
### _Band Definitions_
We look into the five proposed overlapping-band definitions. For each band type, we experimented with 48-band and 64-band variants. The 48-band variant has a larger input bandwidth per band but fewer neurons provisioned per linear frequency. Overall, the 64-band version consistently outperformed the corresponding 48-band counterpart of the same band type. Mel, TriBark, and ERB models tend to perform similarly. The similarity in performance between the three band types is not too surprising, given the similarity in both their nonlinear frequency transforms and filterbanks (see also Fig. 2). In a 64-band setting, all band types performed better than the ideal ratio mask in the dialogue stem. In both 48- and 64-band settings, the musical band performed the best. We hypothesize that this is due to its underlying musical scale containing significantly more nonlinear-frequency units in the lower linear-frequency region than the other three scales, thus more channel capacity was provisioned to the information-dense lower linear-frequency region.
For the best model at 100 epochs (Music 64), we let the model continue to train until the validation loss no longer improves for 20 epochs. This was achieved at epoch 278, with a total training time of about 4.3 days. Per-epoch improvements after the first 100 epochs were very small, but accumulated to about 0.5 dB improvement across all stems after the additional 178 epochs. The performance of this model (BandIt+) is also shown in Table 1.
### _Generalizability_
We additionally tested the generalizability of the feature map learned by the encoder. This is done by freezing the encoder from the BandIt model with 64 musical bands and attaching a new randomly initialized decoder for an output stem that was not directly learned in the original 3-stem training. We first tested the generalizability on an "easier" task of obtaining the music-and-effects stem. Using the sum of the original music and effects stem outputs, the SNR and SI-SNR are at 13.9 dB and 13.7 dB, respectively. Training a new decoder for the composite stem achieves a slightly better output at 14.1 dB for SNR and 13.9 dB for SI-SNR.
Next, we trained new decoders on completely unseen music data from MUSDB18-HQ [38]5. Note that MUSDB18 provides stereo data and the encoder was only trained on mono signals, so each channel of the music data was passed through the encoder independently. Despite only being trained to separate music as a whole without caring about its constituent instrumentals, the representations from the frozen encoder were sufficient to train decoders that are on par in performance with Open-Unmix, as shown in Table 2.
Footnote 5: The use of MUSDB18 here is strictly for the demonstration of model generalizability, and will not be used commercially.
### _Computational Complexity_
While the BandIt models have achieved state-of-the-art performance with lower overall complexity than BSRNN, it is important to note that the inference-time FLOP count of a 64-band BandIt remains significantly higher than that of Hybrid Demucs, despite the latter having a higher parameter count, partially due to the RNN-heavy backbone of BandIt. Using 6-second chunk inputs on a machine with an Intel Core i9-11900K CPU and an NVIDIA GeForce RTX 3090 GPU, Demucs processed about 17.0 chunks per second on GPU while BandIt did so at about 8.7 chunks per second. On CPU, Demucs ran at about 1.1 chunks per second, while BandIt ran at about 0.3 chunks per second. The peak memory usage of BandIt, at about 650 MB, is slightly higher than that of Demucs, at about 550 MB.
## VI Conclusion
In this work, we propose BandIt, a generalization of the Bandsplit RNN to any complete or overcomplete partition of the frequency axis. By also introducing a shared encoder, a 1-norm SNR-like loss function, and psychoacoustically motivated band definitions, BandIt achieves state-of-the-art performance in CASS with fewer parameters than the original BSRNN or Hybrid Demucs. Future work includes more in-depth analysis of the behavior of the proposed loss function, deriving more information-theoretically optimal band definitions, and extending the work to more realistic audio data with more emotional, linguistic, and spatial diversity.
## Acknowledgment
The authors would like to thank Jordan Gilman, Kyle Swanson, Mark Vulfson, and Pablo Delgado for their assistance.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{**Model**} & & & & \multicolumn{2}{c}{**Dialogue**} & \multicolumn{2}{c}{**Music**} & \multicolumn{2}{c}{**Effects**} & \multicolumn{2}{c}{**Averaged**} \\ \hline
**Backbone** & **Encoder** & **Bands** & **Loss** & **Params.** & **GFlops** & **SNR** & **SI-SNR** & **SNR** & **SI-SNR** & **SNR** & **SI-SNR** & **SNR** & **SI-SNR** \\ \hline BSRNN-LSTM12 & Separate & Vocals V7 & T+RTTF L1 & 77.4M & 1386.5 & 14.2 & 14.0 & 6.3 & 5.2 & 7.0 & 5.9 & 9.2 & 8.4 \\ BSRNN-GRU8 & Separate & Vocals V7 & T+RTTF L1 & 47.4M & 714.5 & 14.0 & 13.9 & 6.4 & 5.2 & 7.2 & 6.2 & 9.2 & 8.4 \\ \hline BandIt & Shared & Vocals V7 & T+RTTF L1 & 25.7M & 243.2 & 13.3 & 13.0 & 6.4 & 5.3 & 7.8 & 6.9 & 9.2 & 8.4 \\ & Vocals V7 & T+RTTF MSE & 25.7M & 243.2 & 12.5 & 12.2 & 5.5 & 4.1 & 7.0 & 6.0 & 8.3 & 7.4 \\ & Vocals V7 & T+RTTF L1SNR & 25.7M & 243.2 & 14.2 & 14.0 & 7.2 & 6.3 & 8.5 & 7.8 & 10.0 & 9.4 \\ & Vocals V7 & T+RTTF L2SNR & 25.7M & 243.2 & 13.5 & 13.3 & 6.5 & 5.4 & 7.9 & 7.1 & 9.3 & 8.6 \\ \cline{2-13} & Bark 48 & T+RTTF L1SNR & 64.5M & 290.6 & 14.1 & 14.0 & 7.3 & 6.3 & 8.6 & 7.8 & 10.0 & 9.4 \\ & Mel 48 & T+RTTF L1SNR & 32.8M & 274.3 & 14.5 & 14.3 & 7.5 & 6.6 & 8.8 & 8.1 & 10.3 & 9.7 \\ & TriBark 48 & T+RTTF L1SNR & 32.7M & 274.2 & 14.6 & 14.5 & 7.6 & 6.7 & 8.9 & 8.2 & 10.4 & 9.8 \\ & ERB 48 & T+RTTF L1SNR & 32.6M & 274.2 & 14.6 & 14.4 & 7.7 & 6.8 & 8.9 & 8.5 & 10.4 & 9.8 \\ & Music 48 & T+RTTF L1SNR & 33.5M & 274.7 & 14.8 & 14.6 & 7.9 & 7.1 & 9.2 & 8.5 & 10.6 & 10.1 \\ \cline{2-13} & Mel 64 & T+RTTF L1SNR & 36.1M & 363.6 & 14.8 & 14.7 & 7.9 & 7.1 & 9.1 & 8.5 & 10.6 & 10.1 \\ & TriBark 64 & T+RTTF L1SNR & 36.0M & 363.5 & 15.0 & **14.9** & 8.0 & 7.2 & 9.2 & 8.6 & 10.8 & 10.2 \\ & Bark 64 & T+RTTF L1SNR & 82.6M & 387.6 & 15.0 & **14.9** & 8.1 & 7.3 & **9.3** & 8.6 & 10.6 & **10.3** \\ & Music 64 & T+RTTF L1SNR & 37.0M & 364.1 & **15.1** & **14.9** & **8.2** & **7.4** & **9.3** & **8.7** & **10.9** & **10.3** \\ \hline BandIt+ & Shared & Music 64 & T+RTTF L1SNR & 37.0M & 364.1 & 15.7 & 15.6 & 8.7 & 8.0 & 9.8 & 8.2 & 11.4 & 10.9 \\ \hline Open-Unmix (umxhq) & TF Mag. MSE & 22.1M & 5.7 & 11.6 & 11.3 & 4.9 & 3.2 & 5.8 & 4.4 & 7.4 & 6.3 \\ MRX\({}^{\Delta}\) & Time SI-SDR & N/R & N/R & — & 12.3 & — & 4.2 & — & 5.7 & — & 7.4 \\ MRX-C\({}^{\Delta}\) & Time SI-SDR & N/R & N/R & — & 12.6 & — & 4.6 & — & 6.1 & — & 7.8 \\ Hybrid Demucs (v3) & Time L1 & 83.6M & 85.0 & 13.6 & 13.4 & 6.0 & 4.7 & 7.2 & 6.1 & 8.9 & 8.1 \\ \hline _Mixture_ & & — & — & 1.0 & 1.0 & -6.8 & -6.8 & -5.0 & -5.0 & -3.6 & -3.6 \\ _Ideal Ratio Mask_ & & — & — & — & 14.4 & 14.6 & 9.0 & 8.4 & 11.0 & 10.7 & 11.5 & 11.2 \\ _Phase Sensitive Filter_ & & — & — & — & 18.5 & 18.4 & 12.9 & 12.7 & 15.0 & 14.8 & 15.4 & 15.3 \\ \hline \hline \end{tabular}
\end{table} TABLE 1: **Model performance on the DnR test set. Floating-point operation count is based on a 6-second input at 44.1 kHz.**
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model & Vocals & Drums & Bass & Other & Average \\ \hline BandIt (Music 64, frozen enc.) & 5.5 & **6.4** & **4.4** & **3.6** & **5.0** \\ Open-Unmix (umxhq) & **6.0** & 5.6 & **4.4** & 3.4 & 4.9 \\ \hline \hline \end{tabular}
\end{table} TABLE 2: **SNR (dB) Performance on MUSDB18-HQ Test Set.** |
2305.02834 | Strategic flip-flopping in political competition | We study candidates' positioning when adjustments are possible in response to
new information about voters' preferences. Re-positioning allows candidates to
get closer to the median voter but is costly both financially and electorally.
We examine the occurrence and the direction of the adjustments depending on the
ex-ante positions and the new information. In the unique subgame perfect
equilibrium, candidates anticipate the possibility to adjust in response to
future information and diverge ex-ante in order to secure a cost-less victory
when the new information is favorable. | Gaëtan Fournier, Alberto Grillo, Yevgeny Tsodikovich | 2023-05-04T13:50:22Z | http://arxiv.org/abs/2305.02834v1 | # Strategic flip-flopping in political competition+
###### Abstract
We study candidates' positioning when adjustments are possible in response to new information about voters' preferences. Re-positioning allows candidates to get closer to the median voter but is costly both financially and electorally. We examine the occurrence and the direction of the adjustments depending on the ex-ante positions and the new information. In the unique subgame perfect equilibrium, candidates anticipate the possibility to adjust in response to future information and diverge ex-ante in order to secure a cost-less victory when the new information is favorable.
_JEL Classification_: C72, D72, D82
_Keywords_: Spatial voting, Imperfect information.
## 1 Introduction
In the run-up to an election, candidates largely rely on polls to learn about voting intentions. Uncovering the electorate's leanings, what is known as the _policy mood_(Stimson, 1991; Stevenson, 2001), helps them shape their campaign message but may also pose a difficult choice: should they change their positions to get closer to voters? In the long run, it is well documented that politicians adapt their stances to reflect the evolving preferences of their constituencies (Glazer and Robbins, 1985; Stratmann, 2000; Miler, 2016). Yet, policy changes that are too sudden involve substantial costs, in terms of not only communication efforts but also electoral appeal. In the history of the U.S. presidential elections, for example, the defeats of John Kerry in 2004 and Mitt Romney in 2012 are often linked to their shifting views on salient issues1.
Footnote 1: Specifically concerning Kerry’s opposition to the war in Iraq after his previous support and Romney’s multiple shifts, notably on abortion (Croco, 2016).
This paper studies re-positioning choices as a strategic game between candidates. We assume that voters prefer candidates with platforms aligned with their ideal policies, but dislike _flip-floppers_, i.e. candidates who strategically change their positions during the campaign. We examine which candidates are more likely to adjust their positions following new information, in which direction, and how successfully. We also investigate how the anticipation of possible changes affects candidates' positions ex-ante, before the information is revealed.
A theory of policy adjustments is useful in light of the bias of the empirical evidence. As pointed out by Tomz and Van Houweling (2012), _"historical data [...] reveal the consequences of re-positioning only in the specific circumstances when politicians thought re-positioning would be optimal"_. Our analysis of the involved trade-offs aims to clarify what these specific circumstances are.
**Model.** We enrich the Downs-Hotelling framework with an information shock that creates a two-stage game. The shock reveals the location of the median voter. This captures the idea that voters' aggregate preferences fluctuate over time and that their current leanings are disclosed during the electoral campaign. In the model, two office-motivated candidates first select their positions before the shock, knowing only the distribution of the median voter's location. They can then revise their positions after the shock (after learning the actual location). We assume that such a change involves both an _electoral cost_ - voters' discount their evaluation of a candidate who changed position - and an _organizational cost_ - the candidate's payoff is reduced due to the policy change.
On the one hand, the electoral cost represents voters' negative feeling toward a flip-flop. This aversion may be interpreted as a higher uncertainty about what the candidate would do if elected, i.e. to an undermined credibility of the position (Enelow and Munger, 1993). Alternatively, voters may value consistency on policy issues as a cue for character (Kartik and McAfee, 2007), or as a signal for quality of implementation of the ex-post policy. We model voters' dislike for flip-flops in a reduced form, through a penalty in their utility if a candidate changes position.
On the other hand, the organizational cost represents all other costs involved by a change of position which are not related to voters' reaction. The most prominent is the organizational and financial cost for candidates of communicating the change to the voters. While justifying a policy change to the public may reduce its electoral harm, such communication requires costly advertising.
**Results.** We study the subgame-perfect equilibria of the game, using backward induction. In the second stage, if the revealed information is not clearly in favor of one candidate, the adjustment choices determine the winner of the election. The advantaged candidate would like to adjust only if the other candidate threatens the advantage by also moving, while the disadvantaged candidate would like to adjust only if his opponent does not. The only equilibrium is therefore in mixed strategies, with both candidates flip-flopping with positive probability and having a chance to win the election. If instead the revealed information strongly favors one candidate, there are no incentives to flip-flop as the election cannot be disputed anymore.
In the first stage, candidates anticipate their strategic responses to the information shock. The game has at most one subgame-perfect equilibrium, which exists if both the electoral and the organizational costs are sufficiently high. In such equilibrium, candidates contradict the median voter theorem: they choose differentiated platforms to secure a cost-less victory when the information is in their favor.
**Contribution.** Assuming sufficiently high adjustment costs, our model yields several implications concerning the occurrence of flip-flops. First, along the equilibrium path, repositioning happens only toward the center, while candidates never adjust toward more extreme positions. This highlights a _moderation effect_, according to which candidates cultivate separate electorates when elections are far in time and then soften their positions during the campaign. In the U.S., such an effect is often attributed to the presence of primary elections, in which candidates need to convince a more extreme median voter (Agranov, 2016). Our framework provides a different rationale for a similar dynamic, which can play out even in the absence of primary elections.
A second prediction is that flip-flops consist mostly of small adjustments made by an advantaged candidate in order to secure his victory. Only a minority of flip-flops are large adjustments made by a disadvantaged candidate who seeks to reverse the likely outcome of the election. Indeed, on the one hand, when the favorite candidate adjusts his position, the magnitude of the adjustment is smaller than when his opponent adjusts. On the other hand, we find that a candidate favored by the new information is more likely to adjust his position than a disadvantaged candidate. Hence, an adjustment by the advantaged candidate guarantees his victory but an adjustment by the challenger is more likely to be unsuccessful.
Finally, we provide comparative statics results with respect to changes in the adjustment costs. We find that an increase in the electoral cost decreases the polarization of candidates but increases their equilibrium payoffs. Indeed, such an increase makes flip-flopping less likely, because it increases the probability that the election is secured after the information shock, which guarantees the favorite candidate a cost-less victory.
**Extension.** We look at how the results are modified by an asymmetry between candidates. Asymmetry in organizational costs has no impact on the results. Asymmetry in electoral costs makes the more flexible candidate choose a more central platform, while the less flexible candidate offers a more polarized platform. We show that a candidate who is significantly less flexible in adjusting his position loses the election in equilibrium, even if he is favored by the information on voters. In the general asymmetric game, payoffs are decreasing in candidates' own electoral cost, but increasing in the opponent's electoral cost.
**Literature.** Following the seminal work by Hotelling (1929) and Downs (1957), most models of political competition assume that candidates can freely commit to any electoral platform. At the extreme opposite, some papers take campaign announcements as cheap-talk and assume that, once elected, candidates act according to their own preferences (Alesina, 1988; Osborne and Slivinski, 1996; Besley and Coate, 1997).
We lay down a more realistic framework, in which platforms are binding although not immutable, and candidates can costly adjust them over time. The evidence that politicians respond in an adaptive way to changes in voters' preferences is substantial in political science, see Stimson et al. (1995), Adams et al. (2004), and Kousser et al. (2007). Karol (2009) argues that elite replacement is not necessary for parties' policy changes, which are often driven by incumbent politicians. Adams et al. (2006) find that mainstream parties engage in re-positioning more than niche parties, while Tavits (2007) relates the likelihood of success to whether the change concerned pragmatic or principled issues. This literature has a strong empirical focus and recently turned also to experimental settings (Tomz and Van Houweling, 2012; Doherty et al., 2016; Robison, 2017). We see our theoretical investigation as a useful complement to this existing research.
Key to our analysis is the assumption that policy changes are costly. Models by Bernhardt and Ingberman (1985), Ingberman (1989) and Enelow and Munger (1993) consider a voters' utility function which is decreasing in the size of a change in candidates' policies. In these studies, the loss is derived from the increased uncertainty concerning the policy that will be effectively implemented. Evidence of an intrinsic preference of voters for consistency is also discussed as a "waffle effect" in political psychology (Carlson and Dolan, 1985). Hoffman and Carver (1984) found that even proposing a policy in agreement with the voters' preferences may not be rewarded if it follows an inconsistent track-record. Our specification is consistent with DeBacker (2015), who finds empirically that the electoral cost is increasing in the size of the change. The organizational cost, instead, captures in a reduced form the idea that informative advertising directed towards voters is costly for candidates (Coate, 2004; Ashworth, 2006). Our model abstracts away the role of interest groups in financing political campaigns, and assumes for simplicity that candidates bear communications costs themselves.2
Footnote 2: See Prato and Wolton (2019) for a similar assumption.
Given the result of ex-ante divergence, our paper belongs to a class of models in which policy differentiation is adopted to soften competition in a second dimension. The baseline argument is familiar in industrial organization models, in which firms differentiate their products in order to reduce the subsequent competition in prices (Tirole, 1988; Shaked and Sutton, 1982). In political competition, Ashworth and Bueno De Mesquita (2009) and Zakharov (2009) examine the reduced need for differentiated candidates to invest in valence. Eyster and Kittsteiner (2007) consider parties as collections of candidates. Parties choose general platforms first and candidates can deviate from their party's platform at a cost, in order to increase their chances in their specific constituency. Divergence of parties' platforms results from the incentive to minimize the aggregate cost of candidates' re-positioning for the party. In Balart et al. (2022), candidates diverge to minimize future costly advertising, which would be needed to impress voters if the ideological divide were low. We focus instead on candidates' choice of changing their own position over time, when doing this is not only financially costly but also penalized electorally. While ex-ante divergence arises analogously to these previous models in the literature, our framework allows us to highlight the emerging patterns of flip-flopping along the (subgame-perfect) equilibrium.
The paper is more loosely related to three other research lines. First, flip-flopping has been investigated in two-stage models with primary elections (Hummel, 2010; Agranov, 2016). Although we do not have primary elections, our model exhibits a similar dynamic of moderation during the campaign stage. Second, the paper shares with Kamada and Sugaya (2020) an interest in the dynamic aspect of policy announcements during the campaign stage. In their paper, however, the opportunity to revise a position arises stochastically in a revision game a la Kamada and Kandori (2020), and the focus is on candidates' reactions to each others' announcements. Finally, policy adjustments in our model can be seen as a pandering phenomenon. However, such pandering only describes a spatial movement and ignores considerations of optimality from a common-value perspective, addressed in Maskin and Tirole (2004) and Andreottola (2021).
**The rest of the paper is organized as follows.** Section 2 presents the model and Section 3 the equilibrium analysis and the results. We study the case of asymmetric candidates in Section 4. Technical proofs and lemmas are postponed to the appendix.
## 2 The Model
There are two office-seeking candidates (namely, candidate 1 and 2) and a continuum of voters that we identify with their ideal policy \(t\in\mathbb{R}\). The location of the median voter \(m\) is drawn according to a uniform distribution on a compact interval, w.l.o.g. fixed to \([0,1]\), and is revealed during the campaign. Candidates choose their platforms twice, both before and after the reveal of the median voter's position. We denote the ex-ante platforms \(x_{1}\) and \(x_{2}\), and the ex-post platforms \(y_{1}\) and \(y_{2}\).
We interpret the choice of the ex-post platforms as the adjustment that candidates may make with respect to their previous positions after learning the electorate's preferences more precisely. The choice of the ex-ante platforms reflects instead the positions in which candidates invest over the long term, knowing that future revisions are allowed but costly.
Voter \(t\)'s utility from a victory of candidate \(i\) depends on both the candidate's ex-ante and the ex-post platform according to the following functional form
\[u_{t}(x_{i},y_{i})=-(t-y_{i})^{2}-a(y_{i}-x_{i})^{2}. \tag{1}\]
The first term represents voters' preference for a candidate whose final policy is closer to their ideal one. The second term represents how voters penalize candidates for changing their platform with respect to their previous position. In our model, voters trust that policy \(y_{i}\) will be implemented if candidate \(i\) wins, hence do not care about the distance between \(t\) and \(x_{i}\). Yet, they have an intrinsic preference for candidates who are consistent. The parameter \(a>0\) measures the relative effect of this electoral penalty and thus how voters trade off policy considerations with their dislike for a candidate changing position.
The election is decided by majority rule. Because voters' preferences are single-peaked and their utilities have the same functional form, the candidate that attracts the median voter wins the election. The payoff of the candidates depends on both the outcome of the election and the possible organizational cost of a policy change. The benefit from winning the election is set to 1 and that from losing to 0. In addition, a candidate that changed his policy, by selecting \(y_{i}\neq x_{i}\), pays a fixed cost \(\phi\in(0,\frac{1}{2})\).3 We interpret \(\phi\) as the cost induced by the need to communicate the change of policy to the public. Although greater changes may require more substantial communication strategies, we abstract from the dependency of the organizational cost on the magnitude of the change for technical simplicity. Our assumption of a fixed \(\phi\) captures the presence of some fixed cost, which is likely present independently of the magnitude of the change. The payoff of candidate \(i\) is then
Footnote 3: We assume that the cost \(\phi\) is smaller than \(\frac{1}{2}\), which is the expected gain from competing in the election since players are symmetric. If \(\phi>0.5\), saving the organizational costs is more important than competing for the election.
\[g_{i}(\mathbf{x},\mathbf{y})=\mathbb{E}\left(\mathbb{1}_{i\ \mathrm{wins}}-\phi\,\mathbb{1}_{y_{i}\neq x_{i}}\right),\]
where \(\mathbb{1}_{i\ \mathrm{wins}}\) is the indicator that candidate \(i\) wins the election, and the expectation is taken with respect to the position of the median voter \(m\) and the possibly mixed actions of the players. Ties are broken by a toss of a fair coin, so in case of a tie, \(\mathbb{E}(\mathbb{1}_{i\ \mathrm{wins}})=\frac{1}{2}\).
To summarize, the timing of the game is as follows.
1. **First Stage:** The two candidates choose ex-ante platforms \(x_{1}\) and \(x_{2}\).
2. **Information Shock:** The position of the median voter \(m\) is revealed.
3. **Second Stage:** The two candidates choose ex-post platforms \(y_{1}\) and \(y_{2}\).
4. **The Voting:** The candidate preferred by the median voter is elected and candidates' payoffs are realized.
## 3 Equilibrium analysis
We are interested in the subgame-perfect equilibrium of the game. Hence, we solve the game using backward induction and start by studying the strategic flip-flopping of candidates after their observation of the ex-ante platforms \(x_{1},x_{2}\) and the location of the median voter \(m\) (Propositions 1 and 2). The analysis shows that two scenarios are possible in the second stage: either the advantage of one candidate is too large and he is sure to win, or the election is still open and the outcome depends on candidates' reactions. In the first case, it is optimal for both candidates not to change platforms and remain at their ex-ante positions. In the second case, both candidates mix between moving to an ex-post optimal position and not moving.
Next, given the strategies in the second stage, we analyze the optimal choice of ex-ante platforms. We show that for sufficiently high values of the costs \((a,\phi)\), the unique subgame-perfect equilibrium requires candidates to invest in divergent ex-ante positions (Proposition 3). Candidates differentiate in order to maximize the chances of having a sufficient advantage in the second stage, for which they save on the organizational cost of changing policy. In the remaining region of the parameter space (see Figure 3), a subgame-perfect equilibrium does not exist but an \(\epsilon\)-equilibrium exists in which candidates take centrist positions one \(\epsilon\) away from each other (Proposition 4). Finally, we summarize the implications in terms of flip-flopping behavior that emerge from our equilibrium analysis (Proposition 5 and 6).
### 3.1 Second Stage: Choice of Ex-Post Platforms \(y_{1}\), \(y_{2}\)
We take \(x_{1}\), \(x_{2}\) and \(m\) as given and consider separately the cases where the ex-ante platforms are the same or different, as the analysis in these two cases differs significantly.
#### 3.1.1 Different Ex-Ante Platforms \(x_{1}\neq x_{2}\)
Let us assume first \(x_{1}\neq x_{2}\) and consider, without loss of generality, the case \(x_{1}<x_{2}\). Given \(m\), we refer to the candidate with ex-ante platform closer to \(m\) as the _favorite_ candidate.
**Definition 1**.: _Candidate \(i\) is the favorite and candidate \(j\) is the challenger if \(|x_{i}-m|<|x_{j}-m|\)._
The intuition for the term _favorite_ is that, if candidates do not change their platforms, the favorite candidate \(i\) wins the election as \(u_{m}(x_{i},x_{i})>u_{m}(x_{j},x_{j})\). We ignore the case where \(m=\frac{x_{1}+x_{2}}{2}\), which occurs with probability zero.
In the case where \(m=x_{1}\), candidate 1 can win the election without changing his platform and incurring the organizational cost \(\phi\), as the median voter has the highest possible utility from his victory. Candidate 2, on the other hand, loses the election regardless of his ex-post platform, so it is optimal for both candidates not to move and save on the organizational cost. This argument remains true when \(m\neq x_{1}\) but is close enough to \(x_{1}\): candidate 1 still wins the election without changing his platform, and candidate 2 also does not change his platform as he loses the election anyway. In this case, we say that the favorite candidate has _secured_ the election.
**Definition 2**.: _The election is secured for the favorite candidate \(i\) if \(u_{m}(x_{i},x_{i})>\max_{y\in[0,1]}u_{m}(x_{j},y)\). Otherwise, the election is still open._
A favorite candidate who has secured the election has a strictly dominant strategy \(y_{i}=x_{i}\), by which he wins without paying the organizational cost \(\phi\). The challenger also has a strictly dominant strategy \(y_{j}=x_{j}\), as he cannot win the election and should save the organizational cost. Hence, if the election is secured, the strategies \(y_{1}=x_{1}\), \(y_{2}=x_{2}\) constitute the unique equilibrium of the second stage subgame. The equilibrium payoffs are 1 for the favorite candidate and 0 for the challenger. In Lemma 1 in the appendix, we solve the inequality that appears in Definition 2 and show that the election is secured for candidate 1 whenever \(m\in(\underline{m},\overline{m})\), where
\[\underline{m}:=\tfrac{\alpha x_{1}-x_{2}}{\alpha-1}\lor 0,\qquad\overline{m}:= \tfrac{\alpha x_{1}+x_{2}}{\alpha+1},\]
and where
\[\alpha=\sqrt{\tfrac{1+a}{a}} \tag{2}\]
is a useful parameter. Similarly, the election is secured for candidate 2 whenever \(m\in(\underline{n},\overline{n})\) where
\[\underline{n}:=\tfrac{\alpha x_{2}+x_{1}}{\alpha+1},\qquad\overline{n}:= \tfrac{\alpha x_{2}-x_{1}}{\alpha-1}\wedge 1.\]
Figure 1 summarizes the regions where each candidate has secured the election, and those in which the election is still open. We observe that the election is secured if \(m\) realizes in a neighborhood of each candidate's ex-ante platform \(x_{i}\).
Figure 1: An illustration of the status of each player being the favorite while the election is secured (FS) or still open (FO) before choosing the ex-post platform, with respect to the position of the median voter \(m\in[0,1]\).
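The case analysis above can be summarized in a small sketch (Python; purely illustrative, with names of our own choosing):

```python
def second_stage_regime(x1, x2, m, a):
    """Classify the post-shock subgame (x1 < x2 assumed; notation of Sec. 3.1)."""
    alpha = ((1 + a) / a) ** 0.5
    m_lo = max((alpha * x1 - x2) / (alpha - 1), 0)
    m_hi = (alpha * x1 + x2) / (alpha + 1)
    n_lo = (alpha * x2 + x1) / (alpha + 1)
    n_hi = min((alpha * x2 - x1) / (alpha - 1), 1)
    if m_lo < m < m_hi:
        return "secured for candidate 1"
    if n_lo < m < n_hi:
        return "secured for candidate 2"
    favorite = 1 if abs(x1 - m) < abs(x2 - m) else 2
    return f"still open, candidate {favorite} is the favorite"
```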
If the election is still open, the favorite candidate wins the election if neither candidate changes his platform. However, he can lose the election if he keeps the platform while the challenger moves closer to the median voter. Hence, the challenger wants to move if the favorite does not change his platform. Analogously, the favorite also wants to move if the challenger moves, in order to secure his victory. In Lemma 2 in the appendix, we show that in this case both candidates have an optimal platform \(\hat{y_{i}}\) which they want to adopt if they decide to change their ex-ante platform. The optimal platform is given by
\[\hat{y_{i}}=\frac{m+ax_{i}}{1+a}. \tag{3}\]
This optimal platform is a weighted average between the ex-ante platform \(x_{i}\) and the realization of the median voter location \(m\). For each candidate, the platform \(\hat{y_{i}}\) is optimal in the sense that all other platforms different from \(x_{i}\) are either dominated or redundant, as proved also in Lemma 2. We can represent the ex-post game between the two candidates by the following \(2\times 2\) one-shot game, assuming that each candidate chooses between moving to \(\hat{y_{i}}\) and not moving. Without loss of generality, we consider candidate 1 as the favorite.
If both candidates move to their optimal platforms, candidate 1 wins the election: his ex-ante platform is closer to the median voter, so his ex-post platform is both closer to the median voter and requires a smaller adjustment \(|\hat{y_{1}}-x_{1}|<|\hat{y_{2}}-x_{2}|\). We observe that the favorite candidate wants to take the same action as the challenger, while the challenger wants to take the opposite action as the favorite. Hence, such a game has no equilibrium in pure strategies but has a unique equilibrium in mixed strategies. At this equilibrium, the favorite changes his platform with probability \((1-\phi)\) and the challenger changes his platform with probability \(\phi\). The expected equilibrium payoffs are \(1-\phi\) for the favorite candidate and 0 for the challenger.
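For completeness, these probabilities follow from the standard indifference conditions. Write \(p\) for the probability that the favorite plays \(\hat{y_{1}}\) and \(q\) for the probability that the challenger plays \(\hat{y_{2}}\). The favorite obtains \(1-\phi\) from moving regardless of \(q\) and \(1-q\) from staying, while the challenger obtains \(p\cdot(-\phi)+(1-p)(1-\phi)\) from moving and \(0\) from staying, so that

\[1-\phi=1-q\;\Rightarrow\;q=\phi,\qquad p\cdot(-\phi)+(1-p)(1-\phi)=0\;\Rightarrow\;p=1-\phi.\]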
This discussion is summarized in the following proposition.
**Proposition 1** (Equilibrium in the second stage subgame for \(x_{1}\neq x_{2}\)).: _Suppose that different ex-ante platforms \(x_{1}\neq x_{2}\) were chosen. The reaction of the candidates to the revelation of \(m\) is:_
\begin{table}
\begin{tabular}{c c c c} & & \multicolumn{2}{c}{Challenger candidate 2} \\ & & \(\hat{y_{2}}\) & \(x_{2}\) \\ \cline{3-4} Favorite candidate 1 & \(\hat{y_{1}}\) & \((1-\phi,-\phi)\) & \((1-\phi,0)\) \\ \cline{3-4} & \(x_{1}\) & \((0,1-\phi)\) & \((1,0)\) \\ \hline \end{tabular}
\end{table}
Table 1: The normal form game that candidates face if the election is still open.
* _If the favorite candidate has secured the election, the unique subgame equilibrium is_ \((y_{1},y_{2})=(x_{1},x_{2})\)_. The equilibrium payoffs are_ \(1\) _for the favorite and_ \(0\) _for the challenger._
* _If the election is still open, the unique subgame equilibrium is in mixed strategies:_ \[y_{i}=\begin{cases}\hat{y_{i}}&\text{with probability }1-\phi,\\ x_{i}&\text{with probability }\phi,\end{cases}\] _for the favorite candidate_ \(i\) _and_ \[y_{j}=\begin{cases}\hat{y_{j}}&\text{with probability }\phi,\\ x_{j}&\text{with probability }1-\phi,\end{cases}\] _for the challenger_ \(j\)_. The equilibrium payoffs are_ \((1-\phi)\) _for the favorite and_ \(0\) _for the challenger._
Proof.: See Appendix A.2.
If the election is secured, the uniqueness of the equilibrium is clear, as \((x_{1},x_{2})\) are strictly dominant strategies. Otherwise, the uniqueness concerns the equilibrium payoff but not the equilibrium strategies, as in some configurations other strategies might be redundant to the strategies \(\hat{y_{i}}\) and constitute an equilibrium with the same payoffs. As shown in Lemma 2, disregarding these strategies is without loss of generality because it does not impact the payoffs nor the analysis.
The comparison of the subgame equilibrium payoffs when the election is secured or open illustrates the inefficiency of the second scenario. When the election is still open, the favorite gets \((1-\phi)\) and the challenger \(0\) (in expectation), which are the same payoffs as if the challenger conceded but the favorite candidate still paid the organizational cost to change his policy.
#### 3.1.2 Identical Ex-Ante Platforms \(x_{1}=x_{2}\)
In the case of identical ex-ante platforms, the election is always open and there is neither a favorite nor a challenger, which breaks down the analysis from Proposition 1. As before, both candidates have an optimal platform \(\hat{y_{i}}\) where they want to move if they do. This optimal position is again equal to \(\hat{y_{i}}=\frac{m+ax_{i}}{1+a}\) and is now identical for both candidates. Hence, if both candidates move or both do not move, each has a probability of \(\frac{1}{2}\) to win the election. We can then restrict our attention to the ex-post game given by the \(2\times 2\) matrix in Table 2.
Since \(\phi<\frac{1}{2}\), we have \(\frac{1}{2}-\phi>0\) and \(1-\phi>\frac{1}{2}\). It follows that keeping the ex-ante platform is a strictly dominated strategy and in the unique equilibrium both candidates choose platforms \(\hat{y_{1}}=\hat{y_{2}}\). The above discussion proves the following Proposition:
**Proposition 2** (Equilibrium in the second stage of the game for \(x_{1}=x_{2}\)).: _Suppose that ex-ante, the identical platforms \(x_{1}=x_{2}\) were chosen. The unique subgame equilibrium in the second stage is \((\hat{y_{1}},\hat{y_{2}})\) (given by Eq. (3)). The equilibrium payoffs are \((\frac{1}{2}-\phi,\frac{1}{2}-\phi)\)._
Note that \(\frac{1}{2}-\phi\) is the worst possible equilibrium payoff in this game. Indeed, no matter the strategy of his opponent, candidate \(i\) can select \(x_{i}=\frac{1}{2}\) and \(y_{i}=\hat{y_{i}}\). By doing so he wins the election with probability of at least \(\frac{1}{2}\), while paying the organizational cost \(\phi\).
### 3.2 First Stage: Choice of Ex-Ante Platforms \(x_{1}\), \(x_{2}\)
By backward induction, candidates choose their ex-ante platforms by considering that the subgame equilibrium given by Proposition 1 or 2 is played in the second stage. In the first stage, we restrict attention to pure strategies \(x_{i}\) for both candidates.
A first result is that there cannot exist a subgame-perfect equilibrium in which candidates select the same position in the first stage. If the ex-ante positions were identical, each candidate would have a profitable deviation by playing an infinitesimally different ex-ante position, since such a deviation induces a better equilibrium payoff after the second stage. We prove this claim within the proof of Proposition 4 below but anticipating this result allows us to focus on ex-ante platforms that properly define a favorite and a challenger, as in Section 3.1.1.
\begin{table}
\begin{tabular}{c c|c|c} & & \multicolumn{2}{c}{Candidate 2} \\ & & \(\hat{y_{2}}\) & \(x_{2}\) \\ \cline{3-4} \multirow{2}{*}{Candidate 1} & \(\hat{y_{1}}\) & \((\frac{1}{2}-\phi,\frac{1}{2}-\phi)\) & \((1-\phi,0)\) \\ \cline{3-4} & \(x_{1}\) & \((0,1-\phi)\) & \((\frac{1}{2},\frac{1}{2})\) \\ \end{tabular}
\end{table}
Table 2: The normal form game that candidates face if the ex-ante platforms are identical.
The next proposition shows that if \(\phi\) and \(a\) are sufficiently large, then the ex-ante platforms diverge from the center in the unique subgame-perfect equilibrium. The intuition behind this result is that, conditional on being the favorite, candidates prefer the election to be secured rather than still open, when the position of \(m\) is revealed. Divergence occurs because the probability of a secured election is proportional to the distance between the ex-ante platforms \(|x_{2}-x_{1}|\). More precisely, given the unique subgame equilibrium described in Proposition 1, each candidate \(i\)'s expected payoff is:
\[1\times\mathbb{P}(i\text{ is the favorite and the election is secured})+\\ +(1-\phi)\times\mathbb{P}(i\text{ is the favorite and the election is still open})+0 \tag{4}\]
where the probability \(\mathbb{P}\) represents the randomization of the median voter's location and the \(0\) is the expected payoff if \(i\) is the challenger. On the one hand, players prefer to be the favorite, which creates an incentive to move towards the opposing candidate in order to increase the likelihood of being the closest candidate to \(m\). On the other hand, being the favorite is not the only concern of the candidates, because they also want to maximize the probability of the election being secured conditional on being the favorite. The fact that the probability that the election is secured for the favorite is proportional to the distance between ex-ante platforms \(|x_{2}-x_{1}|\) creates an incentive to diverge. Indeed, by differentiating their platforms, candidates create "secure electorates" - regions of the policy space which guarantee a victory of a candidate if the median voter is revealed in such a region without the need of a costly adjustment of platform. This centrifugal force is to be traded off with the centripetal force and prevails in equilibrium if the condition on \(\phi\) in Proposition 3 holds.
**Proposition 3** (Differentiation of ex-ante platforms).: _Suppose that \(\phi>\frac{1}{1+4\sqrt{a(1+a)}}\) and without loss of generality that \(x_{1}\leq x_{2}\). In the unique subgame-perfect equilibrium, candidates' ex-ante platforms are_
\[(x_{1}^{*},x_{2}^{*})=\left(\frac{1}{\alpha+1},\frac{\alpha}{\alpha+1}\right) \tag{5}\]
_where \(\alpha=\sqrt{\frac{1+a}{a}}\), and the ex-post behavior is according to Propositions 1 and 2. The equilibrium expected payoffs are_
\[g_{1}^{*}=g_{2}^{*}=\frac{1}{2}-\frac{\phi}{2}\left(\frac{\alpha-1}{\alpha+1} \right)^{2} \tag{6}\]
Proof.: See Appendix A.3.
In these equilibrium payoffs, the first term represents the expected gain of the election and the second term represents the expected organizational cost. Indeed, in equilibrium the election is still open with probability \(\left(\frac{\alpha-1}{\alpha+1}\right)^{2}\) and in this case each player adjusts with probability \(1-\phi\) or \(\phi\) depending on whether he is the favorite or not, so he pays the cost \(\phi\) with probability \(\frac{1}{2}\phi+\frac{1}{2}(1-\phi)=\frac{1}{2}\).
Figure 2 shows the intervals on the policy space in which the election is secured for the favorite candidate or still open at the subgame-perfect equilibrium, which depend only on the electoral cost \(a\) through the parameter \(\alpha\).
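The equilibrium objects of Proposition 3 are straightforward to compute (an illustrative Python sketch; names are ours):

```python
def spe_summary(a, phi):
    """Equilibrium platforms and payoffs of Prop. 3; returns None in the
    region of Prop. 4 where only an epsilon-equilibrium exists."""
    alpha = ((1 + a) / a) ** 0.5
    if phi <= 1 / (1 + 4 * (a * (1 + a)) ** 0.5):
        return None
    x_star = (1 / (alpha + 1), alpha / (alpha + 1))   # Eq. (5)
    p_open = ((alpha - 1) / (alpha + 1)) ** 2          # election still open
    payoff = 0.5 - 0.5 * phi * p_open                  # Eq. (6)
    return {"platforms": x_star, "p_open": p_open, "payoff": payoff}
```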
The following proposition shows that if the condition on \(a\) and \(\phi\) in Proposition 3 is reversed, a subgame-perfect equilibrium does not exist. Instead, we prove the existence of an \(\epsilon\)-equilibrium, at which neither candidate can unilaterally improve his payoff by more than \(\epsilon\), for every \(\epsilon>0\).
**Proposition 4**.: _If \(0<\phi<\frac{1}{1+4\sqrt{a(1+a)}}\), there does not exist a subgame-perfect equilibrium. There exists instead an \(\epsilon\)-equilibrium, given by \((x_{1},x_{2})=(\frac{1}{2}-\epsilon,\frac{1}{2}+\epsilon)\) for every \(\epsilon>0\) small enough._
Proof.: See Appendix A.4.
The intuition for the result is that if the organizational cost of changing platform is sufficiently small, candidates prefer to converge towards the center to increase the chances of being the favorite candidate, even if this decreases the probability of a secured election. However, the centripetal incentive stops when candidates take the same position because of the discontinuity in the payoff when \(x_{1}=x_{2}\). Indeed, if candidates have minimally differentiated platforms, each has an advantage in the second stage if the median voter is located on his side of the policy space. Formally, by converging fully to \(x_{1}=x_{2}=\frac{1}{2}\) each candidate obtains an expected payoff equal to \(\frac{1}{2}-\phi\). Instead, by minimally differentiating from the center and choosing \(x_{1}=\frac{1}{2}-\epsilon\), \(x_{2}=\frac{1}{2}+\epsilon\) they both obtain a higher expected payoff, which converges to \(\frac{1}{2}-\frac{\phi}{2}\) as \(\epsilon\) goes to \(0\).
The condition on the parameters for the existence of a subgame-perfect equilibrium is drawn in Figure 3 in the \((a,\phi)\) plane. The region dealt with in Proposition 3 is designated by \(R_{0}\) and formally defined as \(\{(a,\phi)|\Psi(a)<\phi<\frac{1}{2}\}\) where
\[\Psi(a)=\frac{1}{1+4\sqrt{a(1+a)}}. \tag{7}\]
Figure 2: The status of each player being the favorite when the election is secured (FS) or still open (FO) at the subgame-perfect equilibrium described by Proposition 3, with respect to the position of the median voter \(m\).
In the region named \(R_{1}\), only an \(\epsilon\)-equilibrium exists.
### Adjustments along the equilibrium path
In light of our analysis, we can now put forward a few implications concerning the occurrence of candidates' policy changes. We focus on the parameter region in which the electoral and organizational costs are high enough, and thus a subgame-perfect equilibrium with divergent ex-ante positions exists.
**Proposition 5**.: _Suppose that \(\phi>\frac{1}{1+4\sqrt{a(1+a)}}\). At the subgame-perfect equilibrium,_
1. _candidates flip-flop towards voters' preferences only in the direction of the center;_
2. _the favorite candidate is more likely to flip-flop than the challenger;_
3. _when the favorite candidate flip-flops, the magnitude of the adjustment is smaller than when the challenger flip-flops;_
4. _flip-flopping is always successful when done by a favorite candidate, while it is more likely to be unsuccessful than successful for a challenger._
Proof.: See Appendix A.5.
Figure 3: The two regions \(R_{0}\) and \(R_{1}\) in the \(a-\phi\) space. The colored region represents the area \(R_{0}\) concerned with Proposition 3.
Property (i) describes a dynamic of moderation along the electoral campaign, according to which candidates start out with more extreme positions and converge to the center if and once the median voter is revealed in the center. The property also implies that candidates never cross over the position of the opponent. As such, even flip-flopping candidates always keep their relative ideological stands, despite the fact that they are office seekers and have no preferred policy.
The intuition behind property (ii) relies on the mixed strategies used by candidates at equilibrium: the favorite is indifferent between flip-flopping or not only when the probability to lose the election is relatively small, that is when his opponent is less likely to flip-flop. Instead, the disadvantaged candidate is indifferent only when the victory is relatively unlikely, that is when the favorite secures his advantage with high probability.
Property (iv) means that a flip-flopping favorite always wins the election in equilibrium, while a flip-flopping challenger loses whenever the favorite also flip-flops, i.e. with probability \(1-\phi>\frac{1}{2}\). Taken together, these properties suggest that campaign flip-flopping most often consists of a minor adjustment by the favorite candidate in order to consolidate his victory, and only less often is it a major and risky move by the challenger who tries to reverse the election outcome. The simulation sketched below illustrates these frequencies.
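A small Monte Carlo sketch of the second-stage play at the equilibrium of Proposition 3; the parameters \(a=0.5\), \(\phi=0.45\) and the sample size are illustrative assumptions. The estimates should be close to the theoretical values \(1-\phi\) (favorite's flip-flop rate), \(\phi\) (challenger's flip-flop rate), and \(\phi\) (success rate of a challenger's flip-flop).

```python
# Monte Carlo illustration of Proposition 5 (a sketch; parameters are
# illustrative, not calibrated values).
import math, random

random.seed(0)
a, phi = 0.5, 0.45
alpha = math.sqrt((1 + a) / a)
x1, x2 = 1 / (alpha + 1), alpha / (alpha + 1)

fav_flips = chal_flips = chal_wins_by_flip = opens = 0
N = 100_000
for _ in range(N):
    m = random.random()
    # Secured intervals from Lemma 1 (at equilibrium underline{m}=0, overline{n}=1):
    # the election is secured, and nobody moves, unless m lies between them.
    if m < (alpha * x1 + x2) / (alpha + 1) or m > (alpha * x2 + x1) / (alpha + 1):
        continue
    opens += 1
    fav_moves = random.random() < 1 - phi    # favorite adjusts w.p. 1 - phi
    chal_moves = random.random() < phi       # challenger adjusts w.p. phi
    fav_flips += fav_moves
    chal_flips += chal_moves
    chal_wins_by_flip += chal_moves and not fav_moves
print(opens / N, fav_flips / opens, chal_flips / opens,
      chal_wins_by_flip / max(chal_flips, 1))
```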
The next proposition focuses on the comparative statics properties with respect to the electoral cost parameter \(a\).
**Proposition 6**.: _When the electorate is more tolerant towards candidates changing positions, that is when \(a\) decreases,_
1. _the candidates' ex-ante policies are more polarized;_
2. _the election is more likely to be still open after the revelation of_ \(m\)_, hence each candidate is more likely to flip-flop;_
3. _the candidates' equilibrium payoffs are lower._
Proof.: See Appendix A.6.
The comparative statics analysis highlights two main phenomena. First, candidates prefer an intransigent electorate that is less tolerant of flip-flopping. The underlying mechanism is described in claim (ii): more tolerant electorates lead to increased competition, with more elections remaining open after the revelation of the median. As a result, winning the election without flip-flopping becomes less likely, leading to smaller equilibrium payoffs.
Additionally, while the electoral cost is a necessary ingredient in Proposition 3 to obtain differentiation, the degree of polarization decreases as the electoral cost increases. Indeed, in equilibrium, candidates push their secured intervals to the limits of the policy space, so that \(\underline{m}=0\) and \(\overline{n}=1\), and they position themselves in the middle of these intervals. However, when \(a\) is small, these intervals shrink due to increased competition (the challenger can more easily flip-flop to reverse the election), causing the centers of these intervals to shift closer to the extremities; the degree of candidate polarization is thus larger. The short sketch below tabulates this effect.
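The following Python sketch tabulates the comparative statics of Proposition 6; the chosen \(a\)-values and \(\phi=0.45\) are illustrative assumptions (all four pairs satisfy \(\phi>\Psi(a)\)).

```python
# Comparative statics of Proposition 6 (a sketch; the a-values are
# illustrative and phi = 0.45 satisfies phi > Psi(a) for each of them).
import math

for a in (2.0, 1.0, 0.5, 0.1):
    alpha = math.sqrt((1 + a) / a)
    polarization = (alpha - 1) / (alpha + 1)      # x2* - x1*
    p_open = polarization ** 2                    # prob. election still open
    payoff = 0.5 - (0.45 / 2) * p_open            # Eq. (6) with phi = 0.45
    print(f"a={a:4.1f}  x2*-x1*={polarization:.3f}  "
          f"P(open)={p_open:.3f}  g*={payoff:.3f}")
```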
## 4 Asymmetric candidates
In this section, we relax the symmetry between candidates by supposing that voters penalize each candidate differently for changing platform, via candidate-specific values of the parameter \(a\).4 Thus, the utility of voter \(t\) from voting for candidate \(i\) with ex-ante and ex-post platforms \(x_{i}\) and \(y_{i}\) is
Footnote 4: A similar analysis can be made to consider different organizational costs \(\phi\), without affecting the results significantly.
\[u_{t}^{i}(x_{i},y_{i}):=-(t-y_{i})^{2}-a_{i}(y_{i}-x_{i})^{2} \tag{8}\]
as opposed to Eq. (1).
An interesting phenomenon results from this heterogeneity between candidates: for a range of possible realizations of the median voter's location, the favorite candidate is not guaranteed to win the election even if he adjusts his position. Indeed, the optimal adjustments are still given by weighted averages between the ex-ante platforms and the realization of the median voter, namely \(\hat{y_{i}}=\frac{m+a_{i}x_{i}}{1+a_{i}}\), but because the weights are different for each candidate, the challenger might attract the median voter after both candidates adjust. Suppose, for example, that \(a_{1}<a_{2}\) and that platforms \(x_{1}<x_{2}\) were chosen ex-ante. If \(m\) realizes sufficiently close to \(\frac{x_{1}+x_{2}}{2}\) then candidate 1 can adjust his position closer to \(m\) than candidate 2, thanks to his smaller electoral cost, and win the election. This is true even if \(m\) is larger than \(\frac{x_{1}+x_{2}}{2}\), i.e. if candidate 2 is the favorite according to Definition 1. This justifies the following definition:
**Definition 3**.: _Candidate \(i\) is a weak favorite and candidate \(j\) is a strong challenger if \(|x_{i}-m|<|x_{j}-m|\) but \(u_{m}^{i}(x_{i},\hat{y}_{i})<u_{m}^{j}(x_{j},\hat{y}_{j})\)._
In other terms, a weak favorite candidate wins the election when no candidate adjusts, but loses the election when both candidates adjust their platforms.
More precisely, in the (second-stage) game played by a weak favorite and a strong challenger, both candidates have a strictly dominant action: the strong challenger adjusts his position and wins the election with certainty, while the weak favorite does not move. Hence \((\hat{y_{1}},x_{2})\) is the unique equilibrium of this game. The equilibrium payoffs are given by \((1-\phi,0)\) and are identical to the case in which candidate \(1\) is the favorite and the election is still open in the symmetric game.
For \(a_{1}<a_{2}\), by solving \(u_{m}^{2}(x_{2},\hat{y}_{2})<u_{m}^{1}(x_{1},\hat{y}_{1})\) with respect to \(m\), we find that candidate \(2\) is a weak favorite when \(m\in(\frac{x_{1}+x_{2}}{2},\tilde{m})\), in which \(\tilde{m}=\frac{\alpha_{1}x_{2}+\alpha_{2}x_{1}}{\alpha_{1}+\alpha_{2}}\) and \(\alpha_{i}=\sqrt{\frac{1+a_{i}}{a_{i}}}\). This region is represented in light blue in Figure 4.
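The following sketch checks Definition 3 numerically at an illustrative (non-equilibrium) platform pair; all parameter values are assumptions chosen for illustration.

```python
# A sketch of the weak-favorite region (a1 < a2; the platform and cost
# values are illustrative assumptions).
import math

a1, a2 = 0.3, 1.0
x1, x2 = 0.3, 0.7
al1, al2 = (math.sqrt((1 + a) / a) for a in (a1, a2))
mid = (x1 + x2) / 2
m_tilde = (al1 * x2 + al2 * x1) / (al1 + al2)

def u(m, x, a):                        # median voter's utility from candidate
    y = (m + a * x) / (1 + a)          # with optimal adjustment y = (m+ax)/(1+a)
    return -(m - y) ** 2 - a * (y - x) ** 2

m = (mid + m_tilde) / 2                # a point inside (mid, m_tilde)
assert abs(x2 - m) < abs(x1 - m)       # candidate 2 is the favorite ...
assert u(m, x1, a1) > u(m, x2, a2)     # ... but loses once both adjust
print(mid, m_tilde)
```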
The analysis of the second stage in all other regions (when the favorite is not weak) is the same in the asymmetric game as in the symmetric game of the previous sections. The next proposition describes the subgame-perfect equilibrium considering both stages of the game.
**Proposition 7** (Equilibrium with asymmetric electoral costs).: _Assume that \(a_{1}<a_{2}\) and that \(\phi>\max(\Psi(a_{1}),\Psi(a_{2}))\). There exists a unique subgame-perfect equilibrium, in which the ex-ante locations are given by_
\[(x_{1}^{*},x_{2}^{*})=\left(\frac{\alpha_{1}-1}{\alpha_{1}\alpha_{2}-1},\frac {\alpha_{2}(\alpha_{1}-1)}{\alpha_{1}\alpha_{2}-1}\right)\]
_where \(\alpha_{i}=\sqrt{\frac{1+a_{i}}{a_{i}}}\)._
_The ex-post behavior depends on \(m\):_
* _For_ \(m\in\left[\frac{x_{1}^{*}+x_{2}^{*}}{2},\tilde{m}\right]\) _where_ \(\tilde{m}=\frac{\alpha_{1}x_{2}^{*}+\alpha_{2}x_{1}^{*}}{\alpha_{1}+\alpha_{2} }=\frac{\alpha_{2}(\alpha_{1}^{2}-1)}{(\alpha_{1}\alpha_{2}-1)(\alpha_{1}+ \alpha_{2})}\)_, candidate_ \(1\) _adjusts to_ \(\hat{y}_{1}\) _and wins the election, whereas candidate_ \(2\) _remains at_ \(x_{2}^{*}\) _and loses the election._
Figure 4: The status of each player being the favorite when the election is secured (FS) or still open (FO) at the subgame-perfect equilibrium described by Proposition 7, with respect to the position of the median voter \(m\), for \(a_{1}<a_{2}\). In the light blue region, candidate \(2\) is the weak favorite (WF)
* _Otherwise, it follows Proposition_ 1_._
_If \(\phi<\max(\Psi(a_{1}),\Psi(a_{2}))\), there is no subgame-perfect equilibrium._
Proof.: See Appendix A.7
While the ex-ante positions were symmetric around \(\frac{1}{2}\) in the case of identical electoral costs, we now have that
\[\frac{x_{1}^{*}+x_{2}^{*}}{2}=\frac{1}{2}+\frac{\alpha_{1}-\alpha_{2}}{2( \alpha_{1}\alpha_{2}-1)} \tag{9}\]
hence, the candidate with a lower \(a\) (a higher \(\alpha\)) takes a more centrist ex-ante position and has a higher expected payoff. Expected payoffs in equilibrium are modified with respect to the expressions in Eq. (6) by the possibility for the favorite to be weak, as follows:
\[\begin{split}& g_{1}=1\times\left[\tfrac{2\alpha_{2}(\alpha_{1}-1)}{(\alpha_{1}\alpha_{2}-1)(\alpha_{2}+1)}\right]+(1-\phi)\times\left[\tfrac{(\alpha_{1}-1)(\alpha_{1}\alpha_{2}+\alpha_{2})}{(\alpha_{1}\alpha_{2}-1)(\alpha_{1}+\alpha_{2})}-\tfrac{2\alpha_{2}(\alpha_{1}-1)}{(\alpha_{1}\alpha_{2}-1)(\alpha_{2}+1)}\right]\\ & g_{2}=1\times\left[1-\tfrac{(\alpha_{1}\alpha_{2}+1)(\alpha_{1}-1)}{(\alpha_{1}\alpha_{2}-1)(\alpha_{1}+1)}\right]+(1-\phi)\times\left[\tfrac{(\alpha_{1}\alpha_{2}+1)(\alpha_{1}-1)}{(\alpha_{1}\alpha_{2}-1)(\alpha_{1}+1)}-\tfrac{\alpha_{2}(\alpha_{1}^{2}-1)}{(\alpha_{1}\alpha_{2}-1)(\alpha_{1}+\alpha_{2})}\right]\end{split} \tag{10}\]
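A numerical evaluation of Eqs. (9) and (10); the values \(a_{1}=0.3\), \(a_{2}=1.0\), \(\phi=0.45\) are illustrative assumptions satisfying \(\phi>\max(\Psi(a_{1}),\Psi(a_{2}))\).

```python
# Evaluating Eq. (9)-(10) numerically (a sketch with illustrative parameters).
import math

a1, a2, phi = 0.3, 1.0, 0.45
psi = lambda a: 1 / (1 + 4 * math.sqrt(a * (1 + a)))
assert phi > max(psi(a1), psi(a2))     # condition of Proposition 7

al1, al2 = (math.sqrt((1 + a) / a) for a in (a1, a2))
x1 = (al1 - 1) / (al1 * al2 - 1)
x2 = al2 * (al1 - 1) / (al1 * al2 - 1)
g1 = 2 * al2 * (al1 - 1) / ((al1 * al2 - 1) * (al2 + 1)) + (1 - phi) * (
    (al1 - 1) * (al1 * al2 + al2) / ((al1 * al2 - 1) * (al1 + al2))
    - 2 * al2 * (al1 - 1) / ((al1 * al2 - 1) * (al2 + 1)))
g2 = 1 - (al1 * al2 + 1) * (al1 - 1) / ((al1 * al2 - 1) * (al1 + 1)) + (1 - phi) * (
    (al1 * al2 + 1) * (al1 - 1) / ((al1 * al2 - 1) * (al1 + 1))
    - al2 * (al1 ** 2 - 1) / ((al1 * al2 - 1) * (al1 + al2)))
print(x1, x2, g1, g2)   # candidate 1 (lower a) is more centrist and g1 > g2
```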
With respect to Proposition 5, if candidates are asymmetric, it is still true that flip-flopping happens only towards the center. It is no longer true, in general, that a favorite candidate is more likely to flip-flop than a challenger or that the magnitude of his adjustment is smaller. The validity of these statements depends now on whether the favorite is weak or not and on the exact difference in the electoral costs \(a_{i}\). Flip-flopping is now always successful both for a favorite candidate (recall that a weak favorite never flip-flops in equilibrium) and for a strong challenger, while it remains more likely unsuccessful than successful for a challenger who is not strong.
The introduction of heterogeneous electoral costs also allows us to refine the comparative statics properties in Proposition 6. We can deduce from Eq. (9) and Eq. (10) that, when voters are more tolerant towards a change of position by candidate \(i\) (i.e. if \(a_{i}\) decreases for a given \(a_{j}\)), the ex-ante position of \(i\) is more centrist and his expected payoff is higher, while the ex-ante position of \(j\) is more extreme and his expected payoff is lower. Hence, while in the symmetric case, candidates are better off when the electorate penalizes flip-flopping more, in the asymmetric case candidates are better off only if a higher electoral cost concerns the opponent. In principle, a higher electoral cost could be beneficial for a candidate by reducing the likelihood of adjusting the position and paying the organizational cost. We find instead that, in equilibrium, each candidate would prefer voters to be more tolerant with himself. Indeed, the region in which a candidate \(i\) has a secured election is independent of \(a_{i}\) and only depends on the electoral cost \(a_{j}\) for the opponent. Hence,
any increase in \(a_{i}\) simply decreases the likelihood of \(i\) being in a better position (non-weak favorite or strong challenger) in a still-open election, and it is therefore detrimental.
## Appendix A Proofs
### Useful Lemmas
In this section we present results which we use in the paper. Most of these results follow from simple computations and are omitted from the main text for ease of reading.
**Lemma 1**.: _Suppose \(x_{1}\neq x_{2}\) and without loss of generality \(x_{1}<x_{2}\). Candidate \(1\) has secured the election if and only if \(m\in(\underline{m},\overline{m})\) with_
\[\underline{m}:=\tfrac{\alpha x_{1}-x_{2}}{\alpha-1}\lor 0\]
\[\overline{m}:=\tfrac{\alpha x_{1}+x_{2}}{\alpha+1}\]
_candidate \(2\) has secured the election if and only if \(m\in(\underline{n},\overline{n})\) with_
\[\underline{n}:=\tfrac{\alpha x_{2}+x_{1}}{\alpha+1}\]
\[\overline{n}:=\tfrac{\alpha x_{2}-x_{1}}{\alpha-1}\wedge 1\]
Proof.: Candidate \(1\) has secured the election when \(u_{m}^{1}(x_{1},x_{1})>\max\limits_{y}u_{m}^{2}(x_{2},y)\). After simplification, we find that this is the case when:
\[(m-x_{2})^{2}>\frac{1+a}{a}(m-x_{1})^{2}\]
A case-by-case analysis (according to whether \(m<x_{1}\), \(m\in[x_{1},x_{2}]\), or \(m>x_{2}\)) gives that candidate \(1\) has secured the election if \(m\in(\tfrac{\alpha x_{1}-x_{2}}{\alpha-1},\tfrac{\alpha x_{1}+x_{2}}{\alpha+1})\). Since \(m\) has support in \([0,1]\), this amounts to \(m\in(\underline{m},\overline{m})\). The computations are symmetric for candidate \(2\). One easily verifies that \(\underline{m}<x_{1}<\overline{m}<\tfrac{x_{1}+x_{2}}{2}<\underline{n}<x_{2}<\overline{n}\).
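The secured interval of Lemma 1 can be verified numerically; the cost and platform values in this sketch are illustrative assumptions.

```python
# A numeric sanity check of Lemma 1 (a sketch). Candidate 1 has secured the
# election iff even the best possible adjustment of candidate 2 cannot
# attract the median voter.
import math

a, x1, x2 = 0.5, 0.3, 0.8
alpha = math.sqrt((1 + a) / a)
m_lo = max((alpha * x1 - x2) / (alpha - 1), 0.0)
m_hi = (alpha * x1 + x2) / (alpha + 1)

def best_u2(m):                        # max_y u_m^2(x2, y), attained at y2_hat
    y = (m + a * x2) / (1 + a)
    return -(m - y) ** 2 - a * (y - x2) ** 2

for m in (m_lo + 1e-4, (m_lo + m_hi) / 2, m_hi - 1e-4, m_hi + 1e-2):
    secured = -(m - x1) ** 2 > best_u2(m)   # u_m^1(x1, x1) vs. best reply of 2
    print(f"m={m:.4f}  secured={secured}")  # True inside (m_lo, m_hi) only
```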
**Lemma 2**.: _Consider the ex-post platform that maximizes the utility of the median voter from candidate \(i\):_
\[\hat{y_{i}}=\operatorname*{arg\,max}\limits_{y_{i}}u_{m}^{i}(x_{i},y_{i})=\tfrac{m+ax_{i}}{1+a}\]
_We have:_
1. _If candidate_ \(i\) _is the favorite, any platform_ \(y_{i}\notin\{x_{i},\hat{y_{i}}\}\) _is either dominated by or redundant with a strategy in_ \(\{x_{i},\hat{y_{i}}\}\)_._
2. _If candidate_ \(i\) _is the challenger, any platform_ \(y_{i}\notin\{x_{i},\hat{y_{i}}\}\) _is weakly dominated by the platform_ \(\hat{y_{i}}\)_._
3. _Weakly dominated actions are not played in equilibrium._
_Hence, at equilibrium, candidates select \(\hat{y_{i}}\) when they adjust their platforms._
Proof.: Suppose w.l.o.g. that candidate \(1\) is the favorite. For any pair of ex-ante platforms \((x_{1},x_{2})\), we define \(W_{1}:=\{y_{1}|u_{m}^{1}(x_{1},y_{1})>\max_{y_{2}}u_{m}^{2}(x_{2},y_{2})\}\), the set of policies of candidate \(1\) that guarantee that he wins the election.
1. By definition, \(\hat{y_{1}}\in W_{1}\). In addition, if \(x_{1}\in W_{1}\), any action \(y_{1}\neq x_{1}\) is strictly dominated by \(x_{1}\), because \(x_{1}\) yields a payoff of \(1\) while any other action yields at most \(1-\phi\).
If \(x_{1}\notin W_{1}\), any action \(y_{1}\in W_{1},y_{1}\neq\hat{y_{1}}\) is redundant with \(\hat{y_{1}}\) and any action \(y_{1}\notin W_{1},y_{1}\neq x_{1}\) is weakly dominated by \(\hat{y_{1}}\). Indeed, consider \(y\in W_{1}\backslash\{\hat{y_{1}}\}\). Both \(y\) and \(\hat{y_{1}}\) ensure a victory with payoff \(1-\phi\) regardless of the platform of candidate \(2\). Hence, \(y\) is redundant with \(\hat{y_{1}}\).
Consider instead \(y\notin W_{1},y\neq x_{1}\). Consider strategy \(\sigma\) that chooses \(y\) with positive probability and strategy \(\sigma^{\prime}\) which is identical to \(\sigma\), except that it plays \(\hat{y_{1}}\) instead of \(y\). Then the payoff of candidate \(1\) with \(\sigma^{\prime}\) is weakly higher than with \(\sigma\) regardless of the strategy of candidate \(2\). Indeed, the expected payoff when using \(y\) includes a term of the form \(p-\phi\), where \(p\) is the probability to win when using \(y\) against the strategy of candidate \(2\), while the corresponding term when using \(\hat{y_{1}}\) is \(1-\phi\), as \(\hat{y_{1}}\) wins with probability \(1\). The other terms in the expected payoff remain the same.
2. Any strategy that chooses \(y\notin\{\hat{y_{2}},x_{2}\}\) with some positive probability is weakly dominated by a strategy that transfers this probability to \(\hat{y_{2}}\). In both cases, \(\phi\) is paid, but \(u_{m}^{2}(x_{2},\hat{y_{2}})>u_{m}^{2}(x_{2},y)\) so \(\hat{y_{2}}\) wins in all the events in which \(y\) wins (and possibly in other events), resulting in a weakly higher payoff.
3. Weakly dominated actions cannot be played in equilibrium. Suppose \(y\notin W_{1},y\neq x_{1}\) and suppose that in equilibrium, candidate \(1\) plays \(y\) with positive probability. Then his expected payoff when choosing \(y\) (which, by indifference, is his equilibrium payoff) is \(p-\phi\) where \(p\) is the probability of winning the election, \(p\leqslant 1\). If \(p<1\), then \(\hat{y_{1}}\) is a profitable deviation for candidate \(1\), as he obtains \(1-\phi\). If \(p=1\), candidate \(2\) always loses the election and his expected payoff is non-positive. Then in equilibrium candidate \(2\) must choose \(y_{2}=x_{2}\), which grants a payoff of \(0\), and candidate \(1\) again has a profitable deviation
to \(y_{1}=x_{1}\) which yields a payoff of \(1\). This contradicts the assumption that \(y\) is played with positive probability in equilibrium.
### Proof of Proposition 1
If the favorite candidate has secured the election, he has a strictly dominant strategy \(y_{i}=x_{i}\), to which the challenger's best reply is \(y_{j}=x_{j}\). Hence this is the unique equilibrium of the subgame.
If the election is still open, by Lemma 2, we can limit the analysis to the \(2\times 2\) game in Table 1. The game has no pure strategy equilibria. The unique mixed strategy equilibrium is such that each candidate's mixed strategy makes his opponent indifferent between the two pure strategies. Hence the probability \(p\) with which the favorite candidate plays \(\hat{y}_{i}\) solves the challenger's indifference condition
\[-p\phi+(1-p)(1-\phi)=0\quad\Rightarrow\quad p=1-\phi,\]
and the probability \(q\) with which the challenger plays \(\hat{y}_{j}\) solves the favorite's indifference condition
\[(1-\phi)=(1-q)\quad\Rightarrow\quad q=\phi.\]
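The two indifference conditions can be solved symbolically; the sketch below assumes sympy is available and encodes the payoffs of the open-election \(2\times 2\) game described above.

```python
# Verifying the second-stage mixed equilibrium of Proposition 1 (a sketch).
# p is the favorite's adjustment probability, q the challenger's.
import sympy as sp

p, q, phi = sp.symbols("p q phi", positive=True)
# Favorite: adjusting wins for sure (payoff 1 - phi); staying wins only if
# the challenger also stays (probability 1 - q).
fav_adjust, fav_stay = 1 - phi, 1 - q
# Challenger: adjusting wins iff the favorite stays; staying always loses.
chal_adjust, chal_stay = (1 - p) - phi, 0
sol = sp.solve([sp.Eq(fav_adjust, fav_stay),
                sp.Eq(chal_adjust, chal_stay)], [p, q])
print(sol)   # {p: 1 - phi, q: phi}
```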
### Proof of Proposition 3
We proceed by backward induction, supposing that after choosing the ex-ante platforms \(x_{1},x_{2}\), players play the second-stage equilibrium provided in Proposition 1 and Proposition 2.
We first show that in equilibrium, the best response to a certain ex-ante platform is never a more extreme platform to the same side. Formally and without loss of generality, if \(x_{2}<\frac{1}{2}\) then \(BR_{1}(x_{2})\geq x_{2}\), and if \(x_{2}>\frac{1}{2}\) then \(BR_{1}(x_{2})\leq x_{2}\), where \(BR_{1}\) is the best response ex-ante platform of Candidate 1 to the ex-ante platform \(x_{2}\) of Candidate 2.
To prove that, we show that if \(x_{1}<x_{2}<\frac{1}{2}\) then \(g_{1}(x_{1},x_{2})<g_{1}(1-x_{1},x_{2})\). By the uniform distribution of \(m\), \(g_{1}(x_{1},x_{2})=g_{1}(1-x_{1},1-x_{2})\). Hence, proving \(g_{1}(x_{1},x_{2})<g_{1}(1-x_{1},x_{2})\) is equivalent to proving that \(g_{1}(x_{1},x_{2})<g_{1}(x_{1},1-x_{2})\). When \(x_{1}<x_{2}<\frac{1}{2}\) and Candidate 2 moves from \(x_{2}\) to \(1-x_{2}\):
* candidate 1 is the favorite more often as \(\frac{x_{1}+x_{2}}{2}\) increases with \(x_{2}\).
* candidate 1 secures the election more often as both \(\overline{m}\) increases and \(\underline{m}\) decreases with \(x_{2}\).
We conclude that the expected payoff \(g_{1}\) of candidate 1 is larger, given \(g_{1}=1\times\mathbb{P}(1\text{ FS })+(1-\phi)\times\mathbb{P}(1\text{ FO })\). We can therefore suppose that \(x_{2}\geq\frac{1}{2}\) and conclude that the best
response of Candidate 1 belongs to \([0,x_{2}]\).5 The case where \(x_{2}\leqslant\frac{1}{2}\) and \(x_{2}\leqslant x_{1}\) is symmetric.
Footnote 5: In the border case where \(x_{2}=\frac{1}{2}\), both sides of \(x_{2}\) are symmetric and for every best response \(x_{1}\) in \([x_{2},1]\), the ex-ante platform \(1-x_{1}\in[0,x_{2}]\) is also a best response.
For \(x_{1}<x_{2}\), in each region of the graph in Figure 1, the payoff is determined by Proposition 1:6
Footnote 6: We can safely ignore null probability events regarding the boundaries, such as \(m=\overline{m}\).
* If \(m\in(\underline{m},\overline{m})\), candidate 1 has secured the election and his second-stage equilibrium payoff is 1.
* If \(m\in[0,\underline{m})\cup(\overline{m},\frac{x_{1}+x_{2}}{2})\), candidate 1 is the favorite, the election is still open, and his second-stage equilibrium payoff is \(1-\phi\).
* If \(m\in(\frac{x_{1}+x_{2}}{2},1]\), candidate 1 is the challenger and his second-stage equilibrium payoff is 0.
Because \(m\) is uniformly drawn on the unit interval, the ex-ante payoff of candidate 1 is
\[g_{1}(x_{1},x_{2})=(\overline{m}-\underline{m})+(1-\phi)\left(\tfrac{x_{1}+x_{2}}{2}-\overline{m}+\underline{m}\right)\]
Substituting the values of \(\overline{m}\) and \(\underline{m}\) from Lemma 1, we have that for \(x_{1}<x_{2}\)
\[g_{1}(x_{1},x_{2})=\begin{cases}x_{1}\left(\frac{1-\phi}{2}+\frac{\phi\alpha}{\alpha+1}\right)+x_{2}\left(\frac{1-\phi}{2}+\frac{\phi}{\alpha+1}\right)&\text{if }0\leqslant x_{1}\leqslant\frac{x_{2}}{\alpha}\\ x_{1}\left(\frac{1-\phi}{2}-\frac{2\phi\alpha}{\alpha^{2}-1}\right)+x_{2}\left(\frac{1-\phi}{2}+\frac{2\phi\alpha}{\alpha^{2}-1}\right)&\text{if }\frac{x_{2}}{\alpha}<x_{1}<x_{2}\end{cases}\]
The payoff function \(g_{1}\) is therefore increasing with \(x_{1}\) in the region \([0,\frac{x_{2}}{\alpha}]\). In the region \([\frac{x_{2}}{\alpha},x_{2})\), the function \(g_{1}\) decreases with \(x_{1}\) when \(\frac{1-\phi}{2}-\frac{2\phi\alpha}{\alpha^{2}-1}<0\), which is equivalent to
\[\phi>\tfrac{\alpha^{2}-1}{\alpha^{2}+4\alpha-1}=\tfrac{1}{1+4\sqrt{a(1+a)}}=\Psi (a)\]
It follows that the best response to \(x_{2}\) for candidate 1 in the region \([0,x_{2})\) is \(x_{1}^{*}(x_{2})=\frac{x_{2}}{\alpha}\). If instead \(x_{1}=x_{2}\), then according to Proposition 2 the payoff \(g_{1}\) is \(\frac{1}{2}-\phi\), regardless of \(x_{2}\). Note that by choosing \(x_{1}=x_{2}-\epsilon\), the payoff \(g_{1}\) would approach \(x_{2}(1-\phi)\) as \(\epsilon\to 0\), which is greater than \(\frac{1}{2}-\phi\) for any \(x_{2}\geqslant\frac{1}{2}\) and \(\phi>0\). Hence \(x_{1}=x_{2}\) cannot be a best reply for candidate 1.
The same arguments apply to candidate 2 for any \(x_{1}\leqslant\frac{1}{2}\). The function \(g_{2}\) is increasing with \(x_{2}\) in the \([x_{1},1-\frac{1-x_{1}}{\alpha}]\) region and decreasing with \(x_{2}\) in the \([1-\frac{1-x_{1}}{\alpha},1]\) region, when \(\phi>\Psi(a).\) Hence, the best response to \(x_{1}\) for candidate 2 in the region \((x_{1},1]\) is \(x_{2}^{*}(x_{1})=1-\frac{1-x_{1}}{\alpha}\). Choosing \(x_{2}=x_{1}\) cannot be optimal as it is dominated by \(x_{2}=x_{1}+\epsilon\).
To conclude, when \(\phi>\Psi(a)\) the function \(g_{1}\) admits a global maximum in \(x_{1}^{*}(x_{2})=\frac{x_{2}}{\alpha}\) and the function \(g_{2}\) admits a global maximum in \(x_{2}^{*}(x_{1})=1-\frac{1-x_{1}}{\alpha}\). In equilibrium, both
candidates best respond to each other, and these two equations provide the unique profile: \(x_{1}^{*}=\frac{1}{\alpha+1}\) and \(x_{2}^{*}=\frac{\alpha}{\alpha+1}\). The payoffs associated with these locations are:
\[g_{1}^{*}=g_{2}^{*}=\frac{1}{2}-\frac{\phi}{2}\left(\frac{\alpha-1}{\alpha+1} \right)^{2}\]
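A quick symbolic confirmation of this fixed point of the two best-response maps, assuming sympy is available (a sketch):

```python
# Solving the pair of best-response equations from Appendix A.3 (a sketch).
import sympy as sp

x1, x2, alpha = sp.symbols("x1 x2 alpha", positive=True)
sol = sp.solve([sp.Eq(x1, x2 / alpha),
                sp.Eq(x2, 1 - (1 - x1) / alpha)], [x1, x2])
print(sp.simplify(sol[x1]), sp.simplify(sol[x2]))  # 1/(alpha+1), alpha/(alpha+1)
```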
### Proof of Proposition 4
According to the proof of Proposition 3, since \(\phi<\Psi(a)\), the payoff \(g_{1}\) is strictly increasing with \(x_{1}\) in the \([0,x_{2})\) region. The unique possible equilibrium is then \(x_{1}=x_{2}\), where the payoff is discontinuous. At the limit \(x_{1}\to x_{2}\), both \(\underline{m}\) and \(\overline{m}\) converge to \(x_{2}\), so:
\[\lim_{x_{1}\to x_{2}}g_{1}(x_{1},x_{2})=\lim_{x_{1}\to x_{2}}\tfrac{x_{1}+x_{ 2}}{2}(1-\phi)+\phi(\overline{m}-\underline{m})=x_{2}(1-\phi)\]
On the other hand, the second stage equilibrium when \(x_{1}=x_{2}\) yields a payoff of \(g_{1}=\frac{1}{2}-\phi\). The condition for equilibrium can then be written \(x_{2}(1-\phi)\leq\frac{1}{2}-\phi\) or equivalently \(x_{2}\leq\frac{\frac{1}{2}-\phi}{1-\phi}\). Because \(0<\phi<\frac{1}{2}\) the right hand side of the previous inequality is smaller than \(\frac{1}{2}\), so \(x_{2}<\frac{1}{2}\). By repeating the same calculation for candidate 2, we obtain that \(x_{1}=x_{2}\) is an equilibrium only when \(x_{1}>\frac{1}{2}\), which cannot hold with \(x_{1}=x_{2}\) and \(x_{2}<\frac{1}{2}\).
We now prove that if \(\phi<\Psi(a)\), then \((x_{1},x_{2})=(\frac{1}{2}-\epsilon,\frac{1}{2}+\epsilon)\) is an \(\epsilon\)-equilibrium. Based on the expression computed in Appendix A.3, candidate 1's payoff is \(g_{1}(\frac{1}{2}-\epsilon,\frac{1}{2}+\epsilon)=(\frac{1}{2}-\epsilon)(\frac{1-\phi}{2}-\frac{2\phi\alpha}{\alpha^{2}-1})+(\frac{1}{2}+\epsilon)(\frac{1-\phi}{2}+\frac{2\phi\alpha}{\alpha^{2}-1})=\frac{1-\phi}{2}+\epsilon\frac{4\alpha\phi}{\alpha^{2}-1}\). On the other hand, because \(\phi<\Psi(a)\), the payoff \(g_{1}(x_{1},\frac{1}{2}+\epsilon)\) is increasing with \(x_{1}\) in the region \([0,\frac{1}{2}+\epsilon)\), so we can bound the attainable payoff of candidate 1 from above by \(\lim_{x_{1}\uparrow\frac{1}{2}+\epsilon}g_{1}(x_{1},\frac{1}{2}+\epsilon)=(\frac{1}{2}+\epsilon)(1-\phi)=\frac{1-\phi}{2}+\epsilon(1-\phi)\). The loss of candidate 1 when playing \(\frac{1}{2}-\epsilon\) is therefore
\[\lim_{x_{1}\uparrow\frac{1}{2}+\epsilon}g_{1}(x_{1},\tfrac{1}{2}+\epsilon)-g_{1}(\tfrac{1}{2}-\epsilon,\tfrac{1}{2}+\epsilon)=\epsilon\left(1-\phi-\frac{4\alpha\phi}{\alpha^{2}-1}\right)\to 0\text{ as }\epsilon\to 0.\]
### Proof of Proposition 5
(i) Based on Propositions 1 and 3, the election is open if and only if \(m\in\left[\frac{2\alpha}{(\alpha+1)^{2}},\frac{\alpha^{2}+1}{(\alpha+1)^{2}}\right]\). On the other hand, \(x_{1}^{*}\) is on the left of this interval and \(x_{2}^{*}\) on the right. Because candidate \(i\) eventually flip-flops to \(\hat{y_{i}}=\frac{m+ax_{i}^{*}}{1+a}\), which lies strictly between \(x_{i}^{*}\) and \(m\), he flip-flops towards the center.
(ii) At the mixed equilibrium played in the second stage, if the election is still open, the favorite candidate flip-flops with probability \(1-\phi>\frac{1}{2}\) and the challenger flip-flops with probability \(\phi<\frac{1}{2}\).
(iii) Based on Eq. (3), the optimal platforms \(\hat{y_{1}}\) and \(\hat{y_{2}}\) are the same weighted average of \(m\) and respectively \(x_{1}\) and \(x_{2}\). If player 1 is the favorite, \(x_{1}\) is closer to \(m\) than \(x_{2}\). Therefore, the distance between \(\hat{y_{1}}\) and \(x_{1}\) is smaller than the distance between \(\hat{y_{2}}\) and \(x_{2}\).
(iv) By point (ii) and the fact that a favorite candidate \(i\) always wins if he moves to \(\hat{y_{i}}\).
### Proof of Proposition 6
(i) Based on Proposition 3, equilibrium locations are given by \(\left(\frac{1}{1+\alpha},\frac{\alpha}{1+\alpha}\right)\). The distance between ex-ante platforms is therefore equal to \(\frac{\alpha-1}{\alpha+1}\), which is increasing with \(\alpha\), thus decreasing with \(a\).
(ii) Propositions 1 and 3 together prove that at equilibrium, the election is still open with probability \(\left(\frac{\alpha-1}{\alpha+1}\right)^{2}\), which is also increasing with \(\alpha\), thus decreasing with \(a\). Conditional on the election being still open, the likelihood of flip-flopping does not depend on \(a\) (it is \(1-\phi\) and \(\phi\) for the favorite and the challenger respectively).
(iii) Proposition 3 gives that equilibrium payoffs are decreasing in the probability \(\left(\frac{\alpha-1}{\alpha+1}\right)^{2}\) that the election is still open, so this claim is a corollary of claim (ii).
### Proof of Proposition 7
We follow the proof for the symmetric competition (Appendix A.2 and A.3) and discuss the differences. First, the optimal ex-post positions (given by Eq. (3) in the symmetric case) now depend on \(a_{i}\): we find \(\hat{y_{i}}=\frac{m+a_{i}x_{i}}{1+a_{i}}\). A candidate with a larger \(a_{i}\) is more penalized by the voters and adjusts his ex-ante platform less than his opponent does. Second, the interval of \(m\) for which each candidate has secured the election generalizes to \((\underline{m},\overline{m})=(\frac{\alpha_{2}x_{1}-x_{2}}{\alpha_{2}-1}\lor 0,\frac{\alpha_{2}x_{1}+x_{2}}{\alpha_{2}+1})\) for candidate 1 and \((\underline{n},\overline{n})=(\frac{\alpha_{1}x_{2}+x_{1}}{\alpha_{1}+1},\frac{\alpha_{1}x_{2}-x_{1}}{\alpha_{1}-1}\wedge 1)\) for candidate 2, where \(\alpha_{i}=\sqrt{\frac{1+a_{i}}{a_{i}}}\). Notice that the region where each candidate has secured the election does not depend on his own parameter \(\alpha_{i}\) but on his opponent's. The intuition is that the region where candidate \(i\) has secured the election is the region where he wins the election without moving, so the penalty for his own movement plays no role. Moreover, the larger \(a_{j}\), the smaller the movement of candidate \(j\) towards the median voter, and therefore the broader the region of possible \(m\) for which candidate \(i\) has secured the election.
Next, we solve the inequality \(u_{m}^{1}(x_{1},\hat{y}_{1})>u_{m}^{2}(x_{2},\hat{y}_{2})\), which holds for \(m<\frac{\alpha_{1}x_{2}+\alpha_{2}x_{1}}{\alpha_{1}+\alpha_{2}}:=\tilde{m}\). Note that \(\tilde{m}>\frac{x_{1}+x_{2}}{2}\) for \(a_{1}<a_{2}\), with equality when \(a_{1}=a_{2}\). Hence, in the region \(\left(\frac{x_{1}+x_{2}}{2},\tilde{m}\right)\) candidate 2 is the favorite, the election is still open, and candidate 2 cannot defend his advantage: when candidate 1 moves to \(\hat{y}_{1}\), he wins the election regardless of the action of candidate 2. Since candidate 1 loses the election if he does not move, moving is a dominant strategy; in this region the optimal strategy is for candidate 1 to move and for candidate 2 not to move, and the payoffs are \((1-\phi,0)\).
Note that the organizational costs are unchanged, so the second stage remains strictly identical to Proposition 1 for \(m\notin\left(\frac{x_{1}+x_{2}}{2},\tilde{m}\right)\). Moreover, the payoff in \(\left(\frac{x_{1}+x_{2}}{2},\tilde{m}\right)\) is exactly the same as the expected payoff in the region where candidate 1 is favorite and the election is still open, so, although the second stage strategy is different, in terms of continuation payoff we can analyze the first stage as if candidate 1 is favorite and the election is still open in the region \([\overline{m},\tilde{m}]\) instead of \([\overline{m},\frac{x_{1}+x_{2}}{2}]\).
Finally, repeating the argument in Appendix A.3, we find that it is necessary and sufficient to have \(\phi>\Psi(a_{1})\) to guarantee the existence of a best response of candidate 1, which is given by \(x_{1}(x_{2})=\frac{x_{2}}{\alpha_{2}}\), and analogously, we have \(x_{2}(x_{1})=1-\frac{1-x_{1}}{\alpha_{1}}\) when \(\phi>\Psi(a_{2})\). Together, we conclude that if \(\phi>\max(\Psi(a_{1}),\Psi(a_{2}))\), then \(x_{1}^{*}=\frac{\alpha_{1}-1}{\alpha_{1}\alpha_{2}-1}\) and \(x_{2}^{*}=\frac{\alpha_{2}(\alpha_{1}-1)}{\alpha_{1}\alpha_{2}-1}\). If either threshold exceeds the organizational cost \(\phi\), there exists no equilibrium.
|
2308.02610 | Efficient Algorithms for Finite $\mathbb{Z}$-Algebras | For a finite $\mathbb{Z}$-algebra $R$, i.e., for a $\mathbb{Z}$-algebra which
is a finitely generated $\mathbb{Z}$-module, we assume that $R$ is explicitly
given by a system of $\mathbb{Z}$-module generators $G$, its relation module
${\rm Syz}(G)$, and the structure constants of the multiplication in $R$. In
this setting we develop and analyze efficient algorithms for computing
essential information about $R$. First we provide polynomial time algorithms
for solving linear systems of equations over $R$ and for basic ideal-theoretic
operations in $R$. Then we develop ZPP (zero-error probabilitic polynomial
time) algorithms to compute the nilradical and the maximal ideals of
0-dimensional affine algebras $K[x_1,\dots,x_n]/I$ with $K=\mathbb{Q}$ or
$K=\mathbb{F}_p$. The task of finding the associated primes of a finite
$\mathbb{Z}$-algebra $R$ is reduced to these cases and solved in ZPPIF (ZPP
plus one integer factorization). With the same complexity, we calculate the
connected components of the set of minimal associated primes ${\rm
minPrimes}(R)$ and then the primitive idempotents of $R$. Finally, we prove
that knowing an explicit representation of $R$ is polynomial time equivalent to
knowing a strong Gr\"obner basis of an ideal $I$ such that $R =
\mathbb{Z}[x_1,\dots,x_n]/I$. | Martin Kreuzer, Florian Walsh | 2023-08-04T13:42:02Z | http://arxiv.org/abs/2308.02610v5 | # Efficient algorithms for finite \(\mathbb{Z}\)-algebras
###### Abstract.
For a finite \(\mathbb{Z}\)-algebra \(R\), i.e., for a \(\mathbb{Z}\)-algebra which is a finitely generated \(\mathbb{Z}\)-module, we assume that \(R\) is explicitly given by a system of \(\mathbb{Z}\)-module generators \(G\), its relation module \(\operatorname{Syz}(G)\), and the structure constants of the multiplication in \(R\). In this setting we develop and analyze efficient algorithms for computing essential information about \(R\). First we provide polynomial time algorithms for solving linear systems of equations over \(R\) and for basic ideal-theoretic operations in \(R\). Then we develop ZPP (zero-error probabilistic polynomial time) algorithms to compute the nilradical and the maximal ideals of \(0\)-dimensional affine algebras \(K[x_{1},\ldots,x_{n}]/I\) with \(K=\mathbb{Q}\) or \(K=\mathbb{F}_{p}\). The task of finding the associated primes of a finite \(\mathbb{Z}\)-algebra \(R\) is reduced to these cases and solved in ZPPIF (ZPP plus one integer factorization). With the same complexity, we calculate the connected components of the set of minimal associated primes \(\operatorname{minAss}(R)\) and then the primitive idempotents of \(R\). Finally, we prove that knowing an explicit representation of \(R\) is polynomial time equivalent to knowing a strong Grobner basis of an ideal \(I\) such that \(R=\mathbb{Z}[x_{1},\ldots,x_{n}]/I\).
Key words and phrases:Finite \(\mathbb{Z}\)-algebra, efficient algorithm, polynomial complexity, primary decomposition, primitive idempotents 2020 Mathematics Subject Classification: Primary 13P99; Secondary 68W39, 13P10, 68Q15
## 1. Introduction
Computing the radical and the primary decomposition of an ideal, the associated primes and the primitive idempotents of an algebra, or the connected components of its spectrum, are among the hardest tasks in Computer Algebra. For a finitely generated algebra \(R=K[x_{1},\ldots,x_{n}]/I\) over a field \(K\) with an ideal \(I\) that is given by its generators, the usual solutions of these tasks involve computing Grobner bases and factoring multivariate polynomials over extension fields of \(K\) (see for instance [7], [13], [15], [19], [20], [21]).
The difficulty of the problem increases further when we consider algebras over the integers, i.e., algebras of the form \(R=\mathbb{Z}[x_{1},\ldots,x_{n}]/I\) with an ideal \(I\) given by a system of generators. In this case we will also have to factor (potentially large) integers, as already the example \(R=\mathbb{Z}/n\mathbb{Z}\) shows. Since the 1970s, various approaches have been taken to tackle these tasks, starting with the case of an algebra \(R\) which is a finitely generated \(\mathbb{Z}\)-module (see [2], [4], [28], [25]). At the core of most of these algorithms lies the calculation of strong Grobner bases for ideals in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\) which tends to be quite demanding. It is also possible to apply more general algorithms for associative, not necessarily commutative algebras here (see [8], [9], [26]), but we can expect the efficiency of such very general methods to be usually even lower than the ones for commutative algebras.
In a recent paper [18], we encountered finite \(\mathbb{Z}\)-algebras given in a different, very classical way, namely by \(\mathbb{Z}\)-module generators, their linear relations, and the structure constants of the multiplication. To simplify the discussion, we call them _explicitly given_\(\mathbb{Z}\)-algebras here. Since we needed good complexity bounds, the computation of strong Grobner bases using a suitable version of Buchberger's algorithm
was not a feasible option. Instead, we developed a collection of algorithms which are of polynomial time complexity, or at least as close as possible to polynomial time complexity in the bit size of the input. These algorithms are explained and carefully analyzed here.
Let us discuss them in more detail one by one. Throughout we work with an explicitly given finite \(\mathbb{Z}\)-algebra \(R\), i.e. a \(\mathbb{Z}\)-algebra for which we know a system of generators \(G=(g_{0},\ldots,g_{n})\) of its additive group, a system of generators of the \(\mathbb{Z}\)-linear relation module \(\operatorname{Syz}_{\mathbb{Z}}(G)\subseteq\mathbb{Z}^{n+1}\), and the structure constants \(c_{ijk}\in\mathbb{Z}\) such that \(g_{i}\cdot g_{j}=\sum_{k=0}^{n}c_{ijk}g_{k}\) for \(i,j=0,\ldots,n\). By a polynomial time algorithm we mean an algorithm whose running time is bounded by a polynomial expression in the bit size of these input data. Using the well-known facts that the Smith and Hermite normal forms of an integer matrix can be calculated in polynomial time (see [17], [22], [29], [30]), we exhibit polynomial time algorithms for solving linear systems of equations over \(R\) (see Proposition 2.6), various operations with ideals in \(R\) (see Proposition 2.8), and the computation of preimages under a Chinese Remainder Theorem type of isomorphism (see Proposition 2.9).
For the manner in which we deal with finite \(\mathbb{Z}\)-algebras later, it is useful to reconsider the case of \(0\)-dimensional algebras over a field \(K\) in Section 3. In order to compute the nilradical of an explicitly given \(K\)-algebra \(R\), we need to calculate the factorization of univariate polynomials over \(K\). The currently best algorithms have polynomial time complexity in the case \(K=\mathbb{Q}\) and zero-error probabilistic polynomial time complexity (ZPP) in the case of a prime field \(\mathbb{F}_{p}\). Algorithm 3.3 then determines the nilradical of \(R\) with these time complexities. To compute the primary decomposition of the zero ideal of \(R\) in the case \(K=\mathbb{F}_{p}\), we apply the method of Frobenius spaces (see [21], Alg. 5.2.7) and get an algorithm in ZPP (see Algorithm 3.5). By applying this algorithm to \(R/\operatorname{Rad}(0)\), we are finally able to find the maximal ideals of \(R\) in P and ZPP for \(K=\mathbb{Q}\) and \(K=\mathbb{F}_{p}\), respectively (see Corollary 3.6).
In Section 4 we then use the ideas of [25] to compute the associated primes of an explicitly given finite \(\mathbb{Z}\)-algebra \(R\) (see Algorithm 4.2). The method is to distinguish between the prime ideals which contain an integer prime number and those which don't. In both cases the computation is reduced to the setting of the preceding section. Since the determination of associated primes may involve the factorization of an integer, the time complexity we can achieve here is ZPPIF, i.e., ZPP plus one integer factorization. More precisely, the integer which has to be factored is the torsion exponent of the additive group of \(R\).
For the final task, namely the computation of the primitive idempotents of an explicitly given finite \(\mathbb{Z}\)-algebra \(R\), we need to determine the connected components of \(\operatorname{Spec}(R)\) first. More precisely, since \(R\) may have infinitely many prime ideals, we compute the connected components of the set of minimal associated primes \(\operatorname{minAss}(R)\) in Algorithm 5.5. As this uses the results of the preceding section, the complexity class of this algorithm is ZPPIF. Finally, we calculate the primitive idempotents of \(R\) in Algorithm 5.6 by lifting them from the primitive idempotents of \(R/\operatorname{Rad}(0)\) using Algorithm 5.1. Once again the complexity class is ZPPIF.
In the last section of this paper we connect the method of using an explicit representation of \(R\) to the more traditional method of calculating a strong Grobner basis of an ideal \(I\) in \(P=\mathbb{Z}[x_{1},\ldots,x_{n}]\) such that \(R=P/I\). To show that from an explicitly given finite \(\mathbb{Z}\)-algebra \(R\) we can compute in polynomial time a strong Grobner basis of a defining ideal \(I\) of \(R\) (see Corollary 6.5), we use a generalization of the Buchberger-Moller Algorithm (see Algorithm 6.3). Conversely, we also show that a strong Grobner basis of \(I\) allows us to calculate an explicit representation
of \(R\) in polynomial time (see Algorithm 6.7) with the help of a generalization of Macaulay's Basis Theorem to finite \(\mathbb{Z}\)-algebras (see Proposition 6.6).
For the notation and basic definitions we adhere to the conventions in [19] and [20]. All algorithms in this paper were implemented in the computer algebra system ApCoCoA (see [3]) and are available from the authors upon request.
## 2. Polynomial Time Computations in Finite \(\mathbb{Z}\)-Algebras
Let \(R\) be a finite \(\mathbb{Z}\)-algebra, i.e., a \(\mathbb{Z}\)-algebra which is a finitely generated \(\mathbb{Z}\)-module. We denote the additive group of \(R\) by \(R^{+}\). In this section we collect operations in \(R\) which can be computed in polynomial time if a presentation of \(R\) is given as below.
**Remark 2.1**.: **(Explicitly Given \(\mathbb{Z}\)-Algebras)**
Subsequently we assume that a \(\mathbb{Z}\)-algebra \(R\) is given by the following information.
1. A set of generators \(\mathcal{G}=\{g_{0},\ldots,g_{n}\}\) of the \(\mathbb{Z}\)-module \(R^{+}\), together with a matrix \(A=(a_{\ell k})\in\operatorname{Mat}_{m,n+1}(\mathbb{Z})\) whose rows generate the syzygy module \(\operatorname{Syz}_{\mathbb{Z}}(\mathcal{G})\) of \(\mathcal{G}\).
2. Structure constants \(c_{ijk}\) such that \(g_{i}g_{j}=\sum_{k=0}^{n}c_{ijk}g_{k}\) for \(i,j=0,\ldots,n\).
Notice that we can assume that \(g_{0}=1\). Setting \(x_{0}:=1\), we can encode this information as an ideal
\[I=\langle x_{i}x_{j}-\sum\limits_{k=0}^{n}c_{ijk}x_{k},\ \sum\limits_{k=0}^{n}a_{\ell k}x_{k}\ |\ i,j=1,\ldots,n,\ \ell=1,\ldots,m\rangle\]
in \(P=\mathbb{Z}[x_{1},\ldots,x_{n}]\) such that \(R\cong P/I\). If \(R\) is given as above, we call it an **explicitly given \(\mathbb{Z}\)-algebra**.
The bit complexity of the matrix \(A\) in (a) which defines the \(\mathbb{Z}\)-module structure of \(R^{+}\) is given by
\[\beta=(n+1)m\log_{2}(\|A\|)\quad\text{with}\quad\|A\|=\max\{|a_{\ell k}|\}.\]
The bit complexity of the entire input defining the \(\mathbb{Z}\)-algebra \(R\) is then given by
\[\gamma=((n+1)^{3}+(n+1)m)\log_{2}(M)\quad\text{with}\quad M=\max\{|a_{\ell k} |,|c_{ijk}|\}.\]
In this section we collect computations in \(R\) which can be performed in polynomial time in \(\beta\) or \(\gamma\), respectively. More precisely, we will use the following complexity classes.
**Definition 2.2**.: **(Polynomial Time Complexity Classes)**
Consider an algorithm which takes a tuple of integers as input.
1. The algorithm is in the complexity class **P (polynomial time)** if its running time is bounded by a polynomial expression in the bit complexity of the input.
2. The algorithm is in the complexity class **ZPP (zero-error probabilistic polynomial time)** if it is a Las Vegas algorithm whose expected running time is polynomial in the bit complexity of the input.
3. The algorithm is in the complexity class **ZPPIF (zero-error probabilistic polynomial time plus integer factorization)** if, except for the factorization of one integer, the algorithm is in ZPP and the bit size of the integer to be factored is bounded by a polynomial expression in the bit complexity of the input.
It is useful to bring the \(\mathbb{Z}\)-module presentation of \(R^{+}\) into a normal form.
**Remark 2.3**.: Let \(A=(a_{ij})\in\operatorname{Mat}_{m,n+1}(\mathbb{Z})\) be the matrix whose rows are given by the generators of \(\operatorname{Syz}(\mathcal{G})\). Then there exist unimodular transformation matrices \(S\in\operatorname{Mat}_{m,m}(\mathbb{Z})\) and \(T\in\operatorname{Mat}_{n+1,n+1}(\mathbb{Z})\) such that
\[S\cdot A\cdot T=\begin{pmatrix}k_{1}&&&&&\\ &\ddots&&&&\\ &&k_{u}&&&\\ &&&0&&\\ &&&&\ddots&\\ &&&&&0\end{pmatrix}\]
and \(k_{i}\) divides \(k_{j}\) for \(i<j\). This matrix is called the **Smith normal form** of \(A\). It yields the following isomorphism:
\[R^{+}\cong\mathbb{Z}^{r}\oplus\mathbb{Z}/k_{1}\mathbb{Z}\oplus\cdots\oplus \mathbb{Z}/k_{u}\mathbb{Z}.\]
The numbers \(r\) and \(k_{1},\ldots,k_{u}\) are uniquely determined by \(R^{+}\). We call \(r\) the **rank** and \(k_{1},\ldots,k_{u}\) the **invariant factors** of \(R^{+}\). The largest invariant factor \(k_{u}\) is the exponent of the torsion subgroup of \(R^{+}\). We call it the **torsion exponent**\(\tau\) of \(R^{+}\).
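The Smith normal form computation can be carried out with sympy; the example algebra \(R=\mathbb{Z}[x]/\langle x^{2},4x\rangle\) with \(\mathbb{Z}\)-module generators \(g_{0}=1\), \(g_{1}=x\) and single syzygy generator \((0,4)\) is an illustrative assumption (a sketch, assuming a recent sympy).

```python
# A sketch (illustrative example): for R = Z[x]/<x^2, 4x> with generators
# g0 = 1 and g1 = x, Syz(G) is generated by the single row (0, 4).
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form, invariant_factors

A = Matrix([[0, 4]])              # rows generate Syz(G) inside Z^2
print(smith_normal_form(A))       # Matrix([[4, 0]])
print(invariant_factors(A))       # (4,): torsion exponent tau = 4, rank r = 1
# Hence R^+ is isomorphic to Z (generated by 1) plus Z/4Z (generated by x).
```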
In the following we shall show that certain algorithms run in polynomial time by reducing them to the following computations.
**Remark 2.4**.: **(Complexity of Integer Linear Algebra Operations)**
1. The Smith and the Hermite normal form of a matrix \(A\in\operatorname{Mat}_{m,n}(\mathbb{Z})\) can be computed in polynomial time, as first shown by Kannan and Bachem [17] in 1979. Currently, the fastest deterministic algorithm for computing the Smith normal form is the one developed by Storjohann [29]. Note that in contrast to [17] this algorithm does not produce the unimodular transformation matrices.
2. Solving linear systems of equations over the integers can be reduced to computing a Smith normal form together with the unimodular transformation matrices (see for instance [22]). If the linear system is of the form \(Ax=b\) where \(A\in\operatorname{Mat}_{m,n}(\mathbb{Z})\) and \(b\in\operatorname{Mat}_{m,1}(\mathbb{Z})\), then generators of the solution space can be computed in polynomial time in \(n\), \(m\), \(\|A\|\), \(\|b\|\) and the rank of \(A\). Here \(\|A\|\) denotes the maximal absolute value of the entries of \(A\). A concrete complexity bound is given in [30], Theorem 19.
3. Computing the intersection of free submodules of \(\mathbb{Z}^{n}\) can be achieved by computing a basis of the solution space of an appropriate linear system of equations. The problem therefore reduces to (b).
Since the Smith normal form can be computed in polynomial time, it follows that the bit complexity of the torsion exponent of \(R\) is bounded by a polynomial in \(\beta\). Below we give a concrete complexity bound.
**Lemma 2.5**.: _Let \(R\) be an explicitly given finite \(\mathbb{Z}\)-algebra. Then the bit complexity of the torsion exponent \(\tau\) is bounded by \(n\log_{2}(n\left\|A\right\|)\)._
Proof.: The product of the invariant factors of \(R^{+}\) is given by the gcd of all maximal rank minors of \(A\). The torsion exponent is therefore bounded by the absolute value of a non-zero maximal rank minor of \(A\). Hadamard's inequality then yields \(\tau\leq n^{n/2}\left\|A\right\|^{n}\), which means the bit complexity of \(\tau\) is bounded by \(n\log_{2}(n\left\|A\right\|)\).
Solving a linear system of equations over \(R\) can be reduced to solving a linear system over the integers.
**Proposition 2.6**.: **(Solving Systems of Linear Equations over \(R\))**
_Let \(R\) be an explicitly given finite \(\mathbb{Z}\)-algebra and \(f_{1},\dots,f_{p}\in R\). For \(k=1,\dots,p\), we write \(f_{k}=b_{k0}g_{0}+\dots+b_{kn}g_{n}\) with \(b_{kj}\in\mathbb{Z}\). Let \(y_{1},\dots,y_{p}\) be further indeterminates. Consider the following homogeneous linear equation over \(R\)._
(i) \[f_{1}y_{1}+\dots+f_{p}y_{p}\;=\;0\]
_Let \(e_{0},\dots,e_{n}\in\mathbb{Z}^{n+1}\) be the standard basis vectors. For the following system of homogeneous linear equations in the indeterminates \(z_{ki}\) and \(w_{j}\) over \(\mathbb{Z}\), let \(\mathcal{L}\) be the projection of the solution space onto the \(z\)-coordinates._
(ii) \[\sum\limits_{k=1}^{p}\sum\limits_{i,j,\ell=0}^{n}z_{ki}b_{kj}c_{ij\ell}e_{ \ell}-\sum\limits_{j=1}^{m}\sum\limits_{i=0}^{n}w_{j}a_{ij}e_{i}\;=\;0\]
_Then the following conditions are equivalent._
* _A tuple_ \((h_{1},\dots,h_{p})\in R^{p}\) _with_ \(h_{k}=d_{k0}g_{0}+\dots+d_{kn}g_{n}\in R\) _and_ \(d_{ki}\in\mathbb{Z}\) _is a solution of (i)._
* _The tuple_ \((d_{ki})\) _is an element of_ \(\mathcal{L}\)_._
Proof.: The tuple \((h_{1},\dots,h_{p})\) is a solution of (i) if and only if
\[\sum\limits_{k=1}^{p}\sum\limits_{i,j=0}^{n}b_{ki}d_{kj}g_{i}g_{j}=0.\]
This is the case if and only if there exist \(\alpha_{1},\dots,\alpha_{m}\in\mathbb{Z}\) such that the left hand side is equal to \(\sum_{j=1}^{m}\sum_{i=1}^{n}\alpha_{j}a_{ij}g_{i}\). The claim then follows by rewriting the products \(g_{i}g_{j}\) using the structure constants of \(R\) and applying the canonical isomorphism \(R\cong\mathbb{Z}^{n+1}/\operatorname{Syz}(G)\).
**Corollary 2.7**.: _Let \(R\) be an explicitly given finite \(\mathbb{Z}\)-algebra. Generators of the solution space of a linear equation over \(R\) as in Proposition 2.6 can be computed in polynomial time in the bit complexity of the input which is given by \(\gamma\) (for \(R\)) and by \(p(n+1)\log_{2}(M)\) where \(M=\max\{b_{kj}\}\) (for the elements \(f_{1},\dots,f_{p}\))._
Proof.: The coefficients in the system of equations (ii) in Proposition 2.6 are \(b_{kj}\), \(c_{ij\ell}\) and \(a_{ij}\). The claim therefore follows immediately from Remark 2.4.b.
The next proposition collects elementary operations in an explicitly given finite \(\mathbb{Z}\)-algebra which can be performed in polynomial time.
**Proposition 2.8**.: **(Elementary Ideal-Theoretic Operations)**
_Let \(R\) be an explicitly given finite \(\mathbb{Z}\)-algebra, and let \(J=\langle f_{1},\dots,f_{k}\rangle\) as well as \(J^{\prime}=\langle h_{1},\dots,h_{\ell}\rangle\) be ideals in \(R\). We assume that the elements \(f_{i},h_{j}\in R\) are given as elements in \(\mathbb{Z}[g_{0},\dots,g_{n}]\) and that the bit complexity of these sets of polynomials is given by \(\delta_{J}\) and \(\delta_{J^{\prime}}\), respectively._
* _The rank and the invariant factors of_ \(R^{+}\) _can be computed in polynomial time in_ \(\beta\)_._
* _Let_ \(\overline{\mathcal{G}}=\{\overline{g}_{0},\dots,\overline{g}_{n}\}\) _be the set of residue classes in_ \(R/J\) _of the elements of_ \(\mathcal{G}\)_. Then generators of_ \(\operatorname{Syz}_{\mathbb{Z}}(\overline{\mathcal{G}})\) _can be computed in polynomial time in_ \(\gamma+\delta_{J}\)_._
* _We can decide whether_ \(J\subseteq J^{\prime}\) _in polynomial time in_ \(\gamma+\delta_{J}+\delta_{J^{\prime}}\)_._
* _We can decide whether_ \(J=\langle 1\rangle\) _in polynomial time in_ \(\gamma+\delta_{J}\)_._
* _Generators of the intersection_ \(J\cap J^{\prime}\) _can be computed in polynomial time in_ \(\gamma+\delta_{J}+\delta_{J^{\prime}}\)_._
Proof.: To prove (a), let \(A=(a_{ij})\in\operatorname{Mat}_{m,n+1}(\mathbb{Z})\) be the matrix whose rows are given by the generators of \(\operatorname{Syz}_{\mathbb{Z}}(\mathcal{G})\). The rank and the invariant factors of \(R^{+}\) can be determined from the Smith normal form of \(A\), which can be computed in polynomial time by Remark 2.4.
Next we show (b). Since \(J\) is generated as a \(\mathbb{Z}\)-module by the products \(g_{j}f_{i}\), we can use the structure constants to rewrite each of these products as a linear combination \(b_{i0}g_{0}+\cdots+b_{in}g_{n}\) of the generators of \(R^{+}\). This means that we obtain integer tuples \(v_{1},\ldots,v_{r}\in\mathbb{Z}^{n+1}\) of the form \(v_{i}=(b_{i0},\ldots,b_{in})\) such that \(v_{1},\ldots,v_{r}\) together with the generators of \(\operatorname{Syz}_{\mathbb{Z}}(\mathcal{G})\) generate \(\operatorname{Syz}_{\mathbb{Z}}(\overline{\mathcal{G}})\).
Using (b), we can compute presentations \(R/J\cong\mathbb{Z}^{n+1}/V_{1}\) and \(R/J^{\prime}\cong\mathbb{Z}^{n+1}/V_{2}\), where \(V_{1}\) and \(V_{2}\) are submodules of \(\mathbb{Z}^{n+1}\), in polynomial time. The ideal \(J\) is then contained in \(J^{\prime}\) if and only if \(V_{1}\subseteq V_{2}\). This proves (c).
To show (d), we use part (b) to compute a presentation \(R/J\cong\mathbb{Z}^{n+1}/\operatorname{Syz}_{\mathbb{Z}}(\overline{\mathcal{ G}})\). We can then apply (a) and compute the rank and the invariant factors of \(R/J\) in polynomial time. Notice that we have \(J=\langle 1\rangle\) if and only if the rank of \(R/J\) is zero and all invariant factors are equal to one.
For the proof of (e), we let
\[\mathcal{M}=\begin{pmatrix}1&f_{1}&\cdots&f_{k}&0&\cdots&0\\ 1&0&\cdots&0&h_{1}&\cdots&h_{\ell}\end{pmatrix}.\]
Generators of \(\operatorname{Syz}_{R}(\mathcal{M})\) can be computed in polynomial time by solving an appropriate linear system of equations over \(R\) using Proposition 2.6. The first coordinates of these generators then generate \(J\cap J^{\prime}\) by [19], Proposition 3.2.3.
The following algorithm will come in handy when we compute the primitive idempotents of a finite \(\mathbb{Z}\)-algebra.
**Algorithm 2.9**.: **(The Chinese Remainder Preimage Algorithm)**
_Let \(R\) be an explicitly given finite \(\mathbb{Z}\)-algebra. In particular, we assume that \(R^{+}\) is generated by \(\mathcal{G}=\{g_{0},\ldots,g_{n}\}\) with \(g_{0}=1\). Let \(J_{1},\ldots,J_{s}\) be pairwise comaximal ideals in \(R\), and assume that \(J_{1}\cap\cdots\cap J_{s}=\langle 0\rangle\). Given \(i\in\{1,\ldots,s\}\), consider the following sequence of instructions._
1. _Using Proposition_ 2.8.b_, compute_ \(\mathbb{Z}\)_-submodules_ \(V_{j}\subseteq\mathbb{Z}^{n+1}\) _such that we have_ \(R/J_{j}\;\cong\;\mathbb{Z}^{n+1}/V_{j}\) _for_ \(j=1,\ldots,s\)_._
2. _Compute a_ \(\mathbb{Z}\)_-module basis_ \(\{v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{n+1}\) _of_ \(\bigcap_{j\neq i}V_{j}\)_._
3. _Let_ \(\{w_{1},\ldots,w_{\ell}\}\subseteq\mathbb{Z}^{n+1}\) _be a_ \(\mathbb{Z}\)_-basis of_ \(V_{i}\)_. Compute a solution_ \((c_{i})\in\mathbb{Z}^{k+\ell}\) _of the linear system of equations in the indeterminates_ \(y_{1},\ldots,y_{k+\ell}\) _given by_ \[v_{1}y_{1}+\cdots+v_{k}y_{k}=w_{1}y_{k+1}+\cdots+w_{\ell}y_{k+\ell}+(1,0,\ldots, 0).\]
4. _Let_ \(h=(h_{0},\ldots,h_{n})=c_{1}v_{1}+\cdots+c_{k}v_{k}\in\mathbb{Z}^{n+1}\)_. Return the element_ \(f=h_{0}g_{0}+\cdots+h_{n}g_{n}\) _of_ \(R\) _and stop._
_This is a polynomial time algorithm which computes an element \(f\in R\) such that \(f\) is mapped to the \(i\)-th canonical basis vector \(e_{i}\) under the canonical \(\mathbb{Z}\)-linear map_
\[\varphi:\;R\;\longrightarrow\;R/J_{1}\times\cdots\times R/J_{s}.\]
Proof.: The tuple \(h\) satisfies \(h\in\bigcap_{j\neq i}V_{j}\) and \(h-(1,0,\ldots,0)\in V_{i}\). This shows that the residue class of \(h\) in \(\mathbb{Z}^{n+1}/\operatorname{Syz}_{\mathbb{Z}}(\mathcal{G})\) is mapped to \(e_{i}\) under the canonical map \(\psi:\;\mathbb{Z}^{n+1}/\operatorname{Syz}_{\mathbb{Z}}(\mathcal{G})\longrightarrow \mathbb{Z}^{n+1}/V_{1}\times\cdots\times\mathbb{Z}^{n+1}/V_{s}\). Hence \(f\) is mapped to \(e_{i}\) under the map \(\varphi\). Steps (1) and (2) of the algorithm can be performed in polynomial time by Proposition 2.8. The linear system in Step (3) can also be solved in polynomial time by Remark 2.4.
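To illustrate Algorithm 2.9 in its simplest instance (an assumption for illustration, not the general implementation), take \(R=\mathbb{Z}/36\mathbb{Z}\) with the comaximal ideals \(J_{1}=\langle 4\rangle\) and \(J_{2}=\langle 9\rangle\); here the classical integer CRT plays the role of the linear solve in Step (3).

```python
# Preimage of e1 = (1, 0) under Z/36Z -> Z/4Z x Z/9Z via the classical CRT.
from sympy.ntheory.modular import crt

f, modulus = crt([4, 9], [1, 0])   # f = 1 mod 4 and f = 0 mod 9
print(f, modulus)                  # 9 36
assert (f * f - f) % 36 == 0       # f = 9 is an idempotent of Z/36Z
```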
## 3. Computing the Maximal Ideals of a \(0\)-Dimensional \(K\)-Algebra
In this section we assume that \(K\) is either the field of rational numbers \(\mathbb{Q}\) or a finite field \(\mathbb{F}_{p}\). Our goal is to study the complexity of computing the maximal components of a \(0\)-dimensional \(K\)-algebra \(R\) which is explicitly given in the following sense.
**Definition 3.1**.: A \(0\)-dimensional \(K\)-algebra \(R\) is **explicitly given** if it is given by a \(K\)-vector space basis \(\mathcal{B}=\{b_{1},\ldots,b_{n}\}\) and structure constants \(c_{ijk}\) such that \(b_{i}b_{j}=\sum_{k=1}^{n}c_{ijk}b_{k}\) for all \(i,j=1,\ldots,n\).
Note that a \(0\)-dimensional \(K\)-algebra as in this definition can equivalently be given by a basis together with multiplication matrices. The crucial step in computing the maximal ideals of \(R\) is the factorization of univariate polynomials over \(K\).
**Remark 3.2**.: In 1982 Lenstra et al. [24] published a deterministic algorithm for factoring univariate polynomials in \(\mathbb{Q}[x]\). The running time of their algorithm is polynomial in \(\deg(f)\) and \(\log(|f|)\), where for a polynomial \(f=\sum_{i}a_{i}x^{i}\in\mathbb{Q}[x]\) we define \(|f|=\sqrt{\sum_{i}a_{i}^{2}}\). This means it requires only a polynomial number of bit operations measured in the input size.
For univariate polynomials over finite fields, the situation is slightly more complicated. A deterministic algorithm for factoring polynomials over finite fields was presented by Berlekamp in [5]. Its running time for factoring a polynomial \(f\in\mathbb{F}_{p}[x]\) is polynomial in \(p\) and \(\deg(f)\). But this is not polynomial in the bit complexity of the input, which is given by \((1+\deg(f))\log_{2}(p)\). In 1970 Berlekamp published a Las Vegas algorithm [6] for the problem which has expected polynomial running time in the bit size of the input. Since then many new and faster algorithms were developed, see e.g. [14]. But it is still unknown whether the factorization can be performed in deterministic polynomial time. It was shown by Evdokimov [10] that under the generalized Riemann hypothesis (GRH) the problem can be solved in subexponential time. Furthermore, there have been efforts to drop the GRH assumption (see [16]). In addition, there exist deterministic polynomial time algorithms [12, 27] for many special classes of polynomials over finite fields. Indeed, it is conjectured that the set of polynomials which do not satisfy any of the conditions in [27] is empty.
The first step in computing the maximal ideals of \(R\) is to compute its nilradical.
**Algorithm 3.3**.: **(Computing the Nilradical of a \(0\)-Dimensional Algebra)** _Let \(R\) be an explicitly given \(0\)-dimensional \(K\)-algebra. Consider the following sequence of instructions._
1. _Let_ \(J=\langle 0\rangle\) _and_ \(\mathcal{B}=\{b_{1},\ldots,b_{n}\}\)_._
2. _For_ \(i=1,\ldots,n\)_, perform the following steps (3)-(7)._
3. _Compute the minimal polynomial_ \(\mu_{b_{i}+J}(z)\) _of_ \(b_{i}+J\) _in_ \(R/J\)_._
4. _Calculate the squarefree part_ \(g_{i}(z)=\operatorname{sqfree}(\mu_{b_{i}+J}(z))\)_._
5. _Replace_ \(J\) _with_ \(J+\langle g_{i}(b_{i})\rangle\)_._
6. _Using the structure constants, rewrite_ \(g_{i}(b_{i})\) _to obtain a representation of some_ \(b_{j}\in\mathcal{B}\) _as a linear combination of the remaining elements. Remove_ \(b_{j}\) _from_ \(\mathcal{B}\) _and update the structure constants using this linear combination to obtain an explicit presentation of_ \(R/J\)_._
7. _If_ \(\deg(g_{i}(z))=\dim_{K}(R/J)\)_, return the ideal_ \(J\) _together with the explicit presentation of_ \(R/J\) _and stop._
8. _Return the ideal_ \(J\) _together with the explicit presentation of_ \(R/J\) _and stop._
_This is an algorithm which computes the nilradical \(\operatorname{Rad}(0)\) of \(R\) together with an explicit presentation of \(R/\operatorname{Rad}(0)\). If \(K=\mathbb{Q}\), or if \(K\) is a finite prime field, then it has polynomial running time. In particular, the bit complexity of the explicit
representation of \(R/\operatorname{Rad}(0)\) is polynomially bounded by the bit complexity of the input._
Proof.: The correctness of this algorithm is shown in [21] Algorithm 5.4.2. It remains to prove that it runs in polynomial time. The minimal polynomial in step (3) can be computed using [21], Algorithm 1.1.8. It involves finding linear dependencies among the elements \(1+J\), \(b_{i}+J\),..., \(b_{i}^{d}+J\) where \(d=\dim_{K}(R/J)\). Using the structure constants, we rewrite \(b_{i}^{j}\) for \(j=2,\ldots,d\) as linear combinations of the elements of \(\mathcal{B}\). The linear dependencies can then clearly be found in polynomial time. The squarefree part of \(g_{i}(z)\) in step (4) can also be computed in polynomial time (see [15] Section 14.6).
The bit complexity of the presentation of \(R/\operatorname{Rad}(0)\) is polynomially bounded by the bit complexity of the input, since during each iteration the bit complexity of the structure constants obtained in step (6) is polynomially bounded by the bit complexity of the structure constants of the previous iteration.
Having computed the nilradical of \(R\), we can then obtain its maximal ideals as follows. In the case \(K=\mathbb{Q}\), we can use Algorithm 7.2 in [23]. It has polynomial running time in the bit complexity of the input. For \(K=\mathbb{F}_{p}\), we can only hope for an algorithm in ZPP, since we need to factor univariate polynomials over \(\mathbb{F}_{p}\). In the more general case of associative algebras over finite fields, the complexity of computing their structure, i.e., their simple components, was studied in [11], [26], and [9]. But let us take advantage of the fact that we are in the commutative case and analyze the complexity of the algorithm presented in [21], which was inspired by [13]. In contrast to the methods cited above, it has the advantage of being well-suited for an actual implementation.
**Definition 3.4**.: Let \(R\) be a 0-dimensional \(\mathbb{F}_{p}\)-algebra.
1. The map \(\phi_{p}:\ R\longrightarrow R\) defined by \(a\mapsto a^{p}\) is an \(\mathbb{F}_{p}\)-linear ring endomorphism of \(R\). It is called the **Frobenius endomorphism** of \(R\).
2. The \(\mathbb{F}_{p}\)-vector subspace \[\operatorname{Frob}_{p}(R)=\{f\in R\mid f^{p}-f=0\}\] of \(R\), i.e., the fixed-point space of \(R\) with respect to \(\phi_{p}\), is called the **Frobenius space** of \(R\).
In [21], Algorithm 5.2.7, it is explained how one can calculate the Frobenius space of a 0-dimensional \(\mathbb{F}_{p}\)-algebra. Based on this result, we obtain the following algorithm.
**Algorithm 3.5**.: **(Primary Decomposition in Characteristic \(p\))**
_Let \(R\) be an explicitly given 0-dimensional \(\mathbb{F}_{p}\)-algebra. In particular, we assume that \(\mathcal{B}=\{b_{1},\ldots,b_{n}\}\) is a \(K\)-vector space basis of \(R\). Consider the following sequence of instructions._
1. _Form the multiplication matrix_ \(M_{\mathcal{B}}(\phi_{p})\) _of the Frobenius endomorphism of_ \(R\)_, and compute the number_ \(s=n-\operatorname{rank}(M_{\mathcal{B}}(\phi_{p})-I_{n})\) _of primary components of the zero ideal of_ \(R\)_. If_ \(s=1\) _then return_ \(\langle 0\rangle\) _and stop._
2. _Let_ \(L\) _be the list consisting of the pair_ \((\langle 0\rangle,s)\)_. Repeat steps (_3_)-(_6_) until the second component of all pairs in_ \(L\) _is 1. Then return the tuple consisting of all first components of the pairs in_ \(L\) _and stop._
3. _Choose the first pair_ \((J,t)\) _in_ \(L\) _for which_ \(t>1\) _and remove it from_ \(L\)_._
4. _Using Algorithm_ 5.2.7 _in_ _[_21_]__, compute the Frobenius space of_ \(R/J\)_. Choose a non-constant element_ \(f\) _in it._
5. _Calculate the minimal polynomial of the element_ \(f\) _and factor it in the form_ \(\mu_{f}(z)=(z-a_{1})\cdots(z-a_{u})\) _with_ \(a_{1},\ldots,a_{u}\in\mathbb{F}_{p}\)_._
6. _For_ \(i=1,\ldots,u\)_, let_ \(J_{i}=J+\langle f-a_{i}\rangle\)_. Compute the dimension_ \(d_{i}\) _of_ \(\operatorname{Frob}_{p}(R/J_{i})\) _and append the pair_ \((J_{i},d_{i})\) _to_ \(L\)_._
_This is an algorithm which calculates the list of primary components of the zero ideal of \(R\). It is in ZPP._
Proof.: The correctness of this algorithm is shown in [21], Algorithm 5.2.11. In particular, it is proved there that \(t=d_{1}+\cdots+d_{u}\) throughout the course of this algorithm. Therefore the number of iterations of steps (3)-(6) is bounded by \(s\) which in turn is bounded by the vector space dimension \(n\) of \(R\). Algorithm 5.2.7 in step (4) involves computing a basis for the kernel of a matrix over \(K\) and can therefore be done in polynomial time. As discussed in the proof of Algorithm 3.3, the minimal polynomial in step (5) can also be computed in polynomial time. Computing its factorization is in ZPP by Remark 3.2.
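Step (1) only needs linear algebra over \(\mathbb{F}_{p}\). A small self-contained sketch (the toy algebra \(R=\mathbb{F}_{5}[t]/\langle t^{2}-t\rangle\), which has the two primary components \(\langle t\rangle\) and \(\langle t-1\rangle\), is our own choice):

```python
p = 5

def rank_mod_p(rows, p):
    """Rank of an integer matrix over F_p via Gaussian elimination."""
    A = [[x % p for x in row] for row in rows]
    rank, col, n_rows, n_cols = 0, 0, len(rows), len(rows[0])
    while rank < n_rows and col < n_cols:
        pivot = next((r for r in range(rank, n_rows) if A[r][col]), None)
        if pivot is None:
            col += 1
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        inv = pow(A[rank][col], -1, p)
        A[rank] = [(inv * x) % p for x in A[rank]]
        for r in range(n_rows):
            if r != rank and A[r][col]:
                f = A[r][col]
                A[r] = [(x - f * y) % p for x, y in zip(A[r], A[rank])]
        rank, col = rank + 1, col + 1
    return rank

# In R = F_5[t]/<t^2 - t> with basis (1, t) we have t^5 = t, so the
# Frobenius endomorphism is represented by the identity matrix.
n, M_frob = 2, [[1, 0], [0, 1]]
s = n - rank_mod_p([[M_frob[i][j] - (i == j) for j in range(n)]
                    for i in range(n)], p)
print(s)   # 2, the number of primary components of <0>
```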
Using this algorithm, we can now calculate the maximal ideals of explicitly given \(0\)-dimensional algebras.
**Corollary 3.6**.: **(Complexity of Computing the Maximal Ideals)**
_Let \(K\) be the field of rational numbers or a finite prime field, and let \(R\) be an explicitly given 0-dimensional \(K\)-algebra._
1. _If_ \(K=\mathbb{Q}\)_, then the maximal ideals of_ \(R\) _can be computed in polynomial time._
2. _If_ \(K=\mathbb{F}_{p}\)_, then the maximal ideals of_ \(R\) _can be computed in ZPP._
Proof.: Using Algorithm 3.3, we compute the nilradical \(\operatorname{Rad}(0)\) of \(R\) in polynomial time. This algorithm also yields an explicit presentation of \(R/\operatorname{Rad}(0)\). If \(K=\mathbb{Q}\), we then apply Algorithm 7.2 from [23] to \(R/\operatorname{Rad}(0)\) and obtain the maximal ideals of \(R\) in polynomial time. Similarly, in the case \(K=\mathbb{F}_{p}\), we apply Algorithm 3.5 to \(R/\operatorname{Rad}(0)\).
## 4. Computing the Associated Primes of Finite \(\mathbb{Z}\)-Algebras
In this section we let \(R\) be a finite \(\mathbb{Z}\)-algebra. We show that the associated primes of \(R\) can be computed in ZPPIF, if \(R\) is explicitly given. Note that the associated primes of \(R\) are given by the primary decomposition of its nilradical \(\operatorname{Rad}(0)\). Algorithms for computing the primary decomposition of ideals in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\) date back to 1978 [4, 28]. More recently, Pfister et al. [25] presented a slightly different approach. Inspired by this algorithm, we gave an efficient algorithm in [18] for computing the primary decomposition of ideals \(I\subseteq\mathbb{Z}[x_{1},\ldots,x_{n}]\) such that \(P/I\) is a finite \(\mathbb{Z}\)-algebra. Let us now apply this approach to explicitly given finite \(\mathbb{Z}\)-algebras.
The following lemma is used to split the computation into computing the associated primes of \(0\)-dimensional ideals in \(\mathbb{Q}[x_{1},\ldots,x_{n}]\) and \(\mathbb{F}_{p}[x_{1},\ldots,x_{n}]\).
**Lemma 4.1**.: _Let \(R=P/I\) be an explicitly given finite \(\mathbb{Z}\)-algebra and let \(\tau\) be its torsion exponent._
1. _The ideal_ \((I:\langle\tau\rangle)/I\) _is the torsion subgroup of_ \(R^{+}\)_._
2. _We have_ \(I=(I:\langle\tau\rangle)\cap(I+\langle\tau\rangle)\)_._
3. _If_ \(R\) _is finite, then_ \(I\cap\mathbb{Z}=\langle\tau\rangle\)_._
Proof.: Part (a) follows immediately from the definition of the exponent of the torsion subgroup of \(R^{+}\). It then implies \(I:\langle\tau\rangle=I:\langle\tau\rangle^{\infty}\), which means that claim (b) is a standard lemma in commutative algebra. To prove (c), we note that the ring \(P/I\) is finite if and only if there exists a positive integer \(k\in\mathbb{Z}\) with \(I\cap\mathbb{Z}=\langle k\rangle\). If such a number \(k\) exists, we have \(k\cdot f=0\) for all \(f\in R\), and therefore \(\tau\mid k\). But we also have \(\tau\cdot 1=0\) in \(R=P/I\), and hence \(\tau\in I\). This implies \(k\mid\tau\), and thus \(k=\tau\).
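The torsion exponent \(\tau\) itself can be read off a Smith normal form of a relation matrix of \(R^{+}\), since it is the largest non-zero invariant factor. A minimal sketch, where SymPy and the concrete relation matrix are assumptions of this illustration:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Relations 2*g_0 = 0 and 6*g_1 = 0 for the module Z/2 x Z/6 x Z.
rels = Matrix([[2, 0, 0], [0, 6, 0]])
snf = smith_normal_form(rels, domain=ZZ)
invariant_factors = [snf[i, i] for i in range(min(snf.shape)) if snf[i, i]]
print(invariant_factors, invariant_factors[-1])   # [2, 6], tau = 6
```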
The associated primes of \(R\) can now be computed as described in the following algorithm.
**Algorithm 4.2**.: **(Computing the Associated Primes)**
_Let \(R=P/I\) be an explicitly given finite \(\mathbb{Z}\)-algebra. Consider the following sequence of instructions._
1. _Set_ \(L:=[\;]\)_._
2. _Compute the torsion exponent_ \(\tau\) _of_ \(R^{+}\)_._
3. _if the rank of_ \(R\) _is not zero_ _then___
4. _Compute the prime components_ \(\overline{\mathfrak{p}}_{1}\cap\cdots\cap\overline{\mathfrak{p}}_{\ell}\) _of_ \(I\,\mathbb{Q}[x_{1},\ldots,x_{n}]\)_._
5. _For_ \(j=1,\ldots,\ell\)_, compute_ \(\overline{\mathfrak{p}}_{j}\cap P\) _and append these ideals to_ \(L\)_._
6. _Recursively apply the algorithm to_ \(I+\langle\tau\rangle\) _and obtain the set_ \(M\)_._
7. _Compute_ \(J:=\bigcap_{\mathfrak{p}\,\in\,L}\mathfrak{p}\)_._
8. _Remove all ideals in_ \(M\) _that contain_ \(J\)_._
9. _return___\(L\cup M\)__
10. _else___
11. _Compute all prime factors_ \(p_{1},\ldots,p_{r}\) _of_ \(\tau\)_._
12. _Set_ \(M:=[\;]\)_._
13. _for_ \(i=1,\ldots,r\) _do___
14. _Compute the prime components_ \(\overline{\mathfrak{p}}_{1}\cap\cdots\cap\overline{\mathfrak{p}}_{m}\) _of_ \(I\,\mathbb{F}_{p_{i}}[x_{1},\ldots,x_{n}]\)_._
15. _Compute the preimages_ \(\mathfrak{p}_{j}\) _of_ \(\overline{\mathfrak{p}}_{j}\) _in_ \(P\) _and append them to_ \(M\)_._
16. _end for___
17. _return___\(M\)__
18. _end if___
_This is an algorithm which computes the associated primes \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{k}\) of \(R\). It is in ZPPIF._
Proof.: The correctness of the algorithm follows from Lemma 4.1 and Proposition 4.7 in [18]. Let us analyze the complexity of each of its steps. The torsion exponent and the rank of \(R\) can be computed in polynomial time in \(\beta\) using Proposition 2.8.a, and the bit complexity of the torsion exponent is polynomially bounded by Lemma 2.5. Since \(P/I\) is a finite \(\mathbb{Z}\)-algebra, the ideals \(I\mathbb{Q}[x_{1},\ldots,x_{n}]\) and \(I\mathbb{F}_{p}[x_{1},\ldots,x_{n}]\) are \(0\)-dimensional and therefore define \(0\)-dimensional \(\mathbb{Q}\)- and \(\mathbb{F}_{p}\)-algebras, respectively. Their vector space dimension is less than or equal to the number of generators of \(R\), and their structure constants are given by the structure constants of \(R\). Thus we obtain the maximal components in lines (4) and (14) in polynomial and probabilistic polynomial time, respectively, by applying the algorithms in Section 3. The intersection of the prime ideals in line (7) can be computed in polynomial time by Proposition 2.8.e. Finally, Proposition 2.8.c allows us to check the containment of ideals in line (8) in polynomial time.
In summary, all steps except for the integer prime factorization in line (11) are in ZPP.
Note that, since the exponent \(\tau\) of \(R\) is the largest invariant factor of \(R\), all other invariant factors of \(R\) are divisors of \(\tau\). This means that we might already have a partial factorization of \(\tau\).
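In practice line (11) is therefore the only place where an integer factorization oracle is invoked, and such partial splittings come for free. A minimal sketch, with `sympy.factorint` standing in for the oracle and the numerical values chosen purely for illustration:

```python
from sympy import factorint

tau = 360
print(factorint(tau))   # {2: 3, 3: 2, 5: 1}, i.e. p_1, p_2, p_3 = 2, 3, 5

# Any other invariant factor divides tau and hence splits it for free:
d = 30                  # an illustrative smaller invariant factor
print(d, tau // d)      # 30 * 12 = 360, a non-trivial partial factorization
```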
## 5. Computing Primitive Idempotents
In this section our goal is to compute the primitive idempotents of an explicitly given finite \(\mathbb{Z}\)-algebra \(R\). We describe a variant of the method presented in Section 4 of [18] and analyze its complexity. We will use the fact that the idempotents modulo a nilpotent ideal can be lifted. The following algorithm is based on Lemma 3.2.1 in [8].
**Algorithm 5.1**.: **(Lifting Idempotents)**
_Let \(R\) be an explicitly given finite \(\mathbb{Z}\)-algebra, and let \(\operatorname{Rad}(0)\subseteq R\) be its nilradical. Let \(e\in R\) be such that \(e^{2}\equiv e\mod\operatorname{Rad}(0)\). Consider the following instructions._
1. _Set_ \(h=e\)_._
2. _Compute_ \(f=h+r-2hr\) _where_ \(r=h^{2}-h\)_._
3. _Represent_ \(f^{2}-f\) _as a_ \(\mathbb{Z}\)_-linear combination_ \(f^{2}-f=c_{0}g_{0}+\cdots+c_{n}g_{n}\) _using the structure constants._
4. _If_ \((c_{0},\ldots,c_{n})\in\operatorname{Syz}_{\mathbb{Z}}(\mathcal{G})\)_, i.e., if_ \(f^{2}-f=0\) _in_ \(R\)_, return_ \(f\) _and stop. Otherwise set_ \(h=f\) _and continue with step (2)._
_This is an algorithm which computes an idempotent \(f\in R\) such that \(f\equiv e\mod\operatorname{Rad}(0)\). Furthermore, if \(e\) is a primitive idempotent modulo \(\operatorname{Rad}(0)\), then \(f\) is a primitive idempotent in \(R\)._
Proof.: The algorithm terminates since \(\operatorname{Rad}(0)\) is a nilpotent ideal. To prove the correctness, we show that if \(h\) is an idempotent modulo \(\operatorname{Rad}(0)^{2^{k}}\), then \(f\) is an idempotent modulo \(\operatorname{Rad}(0)^{2^{k+1}}\). By assumption, we have \(h^{2}-h\in\operatorname{Rad}(0)^{2^{k}}\), and therefore \(h^{2}r-hr=(h^{2}-h)r=r^{2}\in\operatorname{Rad}(0)^{2^{k+1}}\). Then we get
\[f^{2}\;\equiv\;h^{2}+2hr-4h^{2}r\;\equiv\;h+r+2hr-4hr\;\equiv\;f\mod \operatorname{Rad}(0)^{2^{k+1}}\]
and \(f\equiv h\mod\operatorname{Rad}(0)^{2^{k}}\). Now assume that \(e\mod\operatorname{Rad}(0)\) is a primitive idempotent and that \(f=e^{\prime}+e^{\prime\prime}\) can be written as the sum of two orthogonal idempotents. Then we have \(e^{\prime}\in\operatorname{Rad}(0)\) or \(e^{\prime\prime}\in\operatorname{Rad}(0)\), since \(e\) is primitive. But \(\operatorname{Rad}(0)\) consists only of nilpotent elements, and the only idempotent which is also nilpotent is zero. Therefore \(e^{\prime}\) or \(e^{\prime\prime}\) has to be zero.
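As a concrete illustration, consider the toy ring \(R=\mathbb{Z}/12\mathbb{Z}\) with \(\operatorname{Rad}(0)=\langle 6\rangle\). The class \(e=3\) is idempotent modulo \(\langle 6\rangle\) but not in \(R\), and a single iteration of the algorithm already produces the idempotent \(9\). The sketch below (plain Python, example our own) performs the iteration of steps (2)-(4), with the syzygy membership test replaced by arithmetic modulo \(12\):

```python
def lift_idempotent(e, mod):
    """Lift e, an idempotent modulo the nilradical, to an idempotent of Z/mod."""
    h = e
    while True:
        r = (h * h - h) % mod
        if r == 0:                        # h is idempotent in R: stop
            return h
        h = (h + r - 2 * h * r) % mod     # step (2) of Algorithm 5.1

print(lift_idempotent(3, 12))   # 9, and indeed 9^2 = 81 = 9 (mod 12)
```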
The number of iterations necessary to lift the idempotents can be bounded as follows.
**Proposition 5.2**.: _Let \(R\) be a finite \(\mathbb{Z}\)-algebra of rank \(r\), and let \(T\) be the torsion subgroup of \(R^{+}\)._
1. _We have_ \(\operatorname{Rad}(0)^{m}=\{0\}\) _for_ \(m=r+\operatorname{length}_{\mathbb{Z}}(T)\)_._
2. _Let_ \(p_{1}^{e_{1}},\ldots,p_{s}^{e_{s}}\) _be the elementary divisors of_ \(R\)_. Then Algorithm_ 5.1 _terminates after at most_ \(\lceil\log_{2}(r+e_{1}+\cdots+e_{s})\rceil\) _steps._
Proof.: To prove (a), note that an element \(f\in\operatorname{Rad}(0)\) yields a nilpotent \(\mathbb{Z}\)-linear endomorphism \(\varphi\) of \(R\) given by multiplication with \(f\). One therefore obtains a chain
\[\operatorname{Ker}(\varphi)\subseteq\operatorname{Ker}(\varphi^{2})\subseteq\cdots\subseteq R.\]
Now we show that if \(\operatorname{rank}(\operatorname{Ker}(\varphi^{i}))<r\), then \(\operatorname{rank}(\operatorname{Ker}(\varphi^{i}))<\operatorname{rank}(\operatorname{Ker}(\varphi^{i+1}))\). Note that \(\operatorname{rank}(\operatorname{Ker}(\varphi^{i}))=\operatorname{rank}(\operatorname{Ker}(\varphi^{i+1}))\) if and only if \(\operatorname{Ker}(\varphi^{i+1})/\operatorname{Ker}(\varphi^{i})\) is a torsion module. Let \(\operatorname{Ker}(\varphi^{i+1})/\operatorname{Ker}(\varphi^{i})\) be a torsion module. We prove by induction that this implies that \(\operatorname{Ker}(\varphi^{i+k+1})/\operatorname{Ker}(\varphi^{i+k})\) is a torsion module for all \(k\in\mathbb{N}\). For \(k=0\) the claim is true by assumption. Now assume that \(\operatorname{Ker}(\varphi^{i+k})/\operatorname{Ker}(\varphi^{i+k-1})\) is a torsion module, and let \(x\in\operatorname{Ker}(\varphi^{i+k+1})\). Then we have \(\varphi(x)\in\operatorname{Ker}(\varphi^{i+k})\), and there exists a non-zero \(c\in\mathbb{Z}\) with \(c\varphi(x)\in\operatorname{Ker}(\varphi^{i+k-1})\). Hence we obtain \(cx\in\operatorname{Ker}(\varphi^{i+k})\). Since \(\varphi\) is nilpotent, we have \(\operatorname{Ker}(\varphi^{N})=R\) for some \(N\), so the ranks cannot stall below \(r\) and must strictly increase until they reach \(r\).
Thus we conclude that \(\operatorname{Ker}(\varphi^{r})\) has rank \(r\), and therefore that \(\varphi^{r}(R)\) is a submodule of \(T\). This forces \(\varphi^{r+\operatorname{length}_{\mathbb{Z}}(T)}=0\).
Part (b) follows immediately from (a), since the length of the torsion is given by the number of elementary divisors \(p_{i}^{e_{i}}\) counted with multiplicity \(e_{i}\).
In order to compute the primitive idempotents of \(R=P/I\), we can now use Algorithm 5.1 to lift the idempotents of \(R/\operatorname{Rad}(0)\). For the task of finding the primitive idempotents of \(R/\operatorname{Rad}(0)\), we consider the minimal associated primes of \(I\). Let us recall the following remark from [18].
**Remark 5.3**.: Let \(T\) be a commutative, unitary, noetherian ring.
1. Given an idempotent \(e\in T\), the set \(\mathcal{V}(1-e)\) is both open and closed in \(\operatorname{Spec}(T)\).
2. If \(U\subseteq\operatorname{Spec}(T)\) is a subset which is both open and closed, there exists a unique idempotent \(e\in T\) such that in \(T_{\mathfrak{p}}/\mathfrak{p}T_{\mathfrak{p}}\) we have \(\bar{e}=1\) for \(\mathfrak{p}\in U\) and \(\bar{e}=0\) otherwise.
3. The correspondence given in (a) and (b) is 1-1. The primitive idempotents correspond uniquely to the connected components of \(\operatorname{Spec}(T)\).
Therefore, in order to compute the primitive idempotents of \(R/\operatorname{Rad}(0)\), we need to calculate the connected components of \(\operatorname{Spec}(R/\operatorname{Rad}(0))\). Since the ring \(R/\operatorname{Rad}(0)\) might have infinitely many prime ideals, we use the following approach to describe the connected components of \(\operatorname{Spec}(R/\operatorname{Rad}(0))\).
**Definition 5.4**.: Let \(R\) be a finite \(\mathbb{Z}\)-algebra, and let \(\operatorname{minAss}(R)\) be the set of minimal associated prime ideals of \(R\). A maximal subset of \(\operatorname{minAss}(R)\) such that all corresponding prime ideals are part of the same connected component of \(\operatorname{Spec}(R)\) is called a **connected component** of \(\operatorname{minAss}(R)\).
Since \(R\) is a finite \(\mathbb{Z}\)-algebra, the associated primes of \(R\) are either of height \(n\) and do not contain a non-zero integer, or they are of height \(n+1\) and hence maximal ideals. Now Algorithm 5.5 determines the connected components of \(\operatorname{minAss}(R)\).
**Algorithm 5.5**.: **(Computing the Connected Components of \(\operatorname{minAss}(R)\))**
_Let \(R\) be an explicitly given finite \(\mathbb{Z}\)-algebra. Consider the following sequence of instructions._
1. _Compute the set of minimal associated prime ideals of_ \(R\)_. Let_ \(\mathfrak{m}_{1},\ldots,\mathfrak{m}_{\ell}\) _be the minimal associated prime ideals of height_ \(n+1\)_, and let_ \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{m}\) _be the minimal associated primes ideals of height_ \(n\)_._
2. _Let_ \(M=\{\{\mathfrak{p}_{1}\},\ldots,\{\mathfrak{p}_{m}\}\}\)_._
3. _While there are sets_ \(C,C^{\prime}\in M\) _with_ \(C\neq C^{\prime}\) _such that there exist_ \(\mathfrak{p}_{i}\in C\) _and_ \(\mathfrak{p}_{j}\in C^{\prime}\) _with_ \(\mathfrak{p}_{i}+\mathfrak{p}_{j}\neq\langle 1\rangle\)_, replace_ \(C\) _and_ \(C^{\prime}\) _in_ \(M\) _by_ \(C\cup C^{\prime}\)_._
4. _For every ideal_ \(\mathfrak{m}_{i}\)_, append the set_ \(\{\mathfrak{m}_{i}\}\) _to_ \(M\)_._
5. _Return_ \(M\)_._
_This is an algorithm which computes a set \(M=\{C_{1},\ldots,C_{\nu}\}\) such that \(C_{1},\ldots,C_{\nu}\) are the connected components of \(\operatorname{minAss}(R)\). It is in ZPPIF._
Proof.: The following observations show the correctness of this algorithm. An associated prime ideal of height \(n+1\) is maximal and therefore forms its own connected component. Two prime ideals \(\mathfrak{p}_{i}\) and \(\mathfrak{p}_{j}\) of height \(n\) are directly linked if and only if there is a maximal ideal \(\mathfrak{m}\) containing both \(\mathfrak{p}_{i}\) and \(\mathfrak{p}_{j}\), which is equivalent to \(\mathfrak{p}_{i}+\mathfrak{p}_{j}\neq\langle 1\rangle\). Belonging to the same connected component is the transitive closure of this relation, and this closure is exactly what the merging loop in step (3) computes.
Let us now show that the algorithm is in ZPPIF. The associated primes of \(R\) can be computed in ZPPIF using Algorithm 4.2. Then only two types of computations remain. Namely, we need to decide whether the sum of two primes is equal to \(\langle 1\rangle\) and whether one prime ideal is contained in another. Both of these tasks can be achieved in polynomial time by Proposition 2.8.
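The merging in step (3) is an ordinary connected-components computation. A minimal sketch in Python, where the prime ideals are opaque labels and the predicate `linked` stands in for the test \(\mathfrak{p}_{i}+\mathfrak{p}_{j}\neq\langle 1\rangle\) (both are assumptions of this illustration):

```python
def connected_components(primes, linked):
    components = [{p} for p in primes]
    merged = True
    while merged:                    # step (3): merge while possible
        merged = False
        for i in range(len(components)):
            for j in range(i + 1, len(components)):
                if any(linked(p, q)
                       for p in components[i] for q in components[j]):
                    components[i] |= components.pop(j)
                    merged = True
                    break
            if merged:
                break
    return components

# Toy incidence: p1 and p2 lie under a common maximal ideal, p3 does not.
links = {('p1', 'p2'), ('p2', 'p1')}
print(connected_components(['p1', 'p2', 'p3'],
                           lambda p, q: (p, q) in links))
# [{'p1', 'p2'}, {'p3'}]
```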
A more general version of this algorithm which computes the connected components of a set of (non-minimal) associated prime ideals is given in Section 4 of [18]. From the connected components of \(\mathrm{minAss}(R)\) we can now derive the primitive idempotents of \(R\).
**Algorithm 5.6**.: **(Computing the Primitive Idempotents)**
_Let \(R\) be an explicitly given finite \(\mathbb{Z}\)-algebra. The following steps define an algorithm which computes the primitive idempotents of \(R\) in ZPPIF._
1. _Compute the connected components_ \(C_{1},\ldots,C_{\nu}\) _of_ \(\mathrm{minAss}(R)\) _using Alg._ 5.5_._
2. _Compute_ \(J=\bigcap_{\mathfrak{p}\in\mathrm{minAss}(R)}\mathfrak{p}\)_._
3. _For_ \(i=1,\ldots,\nu\)_, compute_ \(J_{i}=\bigcap_{\mathfrak{p}\in C_{i}}\mathfrak{p}\)_._
4. _Compute the preimages_ \(q_{1},\ldots,q_{\nu}\) _of_ \(e_{1},\ldots,e_{\nu}\) _under the canonical_ \(\mathbb{Z}\)_-linear map_ \(R/J\to R/J_{1}\times\cdots\times R/J_{\nu}\)_._
5. _Using Algorithm_ 5.1_, lift the idempotents_ \(q_{1},\ldots,q_{\nu}\) _of_ \(R/J\) _to idempotents of_ \(R\) _and return them._
Proof.: For a proof of the correctness of this algorithm, we again refer to Section 4 of [18]. Let us analyze the complexity of this algorithm. Step (1) can be performed in ZPPIF using Algorithm 5.5. The remaining steps can be performed in polynomial time by Proposition 2.8, Algorithm 2.9, and Algorithm 5.1. The number of iterations necessary to perform Algorithm 5.1 has a polynomial bound by Proposition 5.2.
## 6. Explicit \(\mathbb{Z}\)-Algebra Presentations and Strong Grobner Bases
In the previous sections we made the assumption that a \(\mathbb{Z}\)-algebra is explicitly given, i.e., given by \(\mathbb{Z}\)-module generators, their linear relations, and structure constants. This information can be encoded in an ideal \(I\subseteq P=\mathbb{Z}[x_{1},\ldots,x_{n}]\) such that \(R=P/I\). More precisely, let
\[(*)\qquad I=\langle x_{i}x_{j}-\sum_{k=0}^{n}c_{ijk}x_{k},\ \sum_{k=0}^{n}a_{\ell k}x_{k}\ \mid\ i,j=1,\ldots,n,\ \ell=1,\ldots,m\rangle\]
be the ideal in \(P=\mathbb{Z}[x_{1},\ldots,x_{n}]\) encoding the information of an explicitly given \(\mathbb{Z}\)-algebra \(R=P/I\) as in Remark 2.1, where we use the convention \(x_{0}:=1\).
In this section we show that this representation of \(R\) is polynomial time equivalent to computing a strong Grobner basis of \(I\). This notion is defined as follows.
**Definition 6.1**.: Let \(I\) be an ideal in \(P=\mathbb{Z}[x_{1},\ldots,x_{n}]\), and let \(\sigma\) be a term ordering on \(\mathbb{T}^{n}\). A set of polynomials \(G=\{g_{1},\ldots,g_{r}\}\) in \(I\) is called a **strong \(\sigma\)-Grobner basis** of \(I\) if, for every polynomial \(f\in I\setminus\{0\}\), there exists an index \(i\in\{1,\ldots,r\}\) such that \(\mathrm{LM}_{\sigma}(f)\) is a multiple of \(\mathrm{LM}_{\sigma}(g_{i})\).
In the first subsection we show that a presentation \(R=P/I\) with \(I\) as above allows us to compute a strong Grobner basis of \(I\) in polynomial time. In the second subsection we prove that, conversely, if \(R=P/I\) and we know a strong Grobner basis of \(I\), then we can calculate a presentation as in Remark 2.1 in polynomial time.
### Computing a Strong Grobner Basis of an Explicitly Given \(\mathbb{Z}\)-Algebra
Let us begin with the task of computing a strong Grobner basis for an ideal \(I\) as above. More generally, consider the following situation. Let \(P=\mathbb{Z}[x_{1},\ldots,x_{n}]\), and let \(I_{1},\ldots,I_{s}\subseteq P\) be ideals such that \(P/I_{j}\) is a finite \(\mathbb{Z}\)-algebra for \(j=1,\ldots,s\). Our goal is to compute a strong Grobner basis of their intersection \(I_{1}\cap\cdots\cap I_{s}\). For \(0\)-dimensional ideals in a polynomial ring over a field, an intersection like this
can be computed using the generalized Buchberger-Moller algorithm (see [1]). In the following we extend this algorithm to ideals in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\).
Before formulating this generalization, we need to address the task of representing the residue classes in \(P/I_{j}\) using suitable systems of generators.
**Remark 6.2**.: Let \(I\) be an ideal in \(P\) such that \(P/I\) is a finite \(\mathbb{Z}\)-algebra, and let \(\pi:\,P\longrightarrow P/I\) be the canonical epimorphism. We need to be able to express the image \(\pi(f)\) of an element \(f\in P\) as a linear combination of some system of \(\mathbb{Z}\)-module generators of \(P/I\).
Let \((t_{1},\ldots,t_{\mu})\) be a tuple of terms such that \(\overline{\mathcal{O}}=(\overline{t}_{1},\ldots,\overline{t}_{\mu})\) generates \(P/I\) as a \(\mathbb{Z}\)-module. Then a tuple \((a_{1},\ldots,a_{\mu})\in\mathbb{Z}^{\mu}\) such that \(\pi(f)=a_{1}\overline{t}_{1}+\cdots+a_{\mu}\overline{t}_{\mu}\) is called a **representation vector** of \(f\) with respect to \(\mathcal{O}\). Representation vectors are in general not unique, but can be calculated efficiently in several settings.
* If \(I\) is given as in \((*)\) and \(f\in P\), we can replace products \(x_{i}x_{j}\) in \(f\) repeatedly by linear combinations \(\sum\limits_{k=0}^{n}c_{ijk}x_{k}\) until the resulting polynomial \(g\) is linear. Then \(\pi(f)=\pi(g)\) is a \(\mathbb{Z}\)-linear combination of the residue classes of the terms in \(\{1,x_{1},\ldots,x_{n}\}\).
* If \(I\) is given by a strong Grobner basis with respect to a term ordering \(\sigma\), we can use \(\mathcal{O}_{\sigma}(I)=\mathbb{T}^{n}\setminus L\), where \(L=\{m\in\operatorname{LM}_{\sigma}(I)\mid\operatorname{LC}_{\sigma}(m)=1\}\), and represent \(\pi(f)\) for an element \(f\in P\) by the residue class of its normal form \(\operatorname{NF}_{\sigma,I}(f)\) which is a \(\mathbb{Z}\)-linear combination of the terms in \(\mathcal{O}_{\sigma}(I)\).
In either case, if we have an implementation of a function that represents \(\pi(f)\) for every polynomial \(f\in P\) in the form \(\pi(f)=a_{1}\overline{t}_{1}+\cdots+a_{\mu}\overline{t}_{\mu}\) then we write \(\operatorname{RV}_{\mathcal{O}}(f)=(a_{1},\ldots,a_{\mu})\) for the corresponding representation vector.
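The rewriting in case (a) is straightforward to implement. In the following sketch, monomials are exponent tuples, the index \(k=0\) refers to the basis element \(1\), and the structure constants describe the toy algebra \(\mathbb{Z}[x_{1}]/\langle x_{1}^{2}-2\rangle\); all of these choices are our own illustrative assumptions.

```python
def representation_vector(poly, c, n):
    """poly: dict mapping exponent tuples to integer coefficients.
    Returns (a_0, ..., a_n) with pi(f) = a_0*1 + a_1*x_1 + ... + a_n*x_n."""
    poly = dict(poly)
    while True:
        expo = next((e for e in poly if sum(e) >= 2), None)
        if expo is None:
            break                                   # polynomial is linear
        coef = poly.pop(expo)
        i = next(v for v in range(n) if expo[v] > 0)
        rest = list(expo); rest[i] -= 1             # split off x_i ...
        j = next(v for v in range(n) if rest[v] > 0)
        rest[j] -= 1                                # ... and x_j
        for k in range(n + 1):                      # x_i*x_j -> sum_k c_ijk x_k
            if c[i][j][k]:
                e = list(rest)
                if k > 0:
                    e[k - 1] += 1
                e = tuple(e)
                poly[e] = poly.get(e, 0) + coef * c[i][j][k]
    lin = [poly.get(tuple(1 if v == m else 0 for v in range(n)), 0)
           for m in range(n)]
    return [poly.get((0,) * n, 0)] + lin

# Z[x_1]/<x_1^2 - 2>: the single relation x_1*x_1 = 2*1, i.e. c_110 = 2.
c = [[(2, 0)]]
print(representation_vector({(3,): 1, (1,): 1}, c, 1))  # x^3 + x = 3x -> [0, 3]
```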
Now we can formulate the generalized Buchberger-Moller algorithm for ideals in \(P=\mathbb{Z}[x_{1},\ldots,x_{n}]\).
**Algorithm 6.3**.: **(Intersecting Ideals in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\))**
_Let \(I_{1},\ldots,I_{s}\) be ideals in \(P\) such that \(P/I_{i}\) is a finite \(\mathbb{Z}\)-algebra for \(i=1,\ldots,s\), and let \(\mathcal{O}_{i}=\{t_{i1},\ldots,t_{i\mu_{i}}\}\subseteq\mathbb{T}^{n}\) be a set of \(\mu_{i}\) terms such that their residue classes generate \(P/I_{i}\) as a \(\mathbb{Z}\)-module. Furthermore, we assume that a \(\mathbb{Z}\)-submodule \(U_{i}\) of \(\mathbb{Z}^{\mu_{i}}\) is given such that the \(\mathbb{Z}\)-linear map \(P/I_{i}\longrightarrow\mathbb{Z}^{\mu_{i}}/U_{i}\) defined by \(\overline{t}_{ij}\mapsto\overline{e}_{j}\) is an isomorphism. Finally, let \(\sigma\) be a degree compatible term ordering on \(\mathbb{T}^{n}\). Consider the following instructions._
1. _Start with empty lists_ \(G=[\,\,]\)_,_ \(\mathcal{O}=[\,\,]\)_,_ \(M=[\,\,]\)_, and a list_ \(L=[1]\)_._
2. _Let_ \(\mu=\mu_{1}+\cdots+\mu_{s}\)_, and let_ \(N=\{n_{1},\ldots,n_{k}\}\subseteq\mathbb{Z}^{\mu}\) _be such that_ \(\mathbb{Z}^{\mu_{1}}/U_{1}\oplus\cdots\oplus\mathbb{Z}^{\mu_{s}}/U_{s}\cong\mathbb{Z}^{\mu}/\langle N\rangle\)_._
3. _If_ \(L\) _is empty, return the pair_ \([G,\mathcal{O}]\) _and stop. Otherwise, choose the power product_ \(t=\min_{\sigma}(L)\) _and remove it from_ \(L\)_._
4. _Compute the vector_ \(v=\operatorname{RV}_{\mathcal{O}_{1}}(t)\oplus\cdots\oplus\operatorname{RV}_{ \mathcal{O}_{s}}(t)\in\mathbb{Z}^{\mu}\)_._
5. _Let_ \(m_{1},\ldots,m_{\ell}\) _be the elements of_ \(M\)_. Compute a_ \(\mathbb{Z}\)_-basis_ \(B\) _in Hermite normal form of the set of solutions of the homogeneous linear equation_ \[vx_{0}-\sum_{i=1}^{\ell}m_{i}x_{i}-\sum_{i=\ell+1}^{k+\ell}n_{i-\ell}x_{i}=0\] _in the indeterminates_ \(x_{0},\ldots,x_{k+\ell}\)_._
6. _If it exists, let_ \((a_{i})\in\mathbb{Z}^{k+\ell+1}\) _be a basis element in_ \(B\) _with_ \(a_{0}\neq 0\)_. Append the polynomial_ \(a_{0}t-\sum\limits_{i=1}^{\ell}a_{i}t_{i}\) _to the list_ \(G\)_, where_ \(t_{i}\) _is the_ \(i\)_-th power product in the list_ \(\mathcal{O}\)_._
7. _If there exists no such solution, or if the first component_ \(a_{0}\) _of every solution is different from 1, append the vector_ \(v\) _to_ \(M\) _and the term_ \(t\) _to the list_ \(\mathcal{O}\)_._
8. _Add to_ \(L\) _those elements of_ \(\{x_{1}t,\ldots,x_{n}t\}\) _which are neither multiples of an element of_ \(L\) _nor of an element of_ \(\{\operatorname{LM}_{\sigma}(g)\mid g\in G\}\)_._
9. _Continue with step (3)._
_This is an algorithm which computes a pair \((G,\mathcal{O})\) such that \(G\) is a reduced strong \(\sigma\)-Grobner basis of \(I=\bigcap_{i=1}^{s}I_{i}\) and the residue classes of the elements in \(\mathcal{O}\) generate the \(\mathbb{Z}\)-module \(P/I\)._
Proof.: First we prove correctness using induction on the iterations of the algorithm. More precisely, we show that if the values of \(G\) and \(\mathcal{O}\) are correct at the start of an iteration then they are still correct at the end of the iteration.
If \(L\) is not empty then it contains a minimal element \(t\) with respect to \(\sigma\). So, at the start of each iteration the list \(G\) contains polynomials which can be extended to a minimal strong Grobner basis of the intersection and whose leading terms are \(\sigma\)-smaller than \(t\). Consider the case \(t>_{\sigma}1\). If \((a_{i})\in\mathbb{Z}^{k+\ell+1}\) is a solution of the linear system in step (5) with \(a_{0}\neq 0\), then \(a_{0}v-\sum_{i=1}^{\ell}a_{i}m_{i}\in\langle N\rangle\), and hence \(f=a_{0}t-\sum_{i=1}^{\ell}a_{i}t_{i}\in I_{1}\cap\cdots\cap I_{s}\). Since the basis of the solution space is in Hermite normal form, every other polynomial \(h\) in the intersection with \(\operatorname{LT}_{\sigma}(h)=t\) has to satisfy \(a_{0}t\mid\operatorname{LM}_{\sigma}(h)\), and \(f\) cannot be reduced further using the elements in \(G\). This means that a reduced strong Grobner basis of the intersection has to contain \(f\), and \(f\) is added to \(G\) in step (6).
If there exists no solution with non-zero first component, or the first component of all solutions is different from \(1\), there is no element \(g\) in \(G\) such that \(\operatorname{LM}_{\sigma}(g)\mid t\). Hence the term \(t\) has to be added to \(\mathcal{O}\).
Finally, the list \(L\) is updated such that its \(\sigma\)-smallest element is always the \(\sigma\)-smallest term greater than \(t\) and not divisible by the leading monomial of some element of \(G\). Since \(P/(I_{1}\cap\cdots\cap I_{s})\) is a finite \(\mathbb{Z}\)-module, there exists for every \(i\in\{1,\ldots,n\}\) a number \(\alpha_{i}\geq 1\) such that \(x_{i}^{\alpha_{i}}\in\operatorname{LT}_{\sigma}(I_{1}\cap\cdots\cap I_{s})\). Hence only a finite number of terms can be added to the list \(L\). This proves that the procedure terminates.
Let us apply this algorithm to an example.
**Example 6.4**.: Let \(\sigma=\texttt{DegRevLex}\), and consider the ideals \(I=\langle 2x-y,x^{2},y^{2},xy\rangle\) and \(J=\langle x^{2},y^{2},2\rangle\) in \(P=\mathbb{Z}[x,y]\). We have
\[\mathcal{O}_{I}=\mathbb{T}^{n}\setminus\langle x^{2},y^{2},xy\rangle=\{1,y,x \}\quad\text{and}\quad\mathcal{O}_{J}=\mathbb{T}^{n}\setminus\langle x^{2},y^ {2}\rangle=\{1,y,x,xy\}.\]
The following table shows how Algorithm 6.3 can be applied to compute a strong \(\sigma\)-Grobner basis of \(I\cap J\). The first five rows of this table correspond to the elements of \(N\). The algorithm considers the terms \(1,y,x,y^{2},xy,x^{2}\) in this order. Rows 6-11 in the table correspond to the representation vectors computed in step (4) of each iteration.
\[\begin{array}{c|ccccccc|l} & 1 & y & x & 1 & y & x & xy & \\ \hline & 0 & -1 & 2 & 0 & 0 & 0 & 0 & \\ & 0 & 0 & 0 & 2 & 0 & 0 & 0 & \\ & 0 & 0 & 0 & 0 & 2 & 0 & 0 & \\ & 0 & 0 & 0 & 0 & 0 & 2 & 0 & \\ & 0 & 0 & 0 & 0 & 0 & 0 & 2 & \\ \hline 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & \to G=[\ ] \\ y & 0 & 1 & 0 & 0 & 1 & 0 & 0 & \to G=[\ ] \\ x & 0 & 0 & 1 & 0 & 0 & 1 & 0 & \to G=[4x-2y] \\ y^{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \to G=[4x-2y,\ y^{2}] \\ xy & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \to G=[4x-2y,\ y^{2},\ 2xy] \\ x^{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \to G=[4x-2y,\ y^{2},\ 2xy,\ x^{2}] \end{array}\]
For instance, let us examine the \(5\)-th iteration, where the algorithm handles the term \(xy\). In step (5) of this iteration we solve the homogeneous linear system of equations given by rows \(1\)-\(8\) and row \(10\) of the table. This is because in the \(4\)-th iteration we did not add the representation vector to the set \(M\). The Hermite normal form of a basis of the solution space is given by
\[\begin{bmatrix}2&0&0&0&0&0&0&0&-1\\ 0&4&-2&0&-2&0&1&-2&0\end{bmatrix}.\]
Hence we append the element \(2xy\) to \(G\). Altogether, we obtain the strong \(\sigma\)-Grobner basis \(\{4x-2y,y^{2},2xy,x^{2}\}\) of \(I\cap J\).
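As a quick cross-check of this example (our own verification, not part of the algorithm), each element of the computed basis can be exhibited as an explicit combination lying in both ideals:

```python
from sympy import symbols, expand

x, y = symbols('x y')

# I = <2x - y, x^2, y^2, xy> and J = <x^2, y^2, 2>.
# Each pair below is (basis element, combination witnessing membership).
witnesses = [
    (4*x - 2*y, 2*(2*x - y)),   # in I via the generator 2x - y;
                                # in J because the cofactor 2 lies in J
    (y**2,      y**2),          # a generator of both I and J
    (2*x*y,     2*(x*y)),       # in I via xy; in J via the generator 2
    (x**2,      x**2),          # a generator of both I and J
]
assert all(expand(f - comb) == 0 for f, comb in witnesses)
print("all four basis elements lie in I and in J")
```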
If the \(\mathbb{Z}\)-algebras \(P/I_{i}\) determined by the ideals \(I_{i}\) are given as in Remark 2.1, then it is not necessary to compute their strong Grobner bases separately, as the following corollary shows.
**Corollary 6.5**.: **(Computing a Strong Grobner Basis)**
_Suppose that a finite \(\mathbb{Z}\)-algebra \(R\) is explicitly given as in Remark 2.1, and let \(I\) be the ideal in \(P\) such that \(R=P/I\) and \(I\) is of the form described in \((*)\). Let \(\sigma\) be a degree compatible term ordering on \(\mathbb{T}^{n}\). Then Algorithm 6.3 computes a strong \(\sigma\)-Grobner basis of \(I\) in polynomial time._
Proof.: As mentioned in Remark 6.2.a, the representation vector in step (4) of an element \(f\in P\) can be obtained by simplifying every product of indeterminates using the structure constants. The linear equation in step (5) can be solved in polynomial time by Remark 2.4. The claim then follows from the fact that the number of iterations is polynomially bounded by the number of generators of \(R\): since \(I\) contains the polynomials \(x_{i}x_{j}-\sum_{k=0}^{n}c_{ijk}x_{k}\), only terms of degree at most two are ever considered.
### Computing an Explicit Representation
Now let \(I\subseteq P\) be an ideal such that \(R=P/I\) is a finite \(\mathbb{Z}\)-algebra. Given a strong Grobner basis of \(I\) with respect to some term ordering \(\sigma\), our goal is to compute a representation of \(R\) as in Remark 2.1. The first step is to find a suitable system of \(\mathbb{Z}\)-module generators of \(R\).
**Proposition 6.6**.: **(Macaulay's Basis Theorem for Finite \(\mathbb{Z}\)-Algebras)**
_Let \(I\subseteq P\) be an ideal such that \(P/I\) is a finite \(\mathbb{Z}\)-algebra, let \(\sigma\) be a term ordering on \(\mathbb{T}^{n}\), and let \(L=\{m\in\operatorname{LM}_{\sigma}(I)\mid\operatorname{LC}_{\sigma}(m)=1\}\) be the set of all monic leading monomials of \(I\). Then the residue classes of the terms in \(\mathcal{O}_{\sigma}=\mathbb{T}^{n}\setminus L\) form a generating set of the \(\mathbb{Z}\)-module \(P/I\)._
Proof.: It suffices to show that the \(\mathbb{Z}\)-submodule \(Q=\sum_{t\in\mathcal{O}_{\sigma}}\mathbb{Z}(t+I)=\sum_{t\in\mathcal{O}_{\sigma}}\mathbb{Z}t+I\) of \(P\) is equal to \(P\). Suppose that \(Q\subsetneq P\), and let \(f\in P\setminus Q\) be a polynomial with minimal leading term. Then \(\operatorname{LT}_{\sigma}(f)\in\mathcal{O}_{\sigma}\) would imply \(f-\operatorname{LC}_{\sigma}(f)\operatorname{LT}_{\sigma}(f)\in P\setminus Q\), which would contradict the minimality of \(f\). Hence we have \(\operatorname{LT}_{\sigma}(f)\in L\). This means there exist a polynomial \(g\in I\) and a term \(t\in\mathbb{T}^{n}\) such that \(\operatorname{LC}_{\sigma}(g)=1\) and \(\operatorname{LT}_{\sigma}(f)=t\operatorname{LT}_{\sigma}(g)\). But then \(f-\operatorname{LC}_{\sigma}(f)tg\in P\setminus Q\) has a smaller leading term, again contradicting the minimality of \(f\).
The set \(\mathcal{O}_{\sigma}\) in this proposition can be determined from a strong Grobner basis of \(I\). Having obtained a tuple of \(\mathbb{Z}\)-module generators of \(P/I\), it remains to determine its relation module.
For every polynomial \(f\in P\), the Division Algorithm with respect to a strong \(\sigma\)-Grobner basis of \(I\) yields its normal form \(\operatorname{NF}_{\sigma,I}(f)=\sum_{i=1}^{\mu}a_{i}t_{i}\) with \(a_{i}\in\mathbb{Z}\) and \(t_{i}\in\mathcal{O}_{\sigma}(I)\). The canonical epimorphism \(\pi:\;P\longrightarrow P/I\) satisfies \(\pi(f)=\sum_{i=1}^{\mu}a_{i}\overline{t}_{i}\). Moreover, the generating set \(\{\overline{t}_{1},\ldots,\overline{t}_{\mu}\}\) of \(P/I\) yields a canonical surjective \(\mathbb{Z}\)-linear map \(\mathbb{Z}^{\mu}\longrightarrow P/I\) given by \(e_{i}\mapsto\overline{t}_{i}\). Collecting the coefficients of normal forms, we obtain a map \(\operatorname{RV}_{\mathcal{O}_{\sigma}}:\;P\longrightarrow\mathbb{Z}^{\mu}\) which sends \(f\) to \((a_{1},\ldots,a_{\mu})\).
Given a strong Grobner basis of the ideal \(I\), we can now compute the kernel of this map as follows.
**Algorithm 6.7**.: **(Computing a Module Presentation)**
_Let \(I\) be an ideal in \(P\), let \(\sigma\) be a term ordering on \(\mathbb{T}^{n}\), let \(G\) be a minimal strong \(\sigma\)-Grobner basis of \(I\), and let \(\{t_{1},\dots,t_{k}\}=\mathbb{T}^{n}\setminus L\), where \(L\) equals \(\{m\in\operatorname{LM}_{\sigma}(I)\mid\operatorname{LC}_{\sigma}(m)=1\}\) as in Proposition 6.6. Consider the following instructions._
1. _Start with an empty list_ \(U=[\;]\) _and_ \(\mathcal{O}=[t_{1},\dots,t_{k}]\)_._
2. _If_ \(\mathcal{O}\) _is empty, return the list_ \(U\) _and stop. Otherwise, choose a term_ \(t\) _in_ \(\mathcal{O}\) _and remove it from_ \(\mathcal{O}\)_._
3. _Find the smallest integer_ \(\ell>1\) _such that_ \(\ell\,s\in\operatorname{LM}_{\sigma}(G)\) _for some_ \(s\in\mathbb{T}^{n}\) _with_ \(s\mid t\)_. If no such integer exists, continue with step (2)._
4. _Let_ \(c\in\mathbb{Z}^{k}\) _be the coefficient vector representing_ \(\ell\,t\)_, and let_ \(d\in\mathbb{Z}^{k}\) _be the coefficient vector representing_ \(\operatorname{NF}_{\sigma,I}(\ell\,t)\) _with respect to_ \((t_{1},\dots,t_{k})\)_. Append_ \(c-d\) _to_ \(U\) _and continue with step (2)._
_This is an algorithm which computes a list of tuples \(U\subseteq\mathbb{Z}^{k}\) such that we have \(P/I\cong\mathbb{Z}^{k}/\langle U\rangle\)._
Proof.: By Proposition 6.6, the residue classes of the terms \(t_{1},\dots,t_{k}\) generate the \(\mathbb{Z}\)-module \(P/I\). Assume that \(t_{1}>_{\sigma}t_{2}>_{\sigma}\dots>_{\sigma}t_{k}\), and consider the \(\mathbb{Z}\)-module homomorphism
\[\varphi:\;\mathbb{Z}^{k}\longrightarrow P/I\quad\text{given by }(c_{1},\dots,c_{k})\mapsto c_{1}\overline{t}_{1}+\dots+c_{k}\overline{t}_{k}.\]
Clearly, we have \(\operatorname{Ker}(\varphi)\supseteq\langle U\rangle\). Assume that the converse containment does not hold. Then there exists a tuple \(c=(c_{1},\dots,c_{k})\in\operatorname{Ker}(\varphi)\) such that \(f=c_{1}t_{1}+\dots+c_{k}t_{k}\in I\) and \(c\notin\langle U\rangle\). Choose \(c\) with this property such that \(\operatorname{LT}_{\sigma}(f)\) is minimal. Since \(f\in I\), there exists a polynomial \(g\in G\) with \(\operatorname{LM}_{\sigma}(g)\mid\operatorname{LM}_{\sigma}(f)=c_{i}t_{i}\).
Notice that this implies \(\operatorname{LC}_{\sigma}(g)>1\). Otherwise, the term \(t_{i}\) would be divisible by \(\operatorname{LT}_{\sigma}(g)\), and this would imply \(t_{i}\in L\), a contradiction.
Consequently, there is an element \(d=(d_{1},\dots,d_{k})\in U\) with \(d_{1}=\dots=d_{i-1}=0\) and \(\ell d_{i}=c_{i}\) for some \(\ell\in\mathbb{Z}\). The tuple \(c-\ell d\) corresponds to a polynomial whose leading term is smaller than \(\operatorname{LT}_{\sigma}(f)\). This shows \(c-\ell d\in\langle U\rangle\), but then we get \(c\in\langle U\rangle\), in contradiction to our assumption. Hence we have the equality \(\operatorname{Ker}(\varphi)=\langle U\rangle\), and \(\varphi\) induces the inverse of the desired isomorphism.
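Continuing Example 6.4 as an illustration (the relation vectors below are our own computation): for \(G=\{4x-2y,\,y^{2},\,2xy,\,x^{2}\}\) the monic leading monomials are \(y^{2}\) and \(x^{2}\), so \(\mathcal{O}=(1,y,x,xy)\). Only \(t=x\) (with \(\ell=4\) and \(\operatorname{NF}_{\sigma,I}(4x)=2y\)) and \(t=xy\) (with \(\ell=2\) and \(\operatorname{NF}_{\sigma,I}(2xy)=0\)) contribute relations, and a Smith normal form then identifies the quotient as an abstract \(\mathbb{Z}\)-module.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Coordinates ordered as (1, y, x, xy).
U = Matrix([
    [0, -2, 4, 0],   # 4x - 2y = 0, from t = x  with l = 4
    [0,  0, 0, 2],   # 2xy     = 0, from t = xy with l = 2
])
print(smith_normal_form(U, domain=ZZ))
# diagonal (2, 2): P/(I n J) = Z^2 (+) Z/2 (+) Z/2 as a Z-module
```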
Given a strong Grobner basis of an ideal \(I\) as above, an explicit representation of \(P/I\) can now be obtained as follows.
**Corollary 6.8**.: **(Computing an Explicit Representation)**
_Let \(I\) be an ideal in \(P\) such that \(P/I\) is a finite \(\mathbb{Z}\)-algebra, let \(\sigma\) be a term ordering on \(\mathbb{T}^{n}\), and let \(G\) be a minimal strong \(\sigma\)-Grobner basis of \(I\). Consider the following instructions._
1. _Compute the set_ \(\{t_{1},\dots,t_{k}\}=\mathbb{T}^{n}\setminus L\)_, where the set_ \(L\) _equals_ \(\{m\in\operatorname{LM}_{\sigma}(I)\mid\operatorname{LC}_{\sigma}(m)=1\}\)_._
2. _Apply Algorithm_ 6.7 _to compute generators of a submodule_ \(V\subseteq\mathbb{Z}^{k}\) _such that_ \(P/I\cong\mathbb{Z}^{k}/V\)_._
3. _For_ \(i,j=1,\dots,k\)_, use the Division Algorithm with respect to_ \(G\) _to compute the normal form_ \(\operatorname{NF}_{\sigma,I}(t_{i}t_{j})=\sum_{\ell=1}^{k}c_{ij\ell}t_{\ell}\) _with_ \(c_{ij\ell}\in\mathbb{Z}\)_._
4. _Return the residue classes of_ \(t_{1},\dots,t_{k}\) _in_ \(P/I\)_, the generators of_ \(V\)_, and the coefficients_ \(c_{ij\ell}\) _for_ \(i,j,\ell=1,\dots,k\)_._
_This is an algorithm which computes an explicit representation of \(P/I\) in polynomial time in the bit complexity of the input \(G\)._
Proof.: The residue classes of \(t_{1},\dots,t_{k}\) generate \(P/I\) as a \(\mathbb{Z}\)-module by Proposition 6.6. Algorithm 6.7 then correctly computes \(V\) such that \(P/I\cong\mathbb{Z}^{k}/V\). Finally, we check that \(\operatorname{NF}_{\sigma,I}(t_{i}t_{j})\) is of the form given in step (3). Let \(s\) be a term not contained in \(\{t_{1},\dots,t_{k}\}\). Suppose that \(\operatorname{NF}_{\sigma,I}(t_{i}t_{j})\) contains a monomial \(ds\) with
\(d\in\mathbb{Z}\). Then we have \(ds\notin\mathrm{LM}_{\sigma}(I)\), and in particular \(s\notin L\), a contradiction. The time complexity of each step is clearly polynomial.
In conclusion, we can see that an explicit representation of a finite \(\mathbb{Z}\)-algebra \(R\) as in Remark 2.1 is equivalent to knowing a strong Grobner basis of an ideal \(I\) in \(P=\mathbb{Z}[x_{1},\ldots,x_{n}]\) such that \(R=P/I\). Traditionally, many of the algorithms presented in this paper were executed using the calculation of a strong Grobner basis. However, as there is no polynomial time bound for a suitable version of Buchberger's Algorithm, the complexity bounds shown here could not be obtained if \(R\) were only given via \(R=P/I\). As explicit representations of the type described in Remark 2.1 occur in many contexts (see for instance [18]), we hope that the algorithms and complexity bounds developed here may prove useful.
|
2307.12286 | Double-Active-IRS Aided Wireless Communication: Deployment Optimization
and Capacity Scaling | In this letter, we consider a double-active-intelligent reflecting surface
(IRS) aided wireless communication system, where two active IRSs are properly
deployed to assist the communication from a base station (BS) to multiple users
located in a given zone via the double-reflection links. Under the assumption
of fixed per-element amplification power for each active-IRS element, we
formulate a rate maximization problem subject to practical constraints on the
reflection design, elements allocation, and placement of active IRSs. To solve
this non-convex problem, we first obtain the optimal active-IRS reflections and
BS beamforming, based on which we then jointly optimize the active-IRS elements
allocation and placement by using the alternating optimization (AO) method.
Moreover, we show that given the fixed per-element amplification power, the
received signal-to-noise ratio (SNR) at the user increases asymptotically with
the square of the number of reflecting elements; while given the fixed number
of reflecting elements, the SNR does not increase with the per-element
amplification power when it is asymptotically large. Last, numerical results
are presented to validate the effectiveness of the proposed AO-based algorithm
and compare the rate performance of the considered double-active-IRS aided
wireless system with various benchmark systems. | Zhenyu Kang, Changsheng You, Rui Zhang | 2023-07-23T10:37:01Z | http://arxiv.org/abs/2307.12286v1 | # Double-Active-IRS Aided Wireless Communication: Deployment Optimization and Capacity Scaling
###### Abstract
In this letter, we consider a _double-active-intelligent reflecting surface_ (IRS) aided wireless communication system, where two active IRSs are properly deployed to assist the communication from a base station (BS) to multiple users located in a given zone via the double-reflection links. Under the assumption of fixed _per-element_ amplification power for each active-IRS element, we formulate a rate maximization problem subject to practical constraints on the reflection design, elements allocation, and placement of active IRSs. To solve this non-convex problem, we first obtain the optimal active-IRS reflections and BS beamforming, based on which we then jointly optimize the active-IRS elements allocation and placement by using the alternating optimization (AO) method. Moreover, we show that given the fixed per-element amplification power, the received signal-to-noise ratio (SNR) at the user increases asymptotically with the _square_ of the number of reflecting elements; while given the fixed number of reflecting elements, the SNR does not increase with the per-element amplification power when it is asymptotically large. Last, numerical results are presented to validate the effectiveness of the proposed AO-based algorithm and compare the rate performance of the considered double-active-IRS aided wireless system with various benchmark systems.
Intelligent reflecting surface (IRS), active IRS, double IRS, capacity scaling order.
## I Introduction
Intelligent reflecting surface (IRS) has emerged as a promising technology to smartly reconfigure the wireless radio propagation environment [1, 2]. Specifically, IRS is a low-cost meta-surface consisting of massive reflecting elements that can reflect incident signals with flexibly tuned phase shifts and/or amplitudes to enhance desired signal power or suppress undesired interference [3]. However, the conventional IRS with fully passive reflecting elements suffers severe product-distance path loss in practice, which significantly limits the power of IRS-reflected signals.
To address this issue, a new type of IRS, called _active_ IRS, has been recently proposed, which enables simultaneous signal reflection and amplification by using reflection-type amplifiers, hence more effectively compensating the severe path loss of passive IRS [4, 5]. Specifically, it was shown in [3, 6] that given the same IRS location, active IRS achieves a higher rate than passive IRS thanks to the appealing amplification gain. Besides, the authors in [7] showed that active IRS should be properly deployed between the transmitter and receiver to balance the trade-off between the signal and noise amplification, which is in sharp contrast to the case of passive IRS that should be deployed near the transmitter or receiver to minimize the cascaded channel path loss. However, existing works on active IRS have ignored the cooperation between them, which, however, has the potential to achieve higher channel capacity than the single-active-IRS system, as shown in the case of passive IRS [8, 9, 10]. Generally speaking, the design for double-active-IRS aided systems is more complicated than the single-active-IRS counterpart, as elaborated next. First, the active-IRS placement needs to be carefully devised to balance the path loss of different IRS-related channels. Second, it is necessary to properly assign the total reflecting elements to the two IRSs to balance the multiplicative beamforming gain and the amplification noise power given the fixed total number of elements. To the authors' best knowledge, the design of efficient double-active-IRS deployment and its rate performance comparison with the single-active-IRS counterpart have not been studied yet.
To answer the above questions, we study in this letter a double-active-IRS aided wireless communication system as illustrated in Fig. 1, where two active IRSs are deployed to assist the communication between a multi-antenna base station (BS) and multiple single-antenna users. Note that unlike existing works that mostly considered the total active-IRS amplification power constraint (e.g., [4, 5, 6, 7]), we consider in this work the new _per-element_ amplification power constraint for each active-IRS reflecting element [11].1 Thereby, we formulate an optimization problem to maximize the achievable rate of the double-active-IRS system subject to practical constraints on the reflection and beamforming design, elements allocation, and placement of active IRSs. To solve this non-convex problem, we first obtain the optimal active-IRS reflections and BS beamforming, based on which we then jointly optimize the active-IRS elements allocation and placement by using the alternating optimization (AO) method. Moreover, we analytically characterize the system capacity scaling orders with respect to (w.r.t.) the number of reflecting elements and the per-element amplification power. Last, numerical results are provided to evaluate the proposed algorithm and compare the rate performance of the double-active-IRS aided wireless system with various benchmark systems.
Footnote 1: Note that although the active IRS with per-element amplification power control requires higher hardware cost than that with the total amplification power, it leads to lower computational complexity, more flexible control, and better fault tolerance.
## II System Model and Problem Formulation
### _System Model_
Fig. 1: The double-active-IRS aided wireless communication system.
Consider a double-active-IRS aided wireless communication system as shown in Fig. 1, where two active IRSs are properly deployed to assist the communication from a multi-antenna BS to \(L\) single-antenna users in a given zone. We consider a challenging scenario where the direct BS-user and single-reflection links are blocked; thus the BS can communicate with the user via a double-reflection link only, i.e., BS\(\rightarrow\)IRS 1\(\rightarrow\)IRS 2\(\rightarrow\)user. We consider the time-division multiple access (TDMA) scheme, where the BS serves different users in different time slots2. Without loss of generality, we consider a three-dimensional (3D) Cartesian coordinate system, where the BS, the user-zone center, and two IRSs are located at \(\mathbf{u}_{\rm B}=(0,0,0)\), \(\mathbf{u}_{\rm u}=(D,0,0)\), \(\mathbf{u}_{1}=(x_{0},0,H)\) and \(\mathbf{u}_{2}=(x_{0}+x_{1},0,H)\), respectively, where the horizontal locations of the two IRSs (or equivalently \(x_{0}\), \(x_{1}\)) need to be designed, with \(r_{\ell}\) denoting the distance between the center of the user zone and the \(\ell\)-th user and \(\theta_{\ell}\) denoting its azimuth angle, \(\ell\in\{1,\cdots,L\}\). Note that in this letter, we do not consider dynamic IRS deployment, since in practice, IRSs are usually at fixed locations once deployed.
Footnote 2: As the first study on double-active-IRS deployment, this work aims to provide useful insights into its deployment and system capacity scaling order. The active-IRS beamforming design for other multi-user cases, e.g., non-orthogonal multiple access (NOMA), will be studied in our future work.
#### II-A1 Active-IRS Model
Let \(M\) denote the deployment budget on the total number of reflecting elements, with \(M_{1}\) and \(M_{2}\) representing the numbers of reflecting elements at IRSs 1 and 2, respectively. For each active IRS \(k\in\{1,2\}\) in the \(\ell\)-th time slot, let \(\mathbf{\Psi}_{k,\ell}\triangleq\mathbf{A}_{k,\ell}\mathbf{\Phi}_{k,\ell}\) denote its reflection matrix, where \(\mathbf{A}_{k,\ell}\triangleq{\rm diag}(a_{k,\ell,1},\cdots,a_{k,\ell,M_{k}})\) and \(\mathbf{\Phi}_{k,\ell}\triangleq{\rm diag}(e^{\jmath\varphi_{k,\ell,1}},\cdots,e^{\jmath\varphi_{k,\ell,M_{k}}})\) denote the amplification matrix and the phase-shift matrix, respectively. Herein, \(a_{k,\ell,m_{k}}\) and \(e^{\jmath\varphi_{k,\ell,m_{k}}}\) represent the amplification factor and phase shift of the \(m_{k}\)-th element. We consider the fixed per-element amplification power for the two active IRSs, where a separate power supply of \(P_{\rm e}\) is connected to each reflecting element for signal amplification.
#### II-A2 Channel Model
For ease of analysis, we consider the line-of-sight (LoS) channel model for all available links, for which the channel state information (CSI) on involved links is assumed to be known3. Note that the design of active IRS elements allocation and placement mainly depends on the LoS channel path loss, which can be obtained based on the locations of the BS, active IRSs and user. Let \(\theta_{i,j}^{\rm a}(\vartheta_{i,j}^{\rm a})\in[0,\pi]\) denote the azimuth (elevation) angle-of-arrival (AoA) at node \(j\) from node \(i\), \(\theta_{i,j}^{\rm d}(\vartheta_{i,j}^{\rm d})\in[0,\pi]\) denote the azimuth (elevation) angle-of-departure (AoD) from node \(i\) to node \(j\), and \(\mathbf{a}_{\rm r}\) denote the receive steering vector, given by \(\mathbf{a}_{\rm r}\left(\theta_{i,j}^{\rm a},\vartheta_{i,j}^{\rm a},N_{j}\right)\triangleq\mathbf{w}\left(\frac{2d_{\rm I}}{\lambda}\cos\theta_{i,j}^{\rm a}\sin\vartheta_{i,j}^{\rm a},N_{j,\rm h}\right)\otimes\mathbf{w}\left(\frac{2d_{\rm I}}{\lambda}\cos\vartheta_{i,j}^{\rm a},N_{j,\rm v}\right)\) with \(\otimes\) denoting the Kronecker product, \(d_{\rm I}\) denoting the IRS reflecting element spacing and \(\lambda\) denoting the signal wavelength, \(N_{j,\rm h}\) (\(N_{j,\rm v}\)) denoting the number of horizontal (vertical) elements of node \(j\), and \(\mathbf{w}(\varsigma,N)\triangleq\left[1,\cdots,e^{-\jmath\pi(N-1)\varsigma}\right]^{T}\) denoting the steering vector function. The transmit steering vector, \(\mathbf{a}_{\rm t}\), can be modeled similarly to \(\mathbf{a}_{\rm r}\). As such, the inter-IRS channel from IRS 1 to IRS 2, denoted by \(\mathbf{G}\in\mathbb{C}^{M_{2}\times M_{1}}\), can be modeled as
Footnote 3: The results obtained in this letter can be extended to more general cases with non-LoS paths, e.g., the Rician fading channel. In practice, the effect of channel estimation error can be mitigated by designing robust IRS beamforming and deployment based on the distribution of channel estimation error.
\[\mathbf{G}\triangleq g\mathbf{a}_{\rm r}\left(\theta_{{\rm I}_{1},{\rm I}_{2}}^{\rm a},\vartheta_{{\rm I}_{1},{\rm I}_{2}}^{\rm a},M_{2}\right)\mathbf{a}_{\rm t}^{H}\left(\theta_{{\rm I}_{1},{\rm I}_{2}}^{\rm d},\vartheta_{{\rm I}_{1},{\rm I}_{2}}^{\rm d},M_{1}\right), \tag{1}\]
where \(g=\beta^{\frac{1}{2}}e^{-\jmath\frac{2\pi}{\lambda}d_{1}}/d_{1}^{\frac{\alpha}{2}}\) denotes the complex channel gain of the inter-IRS link with \(\beta\) denoting the channel power gain at a reference distance of 1 meter (m), \(d_{1}=x_{1}\) denoting the distance from IRS 1 to IRS 2, and \(\alpha\) denoting the path loss exponent. Let \(\mathbf{\omega}_{\ell}\in\mathbb{C}^{N\times 1}\) denote the normalized transmit beamforming vector of the BS for the \(\ell\)-th user with \(N\) denoting the number of BS antennas and \(\|\mathbf{\omega}_{\ell}\|^{2}\leq 1\). Then, the channel from the BS to IRS 1, denoted by \(\mathbf{H}_{1}\in\mathbb{C}^{M_{1}\times N}\), can be modeled as \(\mathbf{H}_{1}\triangleq h_{1}\mathbf{a}_{\rm r}\left(\theta_{{\rm BS},{\rm I}_{1}}^{\rm a},\vartheta_{{\rm BS},{\rm I}_{1}}^{\rm a},M_{1}\right)\mathbf{a}_{\rm t}^{H}\left(\theta_{{\rm BS},{\rm I}_{1}}^{\rm d},\vartheta_{{\rm BS},{\rm I}_{1}}^{\rm d},N\right)\), where its complex channel gain is \(h_{1}=\beta^{\frac{1}{2}}e^{-\jmath\frac{2\pi}{\lambda}d_{0}}/d_{0}^{\frac{\alpha}{2}}\) with \(d_{0}=\sqrt{x_{0}^{2}+H^{2}}\) denoting the BS-IRS 1 distance. The channel from IRS 2 to the \(\ell\)-th user, denoted by \(\mathbf{h}_{2,\ell}^{H}\in\mathbb{C}^{1\times M_{2}}\), can be modeled in a similar form, with the IRS 2-user \(\ell\) distance given by \(d_{2,\ell}=\sqrt{(x_{2}+r_{\ell}\cos\theta_{\ell})^{2}+H^{2}+(r_{\ell}\sin\theta_{\ell})^{2}}\), where \(x_{2}\triangleq D-x_{0}-x_{1}\) denotes the horizontal distance from IRS 2 to the center of the user zone. As such, the received signal at the user is obtained as
\[y_{\ell}=\mathbf{h}_{2,\ell}^{H}\mathbf{\Psi}_{2,\ell}\mathbf{G}\mathbf{\Psi}_{1,\ell}\mathbf{H}_{1}\mathbf{\omega}_{\ell}s+\mathbf{h}_{2,\ell}^{H}\mathbf{\Psi}_{2,\ell}\mathbf{G}\mathbf{\Psi}_{1,\ell}\mathbf{z}_{1}+\mathbf{h}_{2,\ell}^{H}\mathbf{\Psi}_{2,\ell}\mathbf{z}_{2}+z_{0}, \tag{2}\]
where \(s\in\mathbb{C}\) denotes the transmitted signal with power \(P_{\rm B}\), \(\mathbf{z}_{k}\in\mathbb{C}^{M_{k}\times 1}\) is the amplification noise induced by IRS \(k\), which is assumed to follow the independent circularly symmetric complex Gaussian (CSCG) distribution with zero mean and covariance \(\sigma_{1}^{2}\mathbf{I}_{M_{k}}\), i.e., \(\mathbf{z}_{k}\sim\mathcal{CN}\left(\mathbf{0}_{M_{k}},\sigma_{1}^{2}\mathbf{I}_{M_{k}}\right)\) with \(\sigma_{1}^{2}\) denoting the amplification noise power, and \(z_{0}\sim\mathcal{CN}\left(0,\sigma_{0}^{2}\right)\) is the additive white Gaussian noise at the user with power \(\sigma_{0}^{2}\). Note that the received signal at the user is the superposition of the desired signal over the double-reflection link (i.e., BS\(\rightarrow\)IRS 1\(\rightarrow\)IRS 2\(\rightarrow\)user), the amplification noise induced by IRS 1 over the IRS 1\(\rightarrow\)IRS 2\(\rightarrow\)user link, and that induced by IRS 2 over the IRS 2\(\rightarrow\)user link. As such, the corresponding received signal-to-noise ratio (SNR) at the \(\ell\)-th user, \(\gamma_{\ell}\), is given by
\[\gamma_{\ell}=\frac{P_{\rm B}\left|\mathbf{h}_{2,\ell}^{H}\mathbf{\Psi}_{2,\ell}\mathbf{G} \mathbf{\Psi}_{1,\ell}\mathbf{H}_{1}\mathbf{\omega}_{\ell}\right|^{2}}{\left\|\mathbf{h}_{2, \ell}^{H}\mathbf{\Psi}_{2,\ell}\right\|^{2}\sigma_{1}^{2}+\left\|\mathbf{h}_{2,\ell}^{H} \mathbf{\Psi}_{2,\ell}\mathbf{G}\mathbf{\Psi}_{1,\ell}\right\|^{2}\sigma_{1}^{2}+ \sigma_{0}^{2}}, \tag{3}\]
and the maximum achievable rate in bits/second/Hertz (bps/Hz) of the \(\ell\)-th user is \(R_{\ell}=\frac{1}{L}\log_{2}\left(1+\gamma_{\ell}\right)\), where the pre-log factor \(\frac{1}{L}\) accounts for the \(L\) users being served in orthogonal time slots. In the max-min rate problem (P1), each active element is constrained by its per-element amplification power budget, \(P_{\text{e}}\).
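To make the channel model concrete, the following minimal NumPy sketch evaluates the steering-vector function \(\mathbf{w}(\varsigma,N)\), the UPA steering vector built from it via the Kronecker product, and the rank-one LoS inter-IRS channel in (1). The function names, the half-wavelength element spacing, and the argument packing are our own illustrative choices, not from the letter.

```python
import numpy as np

def w(varsigma: float, n: int) -> np.ndarray:
    """Steering-vector function w(s, N) = [1, ..., e^{-j*pi*(N-1)*s}]^T."""
    return np.exp(-1j * np.pi * np.arange(n) * varsigma)

def upa_steering(theta, vartheta, n_h, n_v, d_i, lam):
    """UPA steering vector as the Kronecker product of the two 1D factors."""
    horiz = w(2 * d_i / lam * np.cos(theta) * np.sin(vartheta), n_h)
    vert = w(2 * d_i / lam * np.cos(vartheta), n_v)
    return np.kron(horiz, vert)

def inter_irs_channel(beta, alpha, lam, d1, aoa, aod, dims2, dims1):
    """Rank-one LoS inter-IRS channel G = g * a_r * a_t^H as in (1).
    aoa/aod: (azimuth, elevation) tuples; dims*: (n_h, n_v) tuples."""
    g = np.sqrt(beta) * np.exp(-1j * 2 * np.pi * d1 / lam) / d1 ** (alpha / 2)
    # Half-wavelength element spacing assumed for illustration.
    a_r = upa_steering(*aoa, *dims2, d_i=lam / 2, lam=lam)
    a_t = upa_steering(*aod, *dims1, d_i=lam / 2, lam=lam)
    return g * np.outer(a_r, a_t.conj())
```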
## III Proposed Solution to Problem (P1)
Problem (P1) is a non-convex optimization problem due to the non-concave objective function, the unit-modulus phase-shift constraints, and the integer elements allocation constraint, which make it difficult to solve optimally. To address this issue, we propose a two-layer AO algorithm that iteratively optimizes the joint BS and active-IRS beamforming as well as the active-IRS elements allocation and placement.
### _Joint BS and Active-IRS Beamforming Optimization_
First, given the feasible active-IRS elements allocation and placement, the optimization problem (P1) reduces to
\[\text{(P2)}\quad\max_{\mathbf{A},\mathbf{\Phi},\hat{\eta}}\quad\hat{\eta}\qquad\text{s.t.}\quad\hat{\eta}\leq\gamma_{\ell},\ \ell=1,\cdots,L, \tag{12}\]
and the amplification power and phase-shift constraints of (P1) on \(\mathbf{A}\) and \(\mathbf{\Phi}\).
To solve problem (P2), we first decompose the inter-IRS channel as follows:
\[\mathbf{G}=\underbrace{\sqrt{g}\,\mathbf{a}_{\rm r}\big(\theta_{1,1_{2}}^{\text{a}},\vartheta_{1,1_{2}}^{\text{a}},M_{2}\big)}_{\mathbf{g}_{1}}\underbrace{\sqrt{g}\,\mathbf{a}_{\rm t}^{H}\big(\theta_{1,1_{2}}^{\text{d}},\vartheta_{1,1_{2}}^{\text{d}},M_{1}\big)}_{\mathbf{g}_{2}^{H}}. \tag{13}\]
Then, we have the following result.
**Lemma 1**.: _The optimal solution to problem (P2) is_
\[\mathbf{\omega}_{\ell}=\mathbf{a}_{\rm t}\left(\theta_{\text{BS},1_{1}}^{\text{d}},\vartheta_{\text{BS},1_{1}}^{\text{d}},N\right)\!/\big\|\mathbf{a}_{\rm t}\left(\theta_{\text{BS},1_{1}}^{\text{d}},\vartheta_{\text{BS},1_{1}}^{\text{d}},N\right)\big\|\,,\forall\ell, \tag{14}\] \[\varphi_{1,\ell,m_{1}}=\arg\big(\left[\mathbf{g}_{2}\right]_{m_{1}}\big)-\arg\big(\left[\mathbf{h}_{1}\right]_{m_{1}}\big)\,,\forall\ell,m_{1},\] (15) \[\varphi_{2,\ell,m_{2}}=\arg\big(\left[\mathbf{h}_{2,\ell}\right]_{m_{2}}\big)-\arg\big(\left[\mathbf{g}_{1}\right]_{m_{2}}\big)\,,\forall\ell,m_{2},\] (16) \[a_{1,\ell,m_{1}}=a_{1}\triangleq\frac{P_{\text{e}}d_{0}^{2}}{P_{\text{B}}\beta+\sigma_{1}^{2}d_{0}^{2}},\forall\ell,m_{1},\] (17) \[a_{2,\ell,m_{2}}=a_{2}\triangleq\frac{P_{\text{e}}d_{0}^{2}d_{1}^{2}}{P_{\text{B}}\beta^{2}M_{1}^{2}a_{1}+\sigma_{1}^{2}\beta M_{1}a_{1}d_{0}^{2}+\sigma_{1}^{2}d_{0}^{2}d_{1}^{2}},\forall\ell,m_{2}. \tag{18}\]
Proof.: The optimal phase shifts are obtained by phase-aligning the double-reflection channel, and the optimal amplification factors are obtained by taking the power constraints (5) and (6) with equality, which can be shown to hold at the optimal solution to problem (P1).
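As a numerical sanity check of Lemma 1, the closed-form amplification factors (17) and (18) can be evaluated directly. The snippet below is a sketch in which (18) follows the reconstruction given above and the function name is ours.

```python
def amplification_factors(P_e, P_B, beta, sI2, d0, d1, M1):
    """Closed-form common amplification factors of Lemma 1 (linear scale).
    sI2 is the amplification noise power sigma_1^2."""
    a1 = P_e * d0**2 / (P_B * beta + sI2 * d0**2)                  # (17)
    a2 = (P_e * d0**2 * d1**2
          / (P_B * beta**2 * M1**2 * a1
             + sI2 * beta * M1 * a1 * d0**2
             + sI2 * d0**2 * d1**2))                               # (18), as reconstructed
    return a1, a2
```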
### _Active-IRS Elements Allocation and Placement Optimization_
Next, given the optimal active-IRS reflections and BS beamforming in (14)-(18), we jointly optimize the active-IRS elements allocation and placement to maximize the minimum achievable rate. Specifically, by substituting (14)-(18) into (3), the received SNR at the \(\ell\)-th user is obtained as \(\gamma_{\ell}=NP_{\text{B}}\beta^{3}/\xi_{\ell}\), where
\[\xi_{\ell} \triangleq\frac{\sigma_{1}^{4}\sigma_{0}^{2}d_{0}^{2}d_{1}^{2}d_{2,\ell}^{2}+P_{\text{B}}\sigma_{1}^{2}\sigma_{0}^{2}\beta d_{1}^{2}d_{2,\ell}^{2}}{P_{\text{e}}^{2}M_{1}^{2}M_{2}^{2}}+\frac{\sigma_{0}^{2}\sigma_{1}^{2}\beta d_{0}^{2}d_{2,\ell}^{2}}{P_{\text{e}}M_{1}M_{2}^{2}}\] \[+\frac{\sigma_{1}^{4}\beta d_{0}^{2}d_{1}^{2}+P_{\text{B}}\sigma_{1}^{2}\beta^{2}d_{1}^{2}}{P_{\text{e}}M_{1}^{2}M_{2}}+\frac{P_{\text{B}}\sigma_{0}^{2}\beta^{2}d_{2,\ell}^{2}}{P_{\text{e}}M_{2}^{2}}+\frac{\sigma_{1}^{2}\beta^{2}d_{0}^{2}}{M_{1}M_{2}}. \tag{19}\]
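The term \(\xi_{\ell}\) is a plain sum of products of system parameters and can be evaluated numerically, e.g., to cross-check the elements allocation trade-off. The sketch below (with illustrative names, and with the path-loss exponent specialized to \(\alpha=2\) as in (19), where the missing second term has been restored to match (21)) implements (19) and the resulting SNR \(\gamma_{\ell}=NP_{\rm B}\beta^{3}/\xi_{\ell}\).

```python
def xi_ell(P_e, P_B, beta, sI2, s02, d0, d1, d2, M1, M2):
    """Effective noise-over-gain term xi_l of (19).
    sI2 = sigma_1^2 (amplification noise), s02 = sigma_0^2 (receiver noise)."""
    t1 = (sI2**2 * s02 * d0**2 * d1**2 * d2**2
          + P_B * sI2 * s02 * beta * d1**2 * d2**2) / (P_e**2 * M1**2 * M2**2)
    t2 = s02 * sI2 * beta * d0**2 * d2**2 / (P_e * M1 * M2**2)   # B_{2,l} term
    t3 = (sI2**2 * beta * d0**2 * d1**2
          + P_B * sI2 * beta**2 * d1**2) / (P_e * M1**2 * M2)
    t4 = P_B * s02 * beta**2 * d2**2 / (P_e * M2**2)
    t5 = sI2 * beta**2 * d0**2 / (M1 * M2)
    return t1 + t2 + t3 + t4 + t5

def snr(N, P_B, beta, xi):
    """Received SNR gamma_l = N * P_B * beta^3 / xi_l."""
    return N * P_B * beta**3 / xi
```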
Based on the above, the solution to problem (P1) can be obtained by solving the following problem (P3).
\[\text{(P3)}\quad\min_{\mathbf{M},\mathbf{X},\tilde{\eta}}\quad\tilde{\eta}\qquad\text{s.t.}\quad\tilde{\eta}\geq\xi_{\ell},\ \ell=1,\cdots,L, \tag{20}\]
the integer elements allocation constraint (10), and the placement constraints of (P1).
Problem (P3) is a non-convex optimization problem, which we solve by using the AO method as follows.
#### Iii-B1 Active-IRS Elements Allocation
We first optimize the active-IRS elements allocation given fixed active-IRS placement. To tackle the integer constraint (10), we relax the discrete values, \(\mathbf{M}\), into their continuous counterparts, \(\mathbf{\tilde{M}}\triangleq\{\tilde{M}_{1},\tilde{M}_{2}\}\). As such, \(\xi_{\ell}\) in (19) can be relaxed as
\[f_{1,\ell}(\mathbf{\tilde{M}})\triangleq\frac{B_{1,\ell}}{\tilde{M}_{1}^{2}\tilde{M}_{2}^{2}}+\frac{B_{2,\ell}}{\tilde{M}_{1}\tilde{M}_{2}^{2}}+\frac{B_{3}}{\tilde{M}_{1}^{2}\tilde{M}_{2}}+\frac{B_{4,\ell}}{\tilde{M}_{2}^{2}}+\frac{B_{5}}{\tilde{M}_{1}\tilde{M}_{2}}, \tag{21}\]
where
\[B_{1,\ell}=\frac{\sigma_{1}^{4}\sigma_{0}^{2}d_{0}^{2}d_{1}^{2}d_{2,\ell}^{2}+P_{\text{B}}\sigma_{1}^{2}\sigma_{0}^{2}\beta d_{1}^{2}d_{2,\ell}^{2}}{P_{\text{e}}^{2}},\quad B_{2,\ell}=\frac{\sigma_{0}^{2}\sigma_{1}^{2}\beta d_{0}^{2}d_{2,\ell}^{2}}{P_{\text{e}}},\]
\[B_{3}=\frac{\sigma_{1}^{4}\beta d_{0}^{2}d_{1}^{2}+P_{\text{B}}\sigma_{1}^{2}\beta^{2}d_{1}^{2}}{P_{\text{e}}},\quad B_{4,\ell}=\frac{P_{\text{B}}\sigma_{0}^{2}\beta^{2}d_{2,\ell}^{2}}{P_{\text{e}}},\quad B_{5}=\sigma_{1}^{2}\beta^{2}d_{0}^{2}.\]
Next, by introducing the slack variables \(\tilde{m_{k}}=\log(M_{k}),k=1,2\), \(f_{1,\ell}(\mathbf{\tilde{M}})\) can be re-expressed as
\[\tilde{f}_{1,\ell}(\tilde{m}_{1},\tilde{m}_{2})\triangleq B_{1,\ell}e^{-2\tilde{m}_{1}-2\tilde{m}_{2}}+B_{2,\ell}e^{-\tilde{m}_{1}-2\tilde{m}_{2}}\] \[+B_{3}e^{-2\tilde{m}_{1}-\tilde{m}_{2}}+B_{4,\ell}e^{-2\tilde{m}_{2}}+B_{5}e^{-\tilde{m}_{1}-\tilde{m}_{2}}. \tag{22}\]
Then, the optimal solution to problem (P3) can be obtained by solving the following problem.
\[\text{(P3.1)}\quad\min_{\tilde{m}_{1},\tilde{m}_{2},\tilde{\eta}}\quad\tilde{\eta}\qquad\text{s.t.}\quad\tilde{\eta}\geq\tilde{f}_{1,\ell}(\tilde{m}_{1},\tilde{m}_{2}),\ \ell=1,\cdots,L, \tag{23}\] \[e^{\tilde{m}_{1}}+e^{\tilde{m}_{2}}\leq M. \tag{24}\]
It can be proved by contradiction that in the optimal solution to problem (P3), the equality in constraint (24) always holds. As such, problem (P3.1) is a convex optimization problem, which can be efficiently solved by using the interior-point method. The integer number of reflecting elements can be reconstructed by rounding the continuous solutions to problem (P3.1).
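For illustration, (P3.1) can be set up almost verbatim in a disciplined convex programming tool such as CVXPY. The sketch below assumes the per-user constants \(B_{1,\ell},\dots,B_{5}\) have been precomputed as above; the data layout and function name are our own.

```python
import math
import cvxpy as cp

def solve_p31(B, M):
    """B: list of per-user constant dicts with keys 'B1'..'B5'; M: elements budget."""
    m1, m2, eta = cp.Variable(), cp.Variable(), cp.Variable()
    cons = [cp.exp(m1) + cp.exp(m2) <= M]  # relaxed elements-budget constraint (24)
    for b in B:  # epigraph constraints (23), convex sums of exponentials
        cons.append(eta >= b['B1'] * cp.exp(-2 * m1 - 2 * m2)
                          + b['B2'] * cp.exp(-m1 - 2 * m2)
                          + b['B3'] * cp.exp(-2 * m1 - m2)
                          + b['B4'] * cp.exp(-2 * m2)
                          + b['B5'] * cp.exp(-m1 - m2))
    cp.Problem(cp.Minimize(eta), cons).solve()
    # Reconstruct the integer allocation by rounding, as described above.
    return round(math.exp(m1.value)), round(math.exp(m2.value))
```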
#### Iii-B2 Active-IRS Placement Optimization
Next, we optimize the active-IRS placement given fixed active-IRS elements allocation. To this end, we first rewrite \(\xi_{\ell}\) in (19) as a function of active-IRS locations, given by
\[\xi_{\ell}=f_{2,\ell}(\mathbf{D})\triangleq C_{1}d_{0}^{2}d_{1}^{2}d_{2,\ell}^{2}+C_{2}d_{0}^{2}d_{1}^{2}+C_{3}d_{0}^{2}d_{2,\ell}^{2}\\+C_{4}d_{1}^{2}d_{2,\ell}^{2}+C_{5}d_{0}^{2}+C_{6}d_{1}^{2}+C_{7}d_{2,\ell}^{2},\]
where \(\mathbf{D}\triangleq\{d_{0},d_{1},\{d_{2,\ell}\}_{\ell=1}^{L}\}\) and the constants \(\{C_{i}\}_{i=1}^{7}\) are determined by \(M_{1}\), \(M_{2}\), and the remaining system parameters. In the resulting placement subproblem (P3.2), which minimizes \(\tilde{\eta}\) subject to \(\tilde{\eta}\geq f_{2,\ell}(\mathbf{D})\) and the constraints coupling the link distances \(\mathbf{D}\) with the IRS locations \(\mathbf{X}\), the equalities in constraints (27) and (28) hold at the optimal solution. Note that problem (P3.2) is non-convex due to its non-convex objective function and constraints. To tackle this difficulty, we introduce the slack variables \(y_{k}=\log(d_{k}),k=0,1\), and \(y_{2,\ell}=\log(d_{2,\ell})\), and thus rewrite \(f_{2,\ell}(\mathbf{D})\) as
\[\tilde{f}_{2,\ell}(\mathbf{Y}) \triangleq C_{1}e^{2y_{0}+2y_{1}+2y_{2,\ell}}+C_{2}e^{2y_{0}+2y_{1}}+C_{ 3}e^{2y_{0}+2y_{2,\ell}}\] \[+C_{4}e^{2y_{1}+2y_{2,\ell}}+C_{5}e^{2y_{0}}+C_{6}e^{2y_{1}}+C_{7 }e^{2y_{2,\ell}}, \tag{29}\]
where \(\mathbf{Y}\triangleq\{y_{0},y_{1},\{y_{2,\ell}\}_{\ell=1}^{L}\}\). Problem (P3.2) can then be approximately reformulated as the following convex optimization problem.
\[\text{(P3.3)}\quad\min_{\mathbf{X},\mathbf{Y},\tilde{\eta}}\quad\tilde{\eta}\qquad\text{s.t.}\quad\tilde{\eta}\geq\tilde{f}_{2,\ell}(\mathbf{Y}),\ \ell=1,\cdots,L,\]
together with the constraints of (P3.2) rewritten in terms of \(\mathbf{Y}\). Problem (P3.3) is convex and can thus be efficiently solved by the interior-point method. The overall two-layer AO algorithm alternately updates the joint BS and active-IRS beamforming via Lemma 1, the elements allocation via (P3.1), and the placement via (P3.3) until convergence.
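The overall two-layer structure can be sketched as follows, where coarse grid searches stand in for the convex subproblem solvers (P3.1) and (P3.3), a single user on the BS–user axis is assumed, and `xi_ell` refers to the helper defined earlier. This is an illustration of the alternation, not the letter's implementation.

```python
import numpy as np

def ao_allocation_placement(p, iters=10):
    """p: dict with keys P_e, P_B, beta, sI2, s02, M, D (BS-user distance in m)."""
    def xi(d0, d1, M1):
        d2 = max(p['D'] - d0 - d1, 1.0)  # single user placed on the BS-user axis
        return xi_ell(p['P_e'], p['P_B'], p['beta'], p['sI2'], p['s02'],
                      d0, d1, d2, M1, p['M'] - M1)

    d0 = d1 = p['D'] / 3.0
    M1 = p['M'] // 2
    for _ in range(iters):
        # Elements allocation step (grid-search stand-in for solving (P3.1)).
        M1 = min(range(1, p['M']), key=lambda m: xi(d0, d1, m))
        # Placement step (grid-search stand-in for solving (P3.3)).
        grid = np.linspace(1.0, p['D'] - 2.0, 60)
        d0, d1 = min(((a, b) for a in grid for b in grid if a + b < p['D'] - 1.0),
                     key=lambda ab: xi(ab[0], ab[1], M1))
    return M1, p['M'] - M1, d0, d1
```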
## V Numerical Results
Numerical results are presented in this section. The horizontal distance from the BS to the user is set as \(D=200\) m and the IRSs are deployed at an altitude of \(H=5\) m. We assume that \(L=4\) users are randomly distributed within a radius of 30 m from the user-zone center. Unless otherwise specified, we set the per-element amplification power to \(P_{\mathrm{e}}=1\) mW and the total elements budget to \(M=128\). Other parameters are set as \(N=4\), \(\lambda=0.4\) m, \(\beta=(\lambda/4\pi)^{2}=-30\) dB, \(\alpha=2\), \(P_{\mathrm{B}}=1\) W, and \(\sigma_{0}=\sigma_{\mathrm{I}}=-80\) dBm.
We first compare the optimized active-IRS elements allocation and placement obtained by the proposed AO algorithm and by exhaustive search (ES). Fig. 2 shows the optimized elements allocation for the two IRSs. One can observe that the proposed AO algorithm yields an elements allocation close to the optimal one obtained by the ES method. Besides, in Fig. 2(a), we observe that the optimized number of elements for both IRSs increases with the total number of reflecting elements, while more elements are allocated to IRS 2 (closer to the user) to reduce the amplification noise induced by IRS 1. Moreover, as observed in Fig. 2(b), with increasing per-element amplification power, the number of reflecting elements allocated to IRS 1 increases to optimally balance the trade-off between signal and noise power amplification. In Fig. 3, we compare the optimized active-IRS locations. We observe that as the total elements budget or per-element amplification power increases, IRS 1 should be placed even closer to the BS to minimize the amplification noise power with smaller amplification factors, while IRS 2 should be placed above the user to minimize the cascaded path loss.
Next, Fig. 4 compares the achievable rates of the wireless communication systems aided by double active IRSs, a single active IRS, double passive IRSs, and a single passive IRS. Fig. 4(a) plots the rate performance versus the total number of reflecting elements. First, it is observed that the double-active-IRS aided system achieves a higher capacity scaling order than the system aided by a single active IRS, owing to the higher multiplicative beamforming gain of the double-reflection link. Second, the double-active-IRS aided system achieves a higher rate than its double-passive-IRS counterpart when the number of reflecting elements is not excessively large, thanks to the additional power amplification gain. In Fig. 4(b), the rate performance is compared versus the per-element amplification power. It is observed that the active-IRS cases outperform the passive-IRS cases thanks to the power amplification gain. Moreover, the achievable rates of the active-IRS aided systems increase only slowly with the per-element amplification power, despite the unfavorable asymptotic result in Proposition 2. This is because both the power of the desired signal and that of the amplification noise increase linearly with the per-element amplification power.
## VI Conclusions
In this letter, we proposed an AO-based algorithm to optimize the BS beamforming, active-IRS reflections, elements allocation, and placement for maximizing the achievable rate of the double-active-IRS aided wireless communication system. Besides, we characterized the capacity scaling orders of the double-active-IRS aided wireless system w.r.t. the number of reflecting elements and the per-element amplification power. It was shown that given the fixed per-element amplification power, the received SNR increases asymptotically with the square of the number of reflecting elements; while given the fixed number of reflecting elements, it does not increase with the per-element amplification power when it is asymptotically large. Numerical results were presented to evaluate the proposed algorithm and compare the rate performance of the double-active-IRS aided wireless system with various benchmark systems.
|
2310.07522 | S4C: Self-Supervised Semantic Scene Completion with Neural Fields | 3D semantic scene understanding is a fundamental challenge in computer
vision. It enables mobile agents to autonomously plan and navigate arbitrary
environments. Semantic scene completion (SSC) formalizes this challenge as jointly estimating dense
geometry and semantic information from sparse observations of a scene. Current
methods for SSC are generally trained on 3D ground truth based on aggregated
LiDAR scans. This process relies on special sensors and annotation by hand
which are costly and do not scale well. To overcome this issue, our work
presents the first self-supervised approach to SSC called S4C that does not
rely on 3D ground truth data. Our proposed method can reconstruct a scene from
a single image and only relies on videos and pseudo segmentation ground truth
generated by an off-the-shelf image segmentation network during training. Unlike
existing methods, which use discrete voxel grids, we represent scenes as
implicit semantic fields. This formulation allows querying any point within the
camera frustum for occupancy and semantic class. Our architecture is trained
through rendering-based self-supervised losses. Nonetheless, our method
achieves performance close to fully supervised state-of-the-art methods.
Additionally, our method demonstrates strong generalization capabilities and
can synthesize accurate segmentation maps for far away viewpoints. | Adrian Hayler, Felix Wimbauer, Dominik Muhle, Christian Rupprecht, Daniel Cremers | 2023-10-11T14:19:05Z | http://arxiv.org/abs/2310.07522v2 | # S4C: Self-Supervised Semantic Scene Completion with Neural Fields
###### Abstract
3D semantic scene understanding is a fundamental challenge in computer vision. It enables mobile agents to autonomously plan and navigate arbitrary environments. Semantic scene completion (SSC) formalizes this challenge as jointly estimating dense geometry and semantic information from sparse observations of a scene. Current methods for SSC are generally trained on 3D ground truth based on aggregated LiDAR scans. This process relies on special sensors and annotation by hand, which are costly and do not scale well. To overcome this issue, our work presents the first self-supervised approach to SSC called **S4C** that does not rely on 3D ground truth data. Our proposed method can reconstruct a scene from a single image and only relies on videos and pseudo segmentation ground truth generated by an off-the-shelf image segmentation network during training. Unlike existing methods, which use discrete voxel grids, we represent scenes as implicit semantic fields. This formulation allows querying any point within the camera frustum for occupancy and semantic class. Our architecture is trained through rendering-based self-supervised losses. Nonetheless, our method achieves performance close to fully supervised state-of-the-art methods. Additionally, our method demonstrates strong generalization capabilities and can synthesize accurate segmentation maps for far-away viewpoints.
## 1 Introduction
A plethora of tasks require holistic 3D scene understanding. Obtaining an accurate and complete representation of the scene, both with regard to geometry and high-level semantic information, enables planning, navigation, and interaction. Obtaining this information is a field of active computer vision research that has become popular with the semantic scene completion (SSC) task [64]. SSC jointly infers the scene geometry and semantics in 3D space from limited observations [39].
Current approaches to SSC either operate on Lidar scans [60, 64] or image data [8, 39, 42, 51, 63] as input. Generally, these methods predict discrete voxel grids that contain occupancy and semantic class information. They are trained on 3D ground truth aggregated from numerous annotated Lidar scans. LiDAR-based methods perform better on the task of SSC but depend on costly sensors compared to cameras [38]. Cameras are readily available and offer a dense representation
of the world. However, bridging the gap between 2D camera recordings and 3D voxel grids is not straightforward. MonoScene [8] uses line-of-sight projection to lift 2D features into 3D space. However, this disregards information for occluded and empty scene regions [39]. VoxFormer [39] uses a transformer network to simultaneously predict geometry and semantic labels starting from a few query proposals on a voxel grid.
Recently, neural fields have emerged as a versatile representation for 3D scenes. Here, an MLP learns a mapping from encoded coordinates to some output. While initially focused on geometry and appearance, they have since progressed to also incorporate semantic information [18, 31, 62, 70, 86]. One of the major drawbacks of neural fields is that they rely on test time optimization. The network weights are trained by reconstructing different input views of the scene via volume rendering. To enable generalization on scene geometry and appearance, some methods [75, 80] proposed to condition neural fields on pixel-aligned feature maps predicted by trainable image encoders.
In this work, we introduce the first self-supervised approach to semantic scene completion (SSC). Instead of predicting a voxel grid, our method infers a 3D semantic field from a single image. This field holds density and semantic information and allows for volume rendering of segmentation maps and color images (via image-based rendering). By applying reconstruction losses on the rendered output in 2D, we learn the geometry and semantics of the 3D scene. We train our approach on multiple views from the videos captured by a multi-cam setup mounted on a moving vehicle. To make our method as general as possible, we rely on segmentation maps generated by an off-the-shelf image segmentation network rather than hand-annotated ground truth. In order to learn geometry and semantics in the entire camera frustum, we sample frames from the videos at random time offsets to the input image. As the vehicle moves forward, the different cameras capture many areas of the scene, especially those that are occluded in the input image. Our formulation does not require any form of 3D supervision besides camera poses.
We use the KITTI-360 [43] dataset for training and measure the performance of our proposed approach on the new SSCBench [38] dataset, which is defined on top of KITTI-360. Both qualitative and quantitative results show that our method achieves comparable results to fully-supervised approaches for SSC. Further, we demonstrate the beneficial effects of our different loss components and the random frame sampling strategy. Finally, we also test the unique ability of our method to synthesize high-quality segmentation maps from novel views.
Our **contributions** can be summarized as follows:
* We propose the **first SSC training using self-supervision from images without the need for 3D ground truth data**.
* We achieve close-to state-of-the-art performance compared to fully supervised SSC methods.
* Our method can synthesize high-quality segmentation maps from novel views.
* We release our code upon acceptance to further facilitate research into SSC.
## 2 Related Work
In the following, we will introduce relevant literature to our proposed method. For semantic scene completion (SSC), we will focus the discussion of related work on camera-based approaches and introduce LiDAR-based methods only briefly. We refer the interested reader to [61] for a broader overview of the topic of SSC.
### Single Image Scene Reconstruction
Scene reconstruction refers to the task of estimating 3D geometry from images. While it has been a topic of active research for two decades [23], the introduction of NeRFs [52] has led to renewed interest in this area. An overview can be found in [22]. In the following, we restrict our review to single-view methods and their distinction from monocular depth estimation methods. Monocular depth estimation [19, 20, 46, 65, 87] reconstructs a 3D environment by predicting a per-pixel depth value. Ground-truth supervision [1, 15, 17, 33, 40, 41, 44, 74], reconstruction losses [19, 20, 82, 88], or a combination thereof [32, 79] have been used to train these networks. In contrast to depth estimation and its restriction to visible surfaces, scene reconstruction aims to also predict geometry in occluded regions. PixelNeRF [80], a NeRF variant with the ability to generalize to different scenes, can predict free space in occluded scene regions only from a single image. As an extension, SceneRF [9] uses a probabilistic sampling and a photometric reprojection loss to additionally supervise depth prediction to improve generalization. BTS [75] combines ideas from generalizable NeRF and self-supervised depth estimation and achieves very accurate geometry estimation, even for occluded regions. In contrast to our approach, the above-mentioned methods do not consider semantics and are therefore not suited for SSC. Another line of work for scene reconstruction leverages massive data to learn object shape priors [13, 16, 68, 77].
### 3D Semantic Segmentation
Given a 3D model, such as a mesh, semantic multi-view fusion models [3, 26, 30, 47, 50, 83] project segmentation masks from images into the 3D geometry. Implicit representations such as NeRFs have become popular for semantic 3D modeling [18, 31, 69, 70, 86] to ensure multi-view consistency of segmentation masks. [62] extends this idea
to the panoptic segmentation task. The works of Concept-Fusion [28] and OpenScene [55] propose open vocabulary scene understanding by fusing open vocabulary representations into 3D scene representations allowing for segmentation as a downstream task.
In contrast to image segmentation, Lidar segmentation works with 3D data to assign semantic labels to point cloud data. Unlike images, point clouds are a sparse and unordered data collection. To address this different data modality, Lidar segmentation methods are either point-based [25, 57, 58, 72, 81], voxel-based [48, 56, 71], or projection-based [2, 5, 53, 76].
### Semantic Scene Completion
Semantic scene completion (SSC) extends the task of completing the scene geometry by jointly predicting scene semantics. It was first introduced in [64] and has gained significant attention in recent years [61]. This contrasts the separate treatment of the tasks in early works [21, 67]. The first approaches on SSC either focused on indoor settings from image data [7, 10, 34, 35, 36, 84, 85] or outdoor scenes with LiDAR-based methods [12, 37, 59, 60, 78]. MonoScene [8] was the first to present a unified camera approach to indoor and outdoor scenarios using a line-of-sight projection and a novel frustum proportion loss. VoxFormer [39] uses deformable cross-attention and self-attention on voxels from image features. OccDepth [51] utilizes stereo images and stereo depth supervision. Another line of work uses birds-eye-view and temporal information for 3D occupancy prediction [42, 63]. This idea was extended to a Tri-Perspective View in [27]. SSCNet first tackled the problem of SSC from an image and a depth map and used a 3D convolutional network to output occupancy and labels in a voxel grid [64]. LSMCNet combines 2D convolutions with multiple 3D segmentation heads at multiple resolutions to reduce network parameters [60]. Overall, Lidar methods tend to outperform camera approaches on outdoor scenes. All the above-mentioned methods require annotated 3D ground truth for training, which is costly to collect. The need for accurate 3D ground truth data restricts the evaluation of SSC methods to only a few datasets, such as SemanticKITTI [4]. SSCBench [38] is a recently introduced benchmark that includes annotated ground truth for SSC on KITTI-360 [43], nuScenes [6], and Waymo [66].
In contrast, we present **S4C**, a fully self-supervised training strategy that allows our model to be trained from posed images only, lifting the restriction on the expensive datasets with Lidar data.
## 3 Method
In the following, we describe our approach to predict the geometry and semantics of a scene from a single image \(\mathbf{I_{I}}\) to tackle the task of Semantic Scene Completion, as shown in Fig. 2. We first cover how we represent a scene as a continuous semantic field, and then propose a fully self-supervised training scheme that learns 3D geometry and semantics from 2D supervision only.
### Notation
Let \(\mathbf{I_{I}}\in[0,1]^{3\times H\times W}=(\mathbb{R}^{3})^{\Omega}\) be the input image, which is defined on a lattice \(\Omega=\{1,\dots,H\}\times\{1,\dots,W\}\). During training, we have access to \(N=\{\mathbf{I_{1}},...,\mathbf{I_{n}}\}\) additional views of the scene beside the input image. Through an off-the-shelf semantic segmentation network \(\Phi(\mathbf{I})\), we obtain segmentation maps \(\mathbf{L}_{i}\in\{0,\dots,c-1\}^{\Omega}\) for all images \(\mathbf{I}\in\{\mathbf{I_{I}}\}\cup N\). \(c\) denotes the number of different classes. Camera poses and projection matrices of the images are given as \(T_{i}\in\mathbb{R}^{4\times 4}\) and \(K_{i}\in\mathbb{R}^{3\times 4}\), respectively. Points in world coordinates are denoted as \(\mathbf{x}\in\mathbb{R}^{3}\). Projection into the image plane of camera \(k\) is given by \(\pi_{k}(\mathbf{x})=K_{k}T_{k}\mathbf{x}\) in homogeneous coordinates.
### Predicting a Semantic Field
Current approaches to SSC typically involve the prediction and manipulation of discrete voxel grids, which come with various limitations. Firstly, these grids have restricted resolution due to their cubic memory requirements and processing constraints. Second, when reconstructing a scene from an image, voxels are not aligned with the pixel space. Consequently, methods have to rely on complex multi-stage approaches to lift information from 2D to 3D [8, 39].
As an alternative, we propose a simple architecture that predicts an _implicit_ and _continuous_ **semantic field** [86] to overcome these shortcomings. A semantic field maps every point \(\mathbf{x}\) within the camera frustum to both a volumetric density \(\sigma\in[0,1]\) and a semantic class prediction \(l\in\{0,\dots,c-1\}\). Inspired by [75, 80], we use a high-capacity encoder-decoder network to predict a dense pixel-aligned feature map \(\mathbf{F}\in(\mathbb{R}^{C})^{\Omega}\) (with \(C\) feature channels) from the input image \(\mathbf{I_{I}}\). The feature \(f_{\mathbf{u}}\) at pixel location \(\mathbf{u}\) describes the semantic and geometric structure of the scene captured along a ray through that pixel. We follow [75] and do not store color in the neural field. This improves generalization capabilities and robustness.
To query the semantic field at a specific 3D point \(\mathbf{x}\in\mathbb{R}^{3}\) within the camera frustum, we project \(\mathbf{x}\) onto the image plane of the input view to obtain the corresponding pixel location \(\mathbf{u_{I}}=\pi_{\mathbf{I}}(\mathbf{x})\). The corresponding feature vector is extracted from the feature map with bilinear interpolation \(f_{\mathbf{u_{I}}}=\mathbf{F}(\mathbf{u_{I}})\). Together with positional encodings \(\gamma(d)\) for the distance \(d\)[52] and the pixel position \(\gamma(\mathbf{u_{I}})\), the feature vector is then decoded to the density
\[\sigma_{\mathbf{x}}=\phi_{\text{D}}(f_{\mathbf{u_{I}}},\gamma(d),\gamma( \mathbf{u_{I}})) \tag{1}\]
and semantic prediction
\[l_{\mathbf{x}}=\phi_{\text{S}}(f_{\mathbf{u_{I}}},\gamma(d),\gamma(\mathbf{u_{I} }))\,. \tag{2}\]
Both \(\phi_{\text{D}}\) and \(\phi_{\text{S}}\) are small multi-layer perceptron (MLP) networks. Note that \(\phi_{\text{S}}\) predicts semantic logits. To obtain a class distribution or label, we apply \(\operatorname*{Softmax}_{c}\left(l_{\textbf{x}}\right)\) or \(\operatorname*{arg\,max}_{c}\left(l_{\textbf{x}}\right)\), respectively.
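A minimal PyTorch sketch of this query pipeline is given below: bilinear lookup of the pixel-aligned feature, NeRF-style positional encodings, and the two small decoding heads. Layer sizes, the number of encoding frequencies, and the sigmoid used to map \(\phi_{\text{D}}\)'s output into \([0,1]\) are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def pos_enc(x: torch.Tensor, n_freq: int = 4) -> torch.Tensor:
    """NeRF-style positional encoding gamma(x); x: (..., D)."""
    freqs = (2.0 ** torch.arange(n_freq, device=x.device)) * math.pi
    ang = x.unsqueeze(-1) * freqs                     # (..., D, n_freq)
    return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)

class FieldHead(nn.Module):
    """Small two-layer MLP used for both phi_D and phi_S (cf. Sec. 4.1)."""
    def __init__(self, in_dim: int, out_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x)

def query_field(feat_map, uv, dist, phi_d, phi_s):
    """feat_map: (1,C,H,W); uv: (1,1,P,2) in [-1,1]; dist: (P,1) ray distances."""
    f = F.grid_sample(feat_map, uv, align_corners=False)  # bilinear lookup
    f = f[0, :, 0].T                                       # (P, C)
    enc = torch.cat([f, pos_enc(dist), pos_enc(uv[0, 0])], dim=-1)
    sigma = torch.sigmoid(phi_d(enc))   # density in [0,1]; sigmoid is our choice
    logits = phi_s(enc)                 # per-class semantic logits
    return sigma, logits
```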
### Volumetric Semantic Rendering
The goal of this paper is to develop a method to perform 3D SSC from a single image while being trained from 2D supervision alone. The continuous nature of our scene representation allows us to use volume rendering [29, 49] to synthesize high-quality novel views. As shown in [86], volumetric rendering can be extended from color to semantic class labels. The differentiable rendering process allows us to back-propagate a training signal from both color and semantic supervision on rendered views to our scene representation.
To render segmentation masks from novel viewpoints, we cast rays from the camera for every pixel. Along each ray, we integrate the semantic class labels over the probability of the ray ending at a certain distance. To approximate this integral, density \(\sigma_{\textbf{x}_{i}}\) and label \(l_{\textbf{x}_{i}}\) are evaluated at \(m\) discrete steps \(\textbf{x}_{i}\) along the ray.
Since we consider segmentation in 3D space, we apply \(\operatorname*{Softmax}\) normalization at every query point individually, _before_ we integrate along the ray. The intuition behind this is that regions with low density should not be able to "overpower" high-density regions by predicting very high scores for classes. Thus, this technique makes rendering semantics from the neural field more robust [62].
\[\alpha_{i}=1-\exp(-\sigma_{\textbf{x}_{i}}\delta_{i})\qquad T_{i}=\prod_{j=1} ^{i-1}(1-\alpha_{j}) \tag{3}\]
\[\hat{l}=\sum_{i=1}^{m}T_{i}\alpha_{i}\cdot\operatorname*{Softmax}_{c}\left(l_ {\textbf{x}_{i}}\right) \tag{4}\]
Here, \(\delta_{i}\) denotes the distance between the points \(\textbf{x}_{i}\) and \(\textbf{x}_{i+1}\), \(\alpha_{i}\) the probability of the ray ending between the points \(\textbf{x}_{i}\) and \(\textbf{x}_{i+1}\), and \(T_{i}\) the probability of \(\textbf{x}_{i}\) being not occluded and therefore visible in the image. \(\hat{l}\) is the final, normalized distribution predicted for a pixel. We also compute a per-pixel depth \(\hat{d}\) as the expected ray termination depth, where \(d_{i}\) denotes the distance between \(\textbf{x}_{i}\) and the ray origin.
\[\hat{d}=\sum_{i=1}^{m}T_{i}\alpha_{i}d_{i} \tag{5}\]
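Equations (3)-(5) translate directly into a few tensor operations. The sketch below (tensor shapes are our own convention) renders per-ray class distributions and expected depth, applying the per-point Softmax before integration as described above.

```python
import torch

def render_semantics_and_depth(sigma, logits, d, delta):
    """sigma, d, delta: (R, m); logits: (R, m, c) for R rays and m samples."""
    alpha = 1.0 - torch.exp(-sigma * delta)                        # Eq. (3)
    # T_i = prod_{j<i} (1 - alpha_j): shifted cumulative product along the ray.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1]], dim=1),
        dim=1)
    weights = trans * alpha                                        # (R, m)
    # Per-point Softmax over classes *before* integrating (Eq. 4).
    sem = (weights.unsqueeze(-1) * logits.softmax(dim=-1)).sum(dim=1)  # (R, c)
    depth = (weights * d).sum(dim=1)                               # Eq. (5)
    return sem, depth
```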
From the semantic field, we can also synthesize novel appearance views by applying image-based rendering [75]. Image-based rendering does not predict color information from a neural field but rather queries the color from other images. We sample the color by projecting point \(\textbf{x}_{i}\) into frame \(k\) to the pixel position \(\textbf{u}_{i,k}=\pi_{k}(\textbf{x}_{i})\) and then bilinearly interpolating the color value \(c_{i,k}=\textbf{I}_{k}(\textbf{u}_{i,k})\) at the projected pixel position. The color sample can come from any other image \(k\) except the one we want to render. With volumetric rendering, we obtain the pixel color with queries from image \(k\) as
\[\hat{c}_{k}=\sum_{i=1}^{m}T_{i}\alpha_{i}c_{i,k}\,, \tag{6}\]
where we obtain a different color prediction from every frame \(k\).

Figure 2: **Overview.** **a)** From an input image \(\textbf{I}_{\textbf{i}}\), an encoder-decoder network predicts a pixel-aligned feature map \(\textbf{F}\) describing a semantic field in the frustum of the image. The feature \(f_{\textbf{u}_{i}}\) of pixel \(\textbf{u}_{i}\) encodes the semantic and occupancy distribution on the ray cast from the optical center through the pixel. **b)** The semantic field allows rendering novel views and their corresponding semantic segmentation via volumetric rendering. A 3D point \(\textbf{x}_{i}\) is projected into the input image and therefore **F** to sample \(f_{\textbf{u}_{i}}\). Combined with positional encoding of \(\textbf{x}_{i}\), two MLPs decode the density of the point \(\sigma_{i}\) and semantic label \(l_{i}\), respectively. The color \(c_{i}\) for novel view synthesis is obtained from other images via color sampling. **c)** To achieve best results, we require training views to cover as much surface of the scene as possible. Therefore, we sample side views from random future timesteps that observe areas of the scene that are occluded in the input frame.
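In code, image-based color sampling amounts to projecting the ray points into frame \(k\) and bilinearly sampling the image, e.g. with grid_sample. The following sketch assumes a pinhole projection with \(K_{k}\in\mathbb{R}^{3\times 4}\) and \(T_{k}\in\mathbb{R}^{4\times 4}\) as in Sec. 3.1; the normalization details are our own.

```python
import torch
import torch.nn.functional as F

def sample_colors(points, image_k, K_k, T_k, H, W):
    """points: (P,3) world coords; image_k: (1,3,H,W); returns (P,3) colors."""
    pts_h = torch.cat([points, torch.ones_like(points[:, :1])], dim=-1)  # (P,4)
    cam = K_k @ (T_k @ pts_h.T)                                          # (3,P)
    uv = cam[:2] / cam[2:].clamp(min=1e-6)                               # pixel coords
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([2 * uv[0] / (W - 1) - 1,
                        2 * uv[1] / (H - 1) - 1], dim=-1)[None, None]    # (1,1,P,2)
    c = F.grid_sample(image_k, grid, align_corners=True)                 # (1,3,1,P)
    return c[0, :, 0].T
```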
### Training for Semantic Multi-View Consistency
All existing methods for SSC rely on 3D ground truth data for training. Such data is generally obtained from annotated LiDAR scans, which are very difficult and costly to acquire. In contrast to 3D data, images with semantic labels are abundantly available. We propose to leverage this available 2D data to train a neural network for 3D semantic scene completion. To make our approach as general as possible, we use pseudo-semantic labels generated from a pre-trained semantic segmentation network. This allows us to train our architecture only from posed images without the need for either 2D or 3D ground truth data in a fully self-supervised manner.
In this paper, we consider SSC in an autonomous driving setting. Generally, there are forward and sideways-facing cameras mounted on a car that is moving. We train our method on multiple posed images. In addition to the source image \(\mathbf{I_{I}}\) from which a network produces the feature map \(\mathbf{F}\), frames \(\mathbf{I}_{i}\) are aggregated from the main camera, stereo, and side-view cameras over multiple timesteps of a video sequence.
We randomly sample a subset of the available frames and use them as reconstruction targets for novel view synthesis. Our pipeline reconstructs both colors and semantic labels, based on our semantic field and color samples from other frames. The discrepancy between these reconstructions and the images and pseudo-ground-truth semantic masks is used as the training signal.
As supervision from a view only gives training signals in areas that are observed by this camera, it is important to select training views strategically. Especially sideways-facing camera views give important cues for regions that are occluded in the input image. In order to best cover the entire camera frustum we aim to reconstruct, we, therefore, select sideways-facing views with a random offset to the input image for training. This increases the diversity and improves prediction quality especially for further away regions, which are often difficult to learn with image-based reconstruction methods.
In practice, we only reconstruct randomly sampled patches \(P_{i}\), whose colors are reconstructed from different render frames \(k\) as \(\hat{P}_{i,k}\). For semantic supervision, we reconstruct the semantic labels \(S_{i}\) as \(\hat{S}_{i}\). To train for SSC, _i.e_. scene reconstruction and semantic labelling, we use a combination of semantic and photometric reconstruction loss. While the photometric loss gives strong training signals for the general scene geometry, the semantic loss is important for differentiating objects and learning rough geometry. Furthermore, it guides the model to learn sharper edges around objects.
We use a weighted binary cross-entropy loss to apply supervision on the semantic field from our pseudo-ground-truth segmentation labels \(S_{i}\).
\[\mathcal{L}_{\text{sem}}=\text{BCE}\left(S_{i},\hat{S}_{i}\right) \tag{7}\]
The photometric loss employs a combination of L1, SSIM [73], and an edge-aware smoothness (eas) term, as proposed in [20]. We follow the strategy of [20, 75] and take only the per-pixel _minimum_ into account when aggregating the costs.
\[\mathcal{L}_{\text{ph}}=\min_{k\in N_{\text{color}}}\left(\lambda_{\text{L1} }\text{L1}(P_{i},\hat{P}_{i,k})+\lambda_{\text{SSIM}}\text{SSIM}(P_{i},\hat{ P}_{i,k})\right) \tag{8}\]
\[\mathcal{L}_{\text{eas}}=|\delta_{x}d_{i}^{*}\,|\,e^{-|\delta_{x}P_{i}|}+| \delta_{y}d_{i}^{*}\,|\,e^{-|\delta_{y}P_{i}|} \tag{9}\]
Our final loss is then computed as a weighted sum:
\[\mathcal{L}=\mathcal{L}_{\text{sem}}+\lambda_{\text{ph}}\mathcal{L}_{\text{ ph}}+\lambda_{\text{eas}}\mathcal{L}_{\text{eas}} \tag{10}\]
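A sketch of the combined objective (7)-(10) for a single batch of patches is shown below. The per-pixel SSIM helper ssim_map, the use of standard cross-entropy in place of the weighted BCE, and the default weights are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def total_loss(sem_pred, sem_pgt, patches_rec, patch_gt, disp,
               ssim_map, l_l1=0.15, l_ssim=0.85, l_ph=1.0, l_eas=1e-3):
    """sem_pred: (B,c,h,w) logits; sem_pgt: (B,h,w) pseudo labels;
    patches_rec: list of (B,3,h,w), one per render frame k; disp: (B,h,w)."""
    # (7): cross-entropy as a stand-in for the paper's weighted BCE.
    loss_sem = F.cross_entropy(sem_pred, sem_pgt)
    # (8): per-pixel minimum over reconstructions from all render frames k.
    per_view = torch.stack([l_l1 * (p - patch_gt).abs().mean(dim=1)
                            + l_ssim * ssim_map(p, patch_gt)   # assumed helper
                            for p in patches_rec])
    loss_ph = per_view.min(dim=0).values.mean()
    # (9): edge-aware smoothness of the rendered disparity.
    dx_d = (disp[:, :, 1:] - disp[:, :, :-1]).abs()
    dy_d = (disp[:, 1:, :] - disp[:, :-1, :]).abs()
    dx_i = (patch_gt[..., :, 1:] - patch_gt[..., :, :-1]).abs().mean(dim=1)
    dy_i = (patch_gt[..., 1:, :] - patch_gt[..., :-1, :]).abs().mean(dim=1)
    loss_eas = (dx_d * torch.exp(-dx_i)).mean() + (dy_d * torch.exp(-dy_i)).mean()
    return loss_sem + l_ph * loss_ph + l_eas * loss_eas        # (10)
```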
## 4 Experiments
To demonstrate the capabilities of our proposed method, we conduct a wide range of experiments. First, we evaluate our method using the new SSCBench dataset [38] on KITTI-360 [43] and achieve performance that closely rivals state-of-the-art, but supervised, methods. We also conduct ablation studies to justify our design choices and to demonstrate synergistic effects between semantic and geometric training. Second, we show that our method can synthesize high-quality segmentation maps for novel viewpoints.
### Implementation Details
For the architecture, we rely on a ResNet-50 [24] pre-trained on ImageNet as the backbone and a prediction head based on [20]. The feature vectors \(f_{\mathbf{u}}\) have a dimension of \(64\). Both MLPs \(\phi_{D}\) and \(\phi_{S}\) are very lightweight with two fully-connected layers of \(64\) hidden nodes each. They do not need more capacity, as all information is already contained in the feature vector \(f_{\mathbf{u}}\) and the MLPs are tasked to decode the contained information at a certain distance.
We implement our architecture entirely in PyTorch [54] and train with a batch size of 16 on a single Nvidia A40 GPU with 48GB memory. For every input image, we sample \(32\) patches of size \(8\times 8\) for RGB and semantics reconstruction each. For further technical details, please refer to the supplementary material.
### Data
For training and testing our method, we use the established KITTI-360 [43] dataset, which consists of video sequences captured by multiple cameras mounted on top of a moving vehicle. Besides a pair of forward-facing stereo cameras, KITTI-360 provides recordings from two fisheye cameras facing sideways left and right. The fisheye cameras allow us to gather geometric and semantic information in parts of
the scene occluded in the source view. We are interested in an area approximately 50 meters in front of the car. Based on this and considering an average speed of 30-50 kph, we sample fisheye views between 1 and 4 seconds into the future. The recordings in KITTI-360 have a frame rate of \(10\)Hz, which translates to an offset of 10 to 40 timesteps. In total, we use eight images per sample during training: four forward-facing views (one of which is the input image) and four sideways-facing views.
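This view sampling can be sketched as follows; only the 10-40-timestep range for the side views is taken from the text, while the forward-view offsets and helper names are illustrative assumptions.

```python
import random

def sample_training_views(t, min_off=10, max_off=40):
    """t: index of the input frame in the video sequence."""
    forward = [t, t + 1, t + 2, t + 3]      # assumed consecutive forward views
    side = [(cam, t + random.randint(min_off, max_off))  # random future offsets
            for cam in ('fisheye_left', 'fisheye_right')
            for _ in range(2)]                           # two views per side camera
    return forward, side
```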
To generate the pseudo segmentation labels, we run the off-the-shelf segmentation network Panoptic-Deeplab [11] trained on Cityscapes [14].
### SSC Performance on SSCBench
To compare the performance of our method with supervised approaches, we evaluate the predicted semantic fields on the new SSCBench dataset [38], which is defined on KITTI-360 [43]. This dataset aggregates Lidar scans over entire driving sequences and builds voxel grids with occupancy and semantic information. These voxel grids can be used to compute both occupancy (geometry) and semantic-focused performance metrics. SSCBench follows the setup of SemanticKITTI [4] and uses a voxel resolution of 0.2m on scenes of size \(51.2m\times 51.2m\times 6.4m\), i.e., \(256\times 256\times 32\) voxel volumes. Evaluation on SSCBench happens at three ranges of \(12.8m\times 12.8m\times 6.4m\), \(25.6m\times 25.6m\times 6.4m\), and \(51.2m\times 51.2m\times 6.4m\). SSCBench also provides invalid masks for voxels that do not belong to the scene. As these are quite coarse, we use slightly refined invalid masks. For fairness, we rerun all related methods on this refined data and observe a minor performance increase for all approaches. For further details, please refer to the supplementary material.
It is important to note that evaluating on SSCBench means bridging a non-trivial domain gap for our method. Our approach is trained via 2D supervision, which means that scene geometry must be pixel-level accurate. Occupancy is predicted for areas _within_ objects. On the other hand, SSCBench ground truth was collected by aggregating Lidar point clouds. Here, a voxel is considered occupied when a Lidar measurement point lies within the voxel, but Lidar measurement points only capture the _surface_ of an object. Therefore, the voxel ground truth of SSCBench tends to grow objects in size.
We use two techniques during the discretization of our implicit semantic field into a voxel grid to best align our predictions with the way the ground truth was captured. First, we leverage the unique advantage of our architecture to be not bound to the coarse resolution of discrete voxel grids. For every voxel, we query _multiple_ different points distributed evenly within the voxel. The final prediction is obtained by taking the maximum occupancy value among the points and weighting the class predictions accordingly. The intuition behind this is that we want to check whether there is a surface anywhere in the voxel. However, we observe that volumetric rendering encourages the occupancy to blur at object borders. In a second step, we, therefore, perform a neighbourhood check. A voxel is considered occupied when volumetric occupancy is observed in at least one of the immediate neighbouring voxels.
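Both discretization heuristics are simple tensor operations. The sketch below (with an assumed query_fn interface, and picking the class at the maximum-occupancy sub-sample as a simplification of the described weighting) queries multiple sub-voxel points per cell and then dilates occupancy with a neighbourhood check.

```python
import torch
import torch.nn.functional as F

def field_to_voxels(query_fn, centers, offsets, thresh=0.5):
    """centers: (V,3) voxel centers; offsets: (S,3) sub-voxel query offsets.
    query_fn maps (P,3) points to densities (P,) and class logits (P,c)."""
    pts = centers[:, None, :] + offsets[None, :, :]           # (V, S, 3)
    sigma, logits = query_fn(pts.reshape(-1, 3))
    sigma = sigma.reshape(len(centers), -1)                   # (V, S)
    occ = sigma.max(dim=1).values > thresh                    # max over sub-samples
    best = sigma.argmax(dim=1)                                # densest sub-sample
    labels = logits.reshape(len(centers), -1, logits.shape[-1])[
        torch.arange(len(centers)), best].argmax(dim=-1)      # (V,)
    return occ, labels

def neighbour_check(occ_grid):
    """occ_grid: (X,Y,Z) bool; keep voxels whose 3x3x3 neighbourhood is occupied."""
    dil = F.max_pool3d(occ_grid.float()[None, None], kernel_size=3,
                       stride=1, padding=1)
    return dil[0, 0] > 0
```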
Our method is the first to approach SSC in a self-supervised manner without relying on 3D ground truth. We compare against several fully supervised, state-of-the-art approaches, namely MonoScene [8], LMSCNet [60], and SSCNet [64]. While MonoScene takes single images as input at test time, LMSCNet and SSCNet require Lidar scans even at test time.
As can be seen in Tab. 1, even though we tackle a significantly more challenging task, our proposed **S4C** method achieves occupancy IoU and segmentation mean IoU results that closely rival these of MonoScene [8], which is trained with annotated 3D Lidar ground truth. Furthermore, the results are not far off from the methods that take Lidar inputs at test time. Even though further away regions (25.6m and 51.2m) are usually more challenging for self-supervised approaches, as fewer camera views observe them, the performance of our method stays stable even when evaluating further away distances. We attribute this to the strategy of sampling side views at a random time offset. While the occupancy precision of our method is lower than other methods, our occupancy recall is significantly higher. A possible reason for this is that our method tends to place occupied voxels in areas that are unobserved and that cannot be hallucinated in a meaningful way from the observations in the image. Methods trained on inherently more sparse voxel ground truth tend to place unoccupied voxels in such regions.
When visualizing the predicted voxel grids, as shown in Fig. 3, we can see that our method is able to accurately reconstruct the given scene and to assign correct class labels. The general structure of the scene is recovered at a large scale and smaller objects like cars are clearly separated from the rest of the scene. Even for further distances, which are more difficult for camera-based methods, our method still provides reasonable reconstructions. As mentioned before, our method can be observed to place more occupied voxels in unseen, ambiguous regions. Generally, the qualitative difference between the reconstructions by our method and supervised approaches is very low. This suggests that self-supervised training is a viable alternative to costly fully-supervised approaches to SSC.
### Ablation studies
To give insights into the effect of our design choices, we conduct thorough ablation studies using the SSCBench ground truth.
First, to investigate the effect of the different 2D supervision signals, we train our architecture with the different loss terms turned off and report the results in Tab. 2. The photometric loss alone can already give a clear training signal and allows the network to recover accurate scene geometry. Despite the semantic loss being very high-level and not providing a signal for smaller geometric details, it is enough to guide the network to predict rough geometry correctly. As shown in Fig. 4, the model can synthesize depth maps that clearly depict a geometric understanding of the scene.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline _Method_ & \multicolumn{3}{c|}{**S4C (Ours)**} & \multicolumn{3}{c|}{**MonoScene [8]**} & \multicolumn{3}{c|}{**LMSCNet [60]**} & \multicolumn{3}{c}{**SSCNet [64]**} \\ \hline **Supervision** & \multicolumn{3}{c|}{Camera only} & \multicolumn{3}{c|}{Lidar training} & \multicolumn{3}{c|}{Lidar training} & \multicolumn{3}{c}{Lidar training} \\ \hline **Range** & 12.8m & 25.6m & 51.2m & 12.8m & 25.6m & 51.2m & 12.8m & 25.6m & 51.2m & 12.8m & 25.6m & 51.2m \\ \hline
**IoU** & 54.64 & 45.57 & 39.35 & 58.61 & 48.15 & 40.66 & 66.74 & 58.48 & 47.93 & 74.93 & 66.36 & 55.81 \\
**Precision** & 59.75 & 50.34 & 43.59 & 71.79 & 67.02 & 64.79 & 80.58 & 76.75 & 76.87 & 83.65 & 77.85 & 75.41 \\
**Recall** & 86.47 & 82.79 & 80.16 & 76.15 & 63.11 & 52.20 & 79.54 & 71.07 & 56.00 & 87.79 & 81.80 & 68.22 \\ \hline
**mIoU** & 16.94 & 13.94 & 10.19 & 20.44 & 16.42 & 12.34 & 21.93 & 21.81 & 15.36 & 26.64 & 24.33 & 19.23 \\ car & 22.58 & 18.64 & 11.49 & 36.05 & 29.19 & 20.87 & 39.6 & 32.48 & 20.63 & 52.72 & 45.93 & 31.89 \\
**bicycle** & 0.00 & 0.00 & 0.00 & 2.69 & 1.07 & 0.49 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
**motorcycle** & 0.00 & 0.00 & 0.00 & 4.70 & 1.44 & 0.59 & 0.00 & 0.00 & 0.00 & 1.41 & 0.41 & 0.19 \\
**truck** & 7.51 & 4.37 & 2.12 & 19.81 & 14.14 & 8.48 & 0.62 & 0.44 & 0.23 & 16.91 & 14.91 & 10.78 \\
**other-vehicle** & 0.00 & 0.01 & 0.06 & 8.81 & 5.61 & 2.78 & 0.00 & 0.00 & 0.00 & 1.45 & 1.00 & 0.60 \\
**person** & 0.00 & 0.00 & 0.00 & 2.26 & 1.30 & 0.87 & 0.00 & 0.00 & 0.00 & 0.36 & 0.16 & 0.09 \\
**road** & 69.38 & 61.46 & 48.23 & 82.94 & 73.32 & 58.23 & 84.60 & 81.24 & 69.06 & 87.81 & 85.42 & 73.82 \\
**sidewalk** & 45.03 & 37.12 & 28.45 & 56.51 & 43.53 & 32.70 & 60.73 & 51.28 & 36.71 & 67.19 & 60.34 & 46.96 \\
**building** & 26.34 & 28.48 & 21.36 & 39.17 & 38.02 & 31.79 & 48.59 & 51.15 & 41.22 & 53.93 & 54.55 & 44.47 \\
**fence** & 9.70 & 6.37 & 3.64 & 12.36 & 6.70 & 3.83 & 1.64 & 0.62 & 0.26 & 14.39 & 10.73 & 6.42 \\
**vegetation** & 35.78 & 28.04 & 21.43 & 38.26 & 31.51 & 25.67 & 51.17 & 46.90 & 38.70 & 56.56 & 51.77 & 43.30 \\
**terrain** & 35.03 & 22.88 & 15.08 & 38.05 & 27.30 & 19.29 & 43.23 & 32.59 & 23.54 & 43.47 & 36.44 & 27.83 \\
**pole** & 1.23 & 0.94 & 0.65 & 10.41 & 9.25 & 7.34 & 0.00 & 0.00 & 0.00 & 1.03 & 1.05 & 0.62 \\
**traffic-sign** & 1.57 & 0.83 & 0.36 & 9.20 & 7.98 & 5.68 & 0.00 & 0.00 & 0.00 & 1.01 & 1.22 & 0.70 \\
**other-object** & 0.00 & 0.00 & 0.00 & 6.62 & 5.17 & 3.44 & 0.00 & 0.00 & 0.00 & 1.20 & 0.97 & 0.58 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Quantitative evaluation on SSCBench-KITTI-360**. We report the performances with respect to different ranges (12.8m, 25.6m, and 51.2m). We provide both **geometric** (IoU, Precision, Recall) and **semantic** (mIoU, per class IoU) metrics. As we use refined invalid masks on SSCBench, we rerun all methods with the provided checkpoints. MonoScene [8] is trained with Lidar but also uses a single image at test time. LMSCNet [60] and SSCNet [64] are trained on Lidar data and require a sparse Lidar scan at test time.
Figure 3: **Predicted voxel grids for SSCBench-KITTI-360**. The qualitative evaluation of our method on occupancy maps shows that our method is able to accurately reconstruct and label the scene. Especially a comparison to other image-based methods like MonoScene shows that **S4C** is able to recover details such as the driveway on the right in image 1. The resulting voxel occupancy from **S4C** shows fewer holes than for Lidar-based training, which reproduces holes found in the ground truth.
Best occupancy results are achieved when training relies on both losses. We hypothesize that the object boundaries in segmentation maps, which are much clearer than in regular images, help the model to learn sharper geometry.
Second, we investigate the impact of our view sampling strategy. When sampling sideways-facing views only at a fixed distance (\(1s\) into the future), the model is not able to learn about far-away geometry. Therefore, the performance is weaker, especially when evaluating larger scenes (\(25.6m\) and \(51.2m\)). This effect can also be observed qualitatively in Fig. 4.
### Synthesizing Segmentation Maps
Besides experiments for SSC, we also analyse the effectiveness of our training with pseudo-ground truth labels from an off-the-shelf segmentation network. To this end, we train our model with different levels of pseudo supervision by providing pseudo segmentation maps for the input frame (one image), all forward-facing views (four images), and all forward and sideways-facing views (eight images), respectively.
We evaluate the segmentation performance for the input image against the real ground-truth segmentation provided by KITTI-360. Furthermore, we project segmentation maps 5, 10 and 15 time steps into the future using our predicted geometry. This tests the model's ability to reason about segmentation in 3D. We report results in Tab. 3.
Providing segmentation masks for more frames during training improves the segmentation performance in all settings. Given pseudo ground truth for all frames, our model is even able to improve over the pseudo segmentation ground truth it was trained with, which achieves an accuracy of \(87.7\%\) against the KITTI-360 ground truth. This trend becomes more pronounced when synthesizing novel segmentation views several timesteps away, where the discrepancy between the best and worst model configuration rises from \(1.3\%\) to over \(3.4\%\).
We hypothesize that this is due to two reasons: First, the pseudo-segmentation ground truth is imperfect and contains artefacts. By forcing the model to satisfy segmentations from different views, the model automatically learns cleaner segmentation masks with fewer artefacts. Second, by having more training views, we improve the 3D geometry, as shown in Sec. 4.4, and are able to learn better semantic labels in occluded regions. The further away we synthesize views, the more important such 3D semantic understanding is for accurate predictions.
## 5 Conclusion
This work presents **S4C**, a novel, image-based approach to semantic scene completion. It allows reconstructing both the 3D geometric structure and the semantic information of a scene from a single image. Although not trained on 3D ground truth, it achieves close to state-of-the-art performance on the KITTI-360 dataset within the SSCBench benchmark suite. As the first work on self-supervised semantic scene completion, **S4C** opens up the path towards scalable and cheap holistic 3D understanding.
Acknowledgements.This work was supported by the ERC Advanced Grant SIMULACRON, by the Munich Center for Machine Learning and by the EPSRC Programme Grant VisualAI EP/T028572/1. C. R. is supported by VisualAI EP/T028572/1 and ERC-UNION-CoG-101001212.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Semantic & Photometric & View Offset & 12.8m & 25.6m & 51.2m \\ \hline ✓ & ✗ & 1s-4s & 31.11 & 32.04 & 27.88 \\ ✗ & ✓ & 1s-4s & 50.26 & 41.94 & 37.40 \\ ✓ & ✓ & 1s & 52.00 & 41.96 & 36.67 \\ ✓ & ✓ & 1s-4s & **54.64** & **45.57** & **39.35** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation studies on SSCBench-KITTI-360**. We report IoU for occupancy. The full model achieves the best performance.
Figure 4: **Effects of different loss terms**. We show expected ray termination depth for the input image and a corresponding side view for different loss configurations. The full model produces the sharpest results. We also show reconstructions with and without random offsetting of the side views. This technique helps to correctly capture objects that are further away and reduces trailing effects.
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline Training Configuration & KITTI GT & +5 & +10 & +15 \\ \hline Only Input Frame & 86.50\% & 82.64\% & 77.74\% & 73.65\% \\ Only Front & 86.76\% & 83.09\% & 77.73\% & 73.15\% \\
**Full** & **87.81\%** & **84.88\%** & **81.07\%** & **77.19\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Accuracy of synthesized segmentation maps**. Segmentation maps from **S4C** produce accurate results when compared to the ground truth. Predicting segmentation maps for other frames (5, 10, and 15 timesteps in the future) shows the geometric accuracy of **S4C**. |
2303.10166 | Computer Assisted Proofs and Automated Methods in Mathematics Education | This survey paper is an expanded version of an invited keynote at the
ThEdu'22 workshop, August 2022, in Haifa (Israel). After a short introduction
on the developments of CAS, DGS and other useful technologies, we show
implications in Mathematics Education, and in the broader frame of STEAM
Education. In particular, we discuss the transformation of Mathematics
Education into exploration-discovery-conjecture-proof scheme, avoiding usage as
a black box . This scheme fits well into the so-called 4 C's of 21st Century
Education. Communication and Collaboration are emphasized not only between
humans, but also between machines, and between man and machine. Specific
characteristics of the outputs enhance the need of Critical Thinking. The usage
of automated commands for exploration and discovery is discussed, with mention
of limitations where they exist. We illustrate the topic with examples from
parametric integrals (describing a "cognitive neighborhood" of a mathematical
notion), plane geometry, and the study of plane curves (envelopes, isoptic
curves). Some of the examples are fully worked out, others are explained and
references are given. | Thierry Noah Dana-Picard | 2023-03-10T11:35:50Z | http://arxiv.org/abs/2303.10166v1 | # Computer Assisted Proofs and Automated Methods in Mathematics Education
###### Abstract
This survey paper is an expanded version of an invited keynote at the ThEdu'22 workshop, August 2022, in Haifa (Israel). After a short introduction on the developments of CAS, DGS and other useful technologies, we show implications in Mathematics Education, and in the broader frame of STEAM Education. In particular, we discuss the transformation of Mathematics Education into exploration-discovery-conjecture-proof scheme, avoiding usage as a black box. This scheme fits well into the so-called 4 C's of 21st Century Education. Communication and Collaboration are emphasized not only between humans, but also between machines, and between man and machine. Specific characteristics of the outputs enhance the need of Critical Thinking. The usage of automated commands for exploration and discovery is discussed, with mention of limitations where they exist. We illustrate the topic with examples from parametric integrals (describing a "cognitive neighborhood" of a mathematical notion), plane geometry, and the study of plane curves (envelopes, isoptic curves). Some of the examples are fully worked out, others are explained and references are given.
## 1 Introduction
### A long history made very short, with a personal touch.
During the 1980's, a computer-assisted proof was not always easily accepted by the community. At the beginning of the era of machine-assisted computations, only numerical algorithms existed, allowing approximate results. These algorithms could not always provide all the solutions to a given problem.
Later, algorithms for symbolic computations began to be developed and implemented. Sometimes the researcher had to write, in a general-purpose language, a specific program for his/her problem. M. Schaps did this in the 80's for the classification of generic associative algebras [39, 40], writing programs for computing Hochschild cohomology groups and groups of automorphisms of given algebras, each one allowing to check the validity of the results of the other. Vanishing of the cohomology group ensures that the given algebra is rigid, whence it defines a component in the variety \(\mathbf{Alg}_{n}\) which parameterizes the \(n-\)dimensional associative algebras. For local algebras, i.e. algebras of the form \(\mathbb{F}[X]/\mathcal{I}\), where \(\mathcal{I}\) is an ideal in the polynomial ring \(\mathbb{F}[X]\), we began using computations of Grobner bases. The project yielded the classification of generic algebras in dimensions 6, 7 and 8; see [39]. Smaller dimensions had been studied previously.
Other researchers in the world were already working using computer computations, but at that time a disclaimer was sometimes added about the non-responsibility of the journal regarding the computations. The computations were time-consuming: for one local algebra, the computation could take 20 minutes on a 4.77 MHz PC. In order to read the output we had to print it (on wide continuous sheets), and we stored the printouts in case somebody would request to check them.
Development and usage of software did not remain the exclusive property of researchers. Of course, as long as computers were too big and too expensive, they could not enter the regular classroom. With
the first hand-held devices and the personal computers, things changed. Hand-held calculators worked numerically, but quite quickly algorithms for symbolic computations were developed and implemented in a variety of devices. The Derive software fit on a small floppy disk, a consequence of the choice of algorithms. For example, the computation by Derive of definite integrals is based on a theorem which is both easy to prove and rarely presented in textbooks [24]. In [69], we proposed a parametric improper integral to Derive. Derive computed it immediately with the general parameter; at that time, no other system could do it, except for small integer values of the parameter. Later, a "clone" of the software was implemented in the TI-92. Today, GeoGebra has versions for PCs, iPads and smartphones. In the recent period, mobile versions of the different mathematical software packages have allowed the development of outdoor mathematical activities [6, 5].
Classical methods of teaching mathematics sometimes led teachers to think that mathematics is a fixed domain of knowledge. Even teacher trainers occasionally claim this. But the world has changed profoundly, and the various available technologies have transformed mathematics into an experimental domain, where novelties can be discovered at an early stage of education. P. Quaresma says ([61]): "Scientific research and education at all levels are concerned primarily with the discovery, verification, communication, and application of scientific knowledge. Learning, reusing, inventing, and archiving are the four essential aspects of knowledge accumulation in mankind's civilization process. In this cycle of knowledge accumulation, which has been supported for thousands of years by written books and other physical means, rigorous reasoning has always played an essential role. Nowadays this process is becoming more and more effective due to the availability of new paradigms based on computer applications. Geometric reasoning with such computer applications is one of the most attractive challenges for future accumulation and dissemination of knowledge."
In this paper, we relate mostly to the two kinds of mathematical software, namely Computer Algebra Systems (CAS) and Dynamic Geometry Systems (DGS)1. Actually, the distinction between them fades more and more. For example, GeoGebra began as a DGS, but developed in other directions, and a CAS called Giac [52] is embedded in it. New features include also tools for Augmented Reality, allowing outdoor activities, among others; see [12, 53].
Footnote 1: There exist other kinds of software, proof assistants and theorem provers, whose importance cannot be overemphasized. We refer the interested reader to recent contributions to ThEdu, also in this volume
The present survey paper is an expanded version of the keynote delivered at the ThEdu'22 workshop, August 2022, in Haifa (Israel).
### Computer Algebra Systems.
Decades ago, advanced specialized programs were written, such as GAP - an acronym for Groups, Algorithms, Programming - for Group Theory (its scope is much broader now), FeliX for Number Theory, Macaulay2 for Algebraic Geometry, CoCoA for Computations in Commutative Algebra, etc. All these are examples of Computer Algebra Systems (CAS). General-purpose CAS also began to integrate various mathematical fields, with both symbolic and numerical algorithms; they are now powerful multi-domain assistants. The web also offers interactive usage of platforms where different CAS work in the background; this is the case with the education-oriented platform WIMS.
A core feature of a CAS is the ability to manipulate symbolic mathematical expressions, in a way similar to the traditional manual computations. Of course, the syntax of the commands can vary and some CAS may be thought as more user-friendly than others. This can be an issue when working with students.
Among the needed features are a programming language (enabling the user to enter his/her own algorithms), an interpreter and a simplifier. A **simplify** command with several options is important (pattern recognition alone may not be enough to select an efficient algorithm). Note that for a given computation, the output may be quite different from what the user would have obtained by hand (when hand computation is possible at all), and that two different packages may give different outputs. Simplifying helps the user understand that different output formats may determine the same mathematical object. Some examples in Section 4 illustrate this fact.
Algorithms for Calculus, Linear Algebra, and other domains have to be used by students and teachers. Therefore, the CAS must include a large library of algorithms and special functions. We refer to a Wikipedia page2 for a more exhaustive list of requested features, including some abilities of interest to the developers. The first list on that page does not mention the plotting features: visualization, plotting, animations. They appear in a second list, not less important. We illustrate this in Section 4.
Footnote 2: [https://en.wikipedia.org/wiki/Computer_algebra_system](https://en.wikipedia.org/wiki/Computer_algebra_system)
The Wikipedia page adds that a CAS should also include: a programming language, allowing users to implement their own algorithms; arbitrary-precision numeric operations; exact integer arithmetic and number theory functionality; editing of mathematical expressions in two-dimensional form; plotting graphs and parametric plots of functions in two and three dimensions, and animating them; drawing charts and diagrams; APIs for linking it to an external program, such as a database, or for using the CAS from within a programming language; string manipulation, such as matching and searching; and add-ons for use in applied mathematics, such as physics, bioinformatics, computational chemistry, and packages for physical computation. Some CAS include graphic production and editing, such as computer-generated imagery, and signal processing, such as image processing and sound synthesis. DGS may fulfill a certain number of these requirements.
As a visualization of the large variety of existing packages, Figure 1 shows the Google answer to a search for Computer Algebra System.
The danger is to use a CAS as a black box, i.e. the user writes a command without any idea of how it works, enters the data, receives an output and relies on it. It is common sense that an educator will not go this way.
### Dynamic Geometry Systems.
Later, other packages called Dynamic Geometry Systems (DGS) appeared: The Geometer's Sketchpad, Cinderella, Cabri Geometer, GeoGebra, etc. The need for programming did not disappear, but a core feature is interactivity. The man-and-machine interaction is different from what is generally offered by a CAS. A DGS provides ways of representing and manipulating geometric objects that are not possible with traditional paper-and-pencil work using a compass and straightedge. These various environments provide opportunities for students to explore geometric objects and to measure
Figure 1: Google visual answer to a search for a CAS
objects on the screen and outdoors. They can help students develop different understandings of many properties and theorems. With the appearance of the dragging and measuring offered by a DGS, teachers were sometimes reluctant to use them, as they feared that the role of proof would fade. Mariotti [56] reflected on these fears, but proposed a more positive view. Her reflection was based on work with the Cabri software.
A DGS offers both button-driven commands and written commands, using a syntax similar to what exists in a CAS. A command may have both a written version and a button, sometimes with slightly different affordances. For example, GeoGebra's button for attaching a point to an object has a written-command version that works in cases where the button does not. As a whole, this double feature of buttons and written commands contributes to the software's user-friendliness.
The main dynamical features are the dragging of points and the slider bars. The basic objects are free points, and other constructions depend on these free points. Dragging a free point with the mouse automatically induces changes in every object depending on it. For example, the midpoint \(I\) of a segment \(AB\) depends on the endpoints \(A\) and \(B\). Dragging \(A\) (or \(B\), one at a time) with the mouse changes \(I\). A slider bar corresponds to a real parameter; it allows one to define and dynamically change objects depending on this parameter. For multi-parameter constructs, a slider is defined for each parameter (generally the software requests this automatically). Using sliders, animated graphs and constructs are available. Dragging and sliders transformed Geometry into an experimental field. The key words in teaching Geometry are now exploration and experimentation. Changes via dragging and sliders provide infinitely many examples of the situation under study. This is not a proof, but it provides conviction. Moreover, a conjecture can be stated, and then has to be proven. The sequence exploration-conjecture-proof is ubiquitous when working with a DGS.
Animations are also offered by CAS, with an important difference. With a CAS, an animation is defined by a specific command with some options (size, number of frames per second, etc.). Then the animation runs without human intervention. The human can look at what happens and sometimes intervene, but this is minor. An animation with a DGS can be almost automatic: the Animation On option available with a slider is an important feature, sometimes accompanied by the Trace On option for the object under study. It can also be mouse-driven, i.e. driven by the human. Examples are given in Section 4. The interested reader can also refer to a dedicated Wikipedia page3.
Footnote 3: [http://en.wikipedia.org/wiki/Interactive_geometry_software](http://en.wikipedia.org/wiki/Interactive_geometry_software)
Mastering the moves with the hand makes the software a kind of prosthesis for the human; Debray says that being able to build a prosthesis is what makes man human [46]. Are we perhaps dealing now with an augmented human? We prefer to speak about a human, fully human, with a strong digital literacy.
### Core issues with man-and-machine and machine-and-machine interaction
The availability of both CAS and DGS opened large new fields of study, in research and in education. Their respective sets of affordances are distinct, with a non-empty intersection. Important results can be derived by networking the two kinds of technology [65, 33], one being more powerful in geometry and dynamics and the other having stronger algebraic capabilities. After all, the study of a mathematical object is based on different registers of representation (algebraic, numerical, geometric, etc.) [49]. For example, some packages are devoted to 2D only or to 3D only. GeoGebra has both possibilities, the 2D window being fully synchronized with the \(xy-\)plane in the 3D window. Of course, they are also synchronized with the algebraic window. Considering different representations of the same
object, i.e., multiple points of view, enables on the one hand building bridges between mathematical fields (v.i. Section 2), and on the other hand deriving new theorems or new proofs of classical theorems (such as in [41]). Until now, in order to benefit from both environments, CAS and DGS, we generally need to copy-paste from one system to the other. The (lack of) communication between CAS and DGS has been addressed for a long time [65, 66, 33]; we hope that solutions will be found in the near future. Actually, one of them was the embedding of a CAS into GeoGebra [53], but as of today DGS and CAS are still different. In recent years, features of dynamic geometry have entered CAS and core CAS components have been implemented into DGS [53]. Work is still needed to achieve full integration of all the possibilities.
Besides this machine-machine communication and collaboration, new behaviours appeared in man-and-machine communication. This had also a great influence on communication and collaboration between humans. Shall we mention that the communication between humans and between humans and machines developed new aspects during the Covid-19 crisis [30]?
The joint usage of the different technologies, including web surfing and data mining, enables huge advances in the various components of the so-called 4 C's of 21st Century Education: Communication, Collaboration, Critical Thinking and Creativity [21]. The first two C's are generally considered between humans, but man-and-machine and machine-and-machine communication and collaboration are crucial. In [34], we even propose a 5th C, namely Curiosity, a must for humans to proceed further. New didactic situations can be considered, along with new ways to deal with them. Mathematics Education has been transformed from the traditional Definition-Theorem-Proof scheme into an exploratory domain. The sequence can now be Exploration-Discovery-Conjecture-Proof (or disproof).
The 4 C's are the conceptual basis of the STEAM Education approach. This can be illustrated by examples and activities in Abstract Algebra, Combinatorics together with Integrals, the study of Plane Algebraic Curves and Algebraic Surfaces, and also Mathematics and Arts. We refer to the papers in [64].
### Numerical vs symbolic representations
Between the symbolic data and the plot, a lot of numerical data is computed, possibly displayed on demand. Numerical data also induces the need to master approximations [31]. This numerical data is obtained, after choosing a mesh (in 3D) or a partition of the interval of definition (in 2D), by interpolation. This may lead to strange plots [79, 80]. The problem can be overcome either with options for discontinuity or by using a non-standard mesh (Maple proposes about 20 non-standard meshes for 3D plots).
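As a minimal Maple illustration of these discontinuity options (the function here is our own arbitrary choice), compare the same plot with and without the discont option:

plot(tan(x), x = -Pi .. Pi, y = -10 .. 10);                  # interpolation across the poles draws spurious vertical lines
plot(tan(x), x = -Pi .. Pi, y = -10 .. 10, discont = true);  # Maple locates the discontinuities symbolically and splits the interval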
For other needs, the differences between numerical and symbolic algorithms led GeoGebra's developers to propose two kinds of output, for example for the **Relation** command4. Figure 2 shows screenshots of the output. The regular Relation command provides an answer based on numerical data only. The "More" button runs other algorithms, this time symbolic; the output is more precise. It is an efficient tool in Education, assisting the teacher in the development of new activities.
Footnote 4: We used here a beta-version of _GeoGebra Discovery_, a package developed by Z. Kovács, working on the basis of the regular GeoGebra version. It contains a more advanced version of the **Relation** command. The package is freely downloadable from [https://github.com/kovzol/geogebra-discovery](https://github.com/kovzol/geogebra-discovery) (check there for the last updated version).
## 2 Cognitive neighborhood of a mathematical notion
Either with a single student or with a class, the teacher pursues several endeavours:
1. Stimulate students' curiosity for interlaced techniques, using more than one of the newly available technologies.
2. Make Mathematics more attractive, and show it as a living field of knowledge by discovering new tracks.
3. Discover links between apparently different fields. In a traditional curriculum, courses are generally taught as separate topics, often without bridges between them. For example, some teachers are reluctant to introduce example for Calculus into a course in Linear Algebra, despite the natural structure of a vector space of the set of continuous (resp. differentiable) functions over an interval in \(\mathbb{R}\).
4. Explore other mathematical objects "looking like" the mathematical object of study (consolidation). Such a task is generally built by the teacher, i.e. in this context, the teacher is active and creative.
An advanced step in teaching integrals is provided by definite integrals depending on one (or more) parameter. This is a more abstract situation than ordinary definite integrals, and an important one in applied settings [42]. The study of parametric definite integrals provides intra-mathematics connections (integration, combinatorics, applications to physical problems), which may be strengthened by topics in the history of mathematics. The mathematical notions, the bridges between them and the instrumented techniques, accompanied by an appropriate technological discourse (a term coined by Artigue [3]), build what we call a cognitive neighborhood. In the exploration of such a cognitive neighborhood, teachers and students consider mathematical objects "looking like" the original mathematical object of study. Such a task is generally built by the teacher, i.e. in this context, the teacher is active and creative; the student reproduces the teacher's working steps. Such activities incite the students to search for related material. The exploration of the student's Zone of Proximal Development (ZPD [73]) builds step by step an ever larger neighborhood.
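To fix ideas, here is a minimal Maple sketch of such a parametric definite integral (the integrand is our own illustrative choice, not one of the integrals studied in the papers cited above):

J := int(1/(x^2 + a^2), x = 0 .. infinity) assuming a > 0;  # Maple returns Pi/(2*a), valid for every positive value of the parameter
eval(J, a = 2);                                             # specializing the parameter recovers an ordinary definite integral: Pi/4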
Links to neighboring mathematical topics and to "real-world" situations can be discovered. Here the student is more autonomous and can show more initiative. Actually, both the educator and the student are creative.
Figure 3 shows such a construct for parametric integrals, as explored in [43, 44].
Figure 2: Pop-ups offered by the Relation command
## 3 Next step: Automated Deduction in Geometry (ADG)
This field of R & D has seen tremendous developments during the last decades; tens of papers and conference presentations have been devoted to the advances. Therefore we mention only a small sample. The interested reader can use the vast bibliographies in the few papers that we mention, and also look at the proceedings of conferences such as ACA (Applications of Computer Algebra, especially, but not only, the special sessions on Education) and the ADG conferences. In the "far" past, a short survey appeared in [26].
In [10], Botana and Abanades define ADG "as the study and development of computer programs designed to prove geometry theorems". They then trace the domain back to 1963, when Gelernter [49] connected it to Artificial Intelligence. They write: "However, the real flourishing of the field came in the early 1980's with the development by Wu of an algebraic method based on Ritt's characteristic set for proving a restricted set of geometry theorems [76]. Impressive results by several authors using Wu's method [77, 75, 76] encouraged researchers to consider other algebraic methods, among which those based on Grobner bases [19] proved to be the most relevant."
Along the years, other researchers and developers joined the domain. The definitions became more precise, in parallel with the continuous software developments. In 2016, we could find the following description [1]: "By automatic proving of elementary geometry theorems, we refer to the theorem proving approach via computational algebraic geometry methods, as initiated by Wu forty years ago, and popularized by the book of Chou [21]. Roughly speaking, the idea is to provide algorithms, using computer algebra methods, for confirming (or refuting) the truth of some given geometric statement. More precisely, the goal is to decide whether a given statement is generally true or not, i.e. true except for some degenerate cases, to be described by the algorithm."
The developments of different kinds of software, in the present case DGS and Automatic Provers enabled automated reasoning and dynamic geometry together. The collaboration of the different software revealed very successful. In 2020, Kovacs and Recio write [53]: "Along the last half century, automated deduction in elementary geometry has increasingly become one of the most successful achievements in the field of automated reasoning. Along these decades various methods and techniques have been studied and developed for automated proving and discovering of elementary geometry statements. On the other hand, dynamic geometry software systems have emerged, such as Cabri Geometry, C.a.R., Cinderella, DrGeo, GeoGebra, The Geometer's Sketchpad, Geometry Expert, Geometry Expressions or Kig, with an
Figure 3: Visualization of a cognitive neighborhood
ever-increasing presence in mathematics education. Some of them possess a large number of users (over thirty million) all around the world. The merging of these two tools (automatic proving and dynamic geometry) is, thus, a very natural, challenging and promising issue, currently involving logic, symbolic computation, software development, algebraic geometry and mathematics education experts from all over the world."
A central topic in Geometry is the determination of geometric loci. Numerous results have been obtained; some of them are presented in [9, 8]. Commands for the automated exploration and discovery of geometric loci have been developed and implemented in GeoGebra; we explore and illustrate several loci in subsection 4.2.
In subsection 4.3, we address another topic, namely envelopes of parametric families of plane curves. Despite Thom's complaint [71] (back in 1962) that the topic had disappeared from the syllabus, research did not stop, and numerous papers have been devoted to envelopes, software developments and Mathematics Education; a small sample is [15, 14, 39, 13, 27, 33]. For practical situations, approximate methods also had to be developed; see for example [67, 60]. An automated command has been implemented in GeoGebra for the determination of envelopes of families of plane curves. Examples are given in subsection 4.3. The first computations consist in solving a non-linear system of equations, which is feasible using a CAS. The output consists of a list of parametric presentations of curves, and may need to be simplified, whence the importance of the simplify command. Examining the equations together with plotting them reveals that the components are complementary. This has to be proven, generally with algebraic tools. First the parametric presentations have to be translated into polynomials; then Grobner packages can be applied. These packages may not be implemented in the DGS, and the work has to be transferred to another CAS, generally using copy-paste. The obtained polynomials generate an ideal in a polynomial ring. By elimination of the parameter, an implicit equation can be derived for the envelope. This is performed with a package for Grobner bases computations; see [23, 57] for the theory of Grobner bases and [68] for rational curves. Using the CAS once again, the obtained polynomial can be factored, revealing whether the envelope is reducible or not. Of course, the CAS offers plotting features, and their output confirms what has already been obtained with the DGS.
Different definitions of an envelope are given in [18]; we illustrate them with automated methods in subsection 4.3. Offsets are worked out as loci; the difference between them and envelopes is made clear by using a DGS, as in [28].
The third topic that we address later is the determination of isoptic curves of plane curves. It has been studied for years [22, 56, 70]. Working in a technology-rich environment enabled new results [35, 37]. More recently, inner isoptics have been studied [36]. Some dynamics have been introduced into the study of isoptics in [34]. As of today, no specific command has been implemented for the discovery of isoptics; nevertheless, the existing features allow exploration, computation and confirmation of the results. This is the topic of subsection 4.4.
To close this section, we wish to mention the book by Pavel Pech [59]. A chapter is devoted to automatic theorem proving and automatic discovery. The subsequent chapters deal with classical theorems in Geometry by means of polynomials with CoCoA, a freely downloadable software for computing in polynomial rings; see [2].
## 4 Other examples
We now present a couple of mathematical activities for which technology may have a crucial role. For exploration, discovery and proofs, we use GeoGebra and Maple.
### Experiment the continuity of a function
Mastering the \(\epsilon-\delta\) definition is often difficult for a student. Exploration with technology makes things easier to understand and more intuitive, so we developed an applet with GeoGebra5. Figure 4 shows two screenshots. The sliders enable changing the values of \(\epsilon\) and \(\delta\) independently, which is useful for a first exploration. The student notes that \(\epsilon\) determines a horizontal band and \(\delta\) a vertical one, whose intersection is a rectangle centered at the point \((x_{0},f(x_{0}))\); the question is whether the graph of the function passes fully within the rectangle. Discontinuity appears when \(\epsilon\) is quite small and no value of \(\delta\) achieves this.
Footnote 5: Available at [https://www.geogebra.org/m/bkj8872v](https://www.geogebra.org/m/bkj8872v)
Later, the teacher may propose another applet6, where the dependence of \(\delta\) on \(\epsilon\) is utilized, as shown in Figure 5, and only one slider appears. This applet requires preliminary human work to discover a formula for the dependence of \(\delta\) on \(\epsilon\). Therefore it is not suitable for exploration, but rather for illustration, and it may even be useful for consolidation of the new knowledge. Of course, the best situation is when the student is able to develop such an applet without relying on the teacher. A student may develop suitable skills for this; if not, the teacher may offer some scaffolding ([58], pages 105-107). A numerical way to explore the dependence of \(\delta\) on \(\epsilon\) is sketched below.
Footnote 6: [https://www.geogebra.org/m/zsb6p2vd](https://www.geogebra.org/m/zsb6p2vd)
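Before a formula for the dependence of \(\delta\) on \(\epsilon\) is derived, the dependence can be explored numerically. A possible Maple sketch (the procedure, its name and the halving strategy are our own choices, not part of the applets):

findDelta := proc(f, x0, eps)  # search for a delta matching a given epsilon
  local delta, i;
  delta := 1.0;
  while delta > 1e-8 do
    if max(seq(abs(evalf(f(x0 + delta*(i/50 - 1)) - f(x0))), i = 0 .. 100)) < eps then
      return delta;  # |f(x) - f(x0)| < eps on a sample of [x0 - delta, x0 + delta]
    end if;
    delta := delta/2;  # halve delta and try again
  end do;
  return FAIL;  # no delta found down to 1e-8: suspect a discontinuity at x0
end proc;
findDelta(x -> x^2, 1, 0.1);  # returns 0.03125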
### Exploration of a geometric locus
GeoGebra has several versions of an automated command for the determination of a geometric locus. According to the situation, whether the tracer is geometrically related to the mover or the points have to
Figure 4: Exploration of the continuity of a function according to the \(\epsilon-\delta-\) definition
Figure 5: A more advanced step for illustration of continuity
fulfill a given condition (a Boolean expression), the output has different forms: it can be a plot of a curve or a plot together with an implicit equation. Recent developments provide plots of regions in the plane7.
Footnote 7: Derive did it, representing inequalities in 2 variables graphically.
#### 4.2.1 Classical and less classical constructions.
Take two points \(A\) and \(B\) in the plane. The geometric locus of points \(M\) such that the ratio \(AM/BM\) is equal to \(k\) is the perpendicular bisector of the segment \(AB\) if \(k=1\), and a circle otherwise. This can be explored using the Locus command and a slider to let the ratio vary (see [https://www.geogebra.org/m/vqqkd57t](https://www.geogebra.org/m/vqqkd57t)). For each value of the parameter, an implicit equation is provided.
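The implicit equations reported by GeoGebra can also be recovered symbolically. A small Maple sketch, with concrete points of our own choosing:

A := [-1, 0]: B := [1, 0]:  # locus of M = (x, y) with AM/BM = k
locus := (x - A[1])^2 + (y - A[2])^2 - k^2*((x - B[1])^2 + (y - B[2])^2):
factor(eval(locus, k = 1));      # 4*x: the perpendicular bisector of AB
expand(eval(locus, k = 2)/(-3)); # x^2 - (10/3)*x + y^2 + 1: a circle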
The applet [https://www.geogebra.org/m/v2kdhrus](https://www.geogebra.org/m/v2kdhrus) shows the construction of the loci of the incenter and excenters of a triangle, where two vertices are fixed and the third one moves along a line; see Figure 6(b). Here the version of the command is Locus(\(<\)Point Creating Locus line \(>\),\(<\)Point\(>\)) and the output is a plot of the locus, but no equation is found.
The same kind of construction can be applied when looking for the geometric locus of points \(M\) such that the angle \(\angle AMB=\theta\) for a given \(\theta\). The output is a circle. In this case, the teacher has to draw the students' attention to the fact that the geometric locus is actually the circle without the two points \(A\) and \(B\), since if \(M\in\{A,B\}\), there is no angle at all. A good opportunity to educate for Critical Thinking [21].
#### 4.2.2 Beyond the directrix of a parabola.
Let \(\mathcal{P}\) be the parabola whose equation is \(y=x^{2}\). Denote by \(B\) a point on \(\mathcal{P}\) and by \(C\) a point on the \(y-\)axis of \(\mathcal{P}\). Let \(C^{\prime}\) be the image of \(C\) by the reflection about the tangent to \(\mathcal{P}\) at \(B\). Two examples are displayed in Figure 7. The Locus(\(<\)Point Creating the Line\(>\),\(<\)Point\(>\)) command creates the dotted plot without providing an equation. A database of plane curves may be searched in order to determine which curve has been obtained, and to try to find a suitable equation for it. Students observed that the limiting case between smooth curves and curves with a double point occurs when the point \(C\) is the focus of the parabola \(\mathcal{P}\), in which case the geometric locus of \(C^{\prime}\) is the directrix of the parabola.
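The limiting case observed by the students reflects the classical fact that the reflection of the focus about any tangent lies on the directrix. A possible Maple verification (the point names are ours):

F := [0, 1/4]:  # focus of y = x^2; the tangent at B = (b, b^2) is 2*b*x - y - b^2 = 0
n := [2*b, -1]:  # a normal vector of the tangent line
d := (2*b*F[1] - F[2] - b^2)/(n[1]^2 + n[2]^2):
map(simplify, [F[1] - 2*d*n[1], F[2] - 2*d*n[2]]);  # returns [b, -1/4], a point of the directrix y = -1/4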
#### 4.2.3 A crosscurve.
Let a circle centered at the origin be given. We explore the geometric locus of the midpoint of the two intercepts, with the coordinate axes, of the tangents to the circle. An automated command is available in GeoGebra, the Locus command. Several versions are proposed, and the student has to choose among them. A snapshot of a session8 is displayed in Figure 8: a purely geometric construct is performed, enabling the
Figure 6: Geometric loci
usage of the Locus (\(<\)Point\(>\), \(<\)Point\(>\)) command. Here the tracer is the midpoint and the mover is the point on the circle (for details on the terminology see [54]).
Identification of the obtained curve may not be easy. In our example, the circle has radius 2. The geometric locus has equation
\[\frac{1}{x^{2}}+\frac{1}{y^{2}}=1 \tag{1}\]
This equation has been derived either by hand or with a CAS. In the reverse direction, it could have been discovered in a catalogue of classical plane curves9, but this is an unilluminating search. Maybe a web search based on a picture would help; we did not try it.
Footnote 9: Such as the Mathcurve website. The curve appears there at [https://mathcurve.com/courbes2d.gb/cruciforme/cruciforme.shtml](https://mathcurve.com/courbes2d.gb/cruciforme/cruciforme.shtml)
**Remark 1**: _Equation (1) can be transformed into a polynomial equation, namely_
\[x^{2}+y^{2}=x^{2}y^{2}. \tag{2}\]
_This may be an indication that packages for polynomial computations may be useful for the current exploration. Here too, Critical Thinking has to be applied, as the domains of validity are different._
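Equation (1) can also be re-derived symbolically. A minimal Maple sketch, where the parametrization of the tangent is our own choice:

M := [1/cos(u), 1/sin(u)]:  # midpoint of the intercepts of the tangent cos(u)*x + sin(u)*y = 2 to the circle of radius 2
simplify(1/M[1]^2 + 1/M[2]^2);  # returns 1, i.e. Equation (1)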
#### 4.2.4 A sectrix of Maclaurin.
A _sectrix of Maclaurin_ is the geometric locus of the point of intersection of two lines, each revolving at a constant rate about one of two different points, called poles. Figure 9
Figure 8: Automated exploration of a geometric locus
Figure 7: The locus of reflected parabola \(y-\)axis point about tangents
shows three examples, with the ratio \(r\) of the angular velocities being equal to 1/3, -1 or 2 (or, equivalently, 1/2). In the last case, the obtained curve is a strophoid. Note the different numbers of components. These are screenshots from a GeoGebra applet10; exploration is made possible by the definition of several sliders.
Footnote 10: [https://www.geogebra.org/f/kwzvga3rpn](https://www.geogebra.org/f/kwzvga3rpn)
Catalogues of plane curves show them as separate objects. The slider provides continuous changes in the values of the parameter, leading to the discovery that several seemingly different curves actually belong to a single family. The appearance of a straight component requires more theoretical development.
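The locus can also be produced directly from the two revolving lines. A possible Maple sketch, with the poles chosen by us at \((0,0)\) and \((1,0)\):

sol := solve({y = tan(u)*x, y = tan(r*u)*(x - 1)}, {x, y}):  # lines through the poles, at angles u and r*u
Px := eval(x, sol): Py := eval(y, sol):
plot([eval(Px, r = 2), eval(Py, r = 2), u = 0 .. Pi], numpoints = 2000, view = [-3 .. 3, -3 .. 3]);  # r = 2: the strophoid case of Figure 9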
### Envelopes
There exist 4 different definitions of an envelope; see [18] (Chap. 5). Kock [50] gives three of them, calling them respectively synthetic, impredicative and analytic. We explore the examples according to the different definitions and see how automated methods are applicable. More advanced examples may be found in [40, 27, 28]; dedicated databases also provide numerous examples.
**Definition 2**: _[Synthetic] Let \(\mathcal{C}_{u}\) be a family of real plane curves depending on a real parameter \(u\). The envelope \(\mathcal{E}\) is the union of the characteristic points \(M_{u}\), where the characteristic point \(M_{u}\) is the limit point of the intersections \(\mathcal{C}_{u}\cap\mathcal{C}_{u+h}\) as \(h\to 0\). In other words, the envelope is the set of limit points of intersections of nearby curves \(\mathcal{C}_{u}\)._
**Definition 3**: _[Impredicative] The envelope \(\mathcal{E}\) is a curve such that at each of its points, it is tangent to a unique curve from the given family. The locus of points where \(\mathcal{E}\) touches \(\mathcal{C}_{u}\) is called the \(\mathcal{E}-\)characteristic point \(M_{u}\)._
**Definition 4**: _[Analytic] Suppose that the family of curves \(\mathcal{C}_{u}\) is given by an equation \(F(x,y,u)=0\) (where \(u\) is a real parameter and \(F\) is differentiable with respect to \(u\)), then an envelope \(\mathcal{E}\) is determined by the solution of the system of equations:_
\[\begin{cases}F(x,y,u)=0\\ \frac{\partial F}{\partial u}(x,y,u)=0\end{cases} \tag{3}\]
In Definition 4, the envelope is described as the projection onto the \((x,y)\)-plane of the points, in the 3-dimensional \((x,y,u)\)-space, belonging to the surface with equation \(F(x,y,u)=0\) and having tangent plane
Figure 9: sectrices of Maclaurin
parallel to the \(u\)-axis (or being singular points and, thus, not having a tangent plane, properly speaking). See [19], p.102. Note that the analytic Definition 4 is the only one given by Berger [8] (sections 9.6.7 and 14.6.1) and by Rovenski [67]. The latter book gives details on how to work out envelopes using Maple.
**Remark 5**: _We chose to denote the parameter by \(u\), as in GeoGebra \(t\) has a special role._
#### 4.3.1 An envelope of a family of lines
We consider the family of lines given by the equation \(F(x,y,u)=0\), where \(F(x,y,u)=x+uy+u^{2}\). Such a family has been studied with Derive in [36]; we study it here in order to show the usage of automated commands, where possible. Figure 10(a) shows a first exploration, using the slider and Trace On11. The speed of the mouse on the slider determines the density of the lines in the output. Not plotting too many lines retains the visual impression of a family of lines, rather than a fully colored area.
Footnote 11: [https://www.geogebra.org/f/qt5pdbah42](https://www.geogebra.org/f/qt5pdbah42). Note that the increment for the slider \(a\) has to be put to 0.05 in order to have an accurate plot.
Figure 10 shows a GeoGebra session using its embedded CAS to determine characteristic points, according to Definition 2. These points have coordinates \(M_{u}=(u^{2}+u\varepsilon,-2u-\varepsilon)\). We have: \(\underset{\varepsilon\to 0}{\lim}M_{u+\varepsilon}=(u^{2},-2u)\). This is a parametric presentation of the parabola whose equation is \(y^{2}=4x\). Figure 10(b) models two neighboring lines \(L_{u}\) and \(L_{u+\varepsilon}\). Their intersection is given by:
\[\begin{cases}x+uy+u^{2}=0\\ x+(u+\varepsilon)y+(u+\varepsilon)^{2}=0\end{cases} \tag{4}\]
The obtained parabola is plotted in Figure 10(c).
Applying this method in the general case shows that Definition 2 implies Definition 4; see [19, 41]. GeoGebra has a command for computing envelopes, but its syntax does not fit the above problem. The command has syntax Envelope(\(<\)Path\(>\),\(<\)Point\(>\)); this means that the command has been constructed to determine the envelope of a family of curves (the Path) depending geometrically on a tracer (the Point). It does not fit a purely algebraic setting, as in the example above.
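For such an algebraic setting, the analytic Definition 4 can be applied directly in a CAS. A minimal Maple check (our own two-line code):

F := x + u*y + u^2:
solve({F = 0, diff(F, u) = 0}, {x, y});  # returns {x = u^2, y = -2*u}, a parametrization of the parabola y^2 = 4x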
#### 4.3.2 A nephroid as an envelope.
Let \(\mathcal{U}\) be the unit circle centered at the origin. Consider a point \(A\) on \(\mathcal{U}\) and a circle \(\mathcal{C}_{A}\) centered at \(A\) and tangent to the \(y-\)axis. If it exists, denote by \(\mathcal{E}\) the envelope of this family of circles. Figure 11(a)
Figure 10: Exploration of a family of lines
shows a screenshot of a first experimentation with GeoGebra12, using Trace On for the circles.
Footnote 12: [https://www.geogebra.org/m/fxjgmkqu](https://www.geogebra.org/m/fxjgmkqu)
Figure 11(b) shows a screenshot of the applet after usage of the Envelope command. The construction is as follows:
* Draw a line through \(A\) perpendicular to the \(y-\)axis;
* Determine the point of intersection of this line with the \(y-\)axis, denoted by \(H\) (this may be different with every new experimentation);
* Plot a circle whose center is \(A\) and which passes through \(H\).
The Envelope command is effective. The output has two components:
* A curve plotted in the geometric window (the dotted curve of the figure);
* An implicit equation in the algebraic window.
Note that the equation is of degree 7. Actually the polynomial is reducible and can be written as follows:
\[x(4x^{6}+12x^{4}y^{2}-12x^{4}+12x^{2}y^{4}-24x^{2}y^{2}-15x^{2}+4y^{6}-12y^{4}+ 12y^{2}-4)=0 \tag{5}\]
This means that the result is the union of the \(y-\)axis (given by the vanishing of the first factor) and a sextic. A quick web search yields that this sextic is a nephroid. Note that, if the Mathcurve site ([https://mathcurve.com/courbes2d.gb/nephroid/nephroid.shtml](https://mathcurve.com/courbes2d.gb/nephroid/nephroid.shtml)) is consulted, the equation given there has to be modified, because of the respective roles of the coordinate axes. It should be observed that the \(y-\)axis is "too big". It corresponds to the factor \(x\) in the polynomial of degree 7 (Equation (5)) obtained in the algebraic window, but the geometric data indicates that only a segment of the axis is relevant. The superfluous parts are a consequence of the algebraic computations, which determine a closure in the Zariski topology.
The question is now: can we improve the automated work in order to obtain the "true" answer? The answer is yes, as we show now.
Consider the following parametric presentation for the unit circle \((x,y)=(\cos u,\sin u),\;u\in[0,2\pi]\). Then a generic equation for the circles is
\[(x-\cos(u))^{2}+(y-\sin u)^{2}-\cos^{2}u=0. \tag{6}\]
Figure 11: Exploration of a nephroid as envelope of circles
Denote by \(F(x,y,u)\) the left-hand side of Equation (6). In this case, the system of equations of Definition 4 reads as follows:
\[\begin{cases}(x-\cos u)^{2}+(y-\sin u)^{2}-\cos^{2}u=0\\ 2x\sin u-2y\cos u+\sin 2u=0\end{cases} \tag{7}\]
Solving the system with Maple, we obtain the following output:
{x = 0, y = sin(u)}, {x = 2*cos(u)^3, y = -2*sin(u)^3 + 3*sin(u)}
The first component describes a segment on the \(y-\)axis, which matches the geometric observation exactly. The second component is a parametric presentation of the nephroid. Figure 12 shows two screenshots of an animation performed by Maple.
For the reader's sake, we include here the Maple code. Note that the axes have been defined explicitly in order to have a more readable plot.
with(plots): # needed for implicitplot, animate and display
F := (x - cos(u))^2 + (y - sin(u))^2 - cos(u)^2;
derF := diff(F, u);
expand(derF); # 2*x*sin(u) - 2*y*cos(u) + 2*sin(u)*cos(u), cf. Equation (7)
solve({F = 0, derF = 0}, {x, y}); # the two solution branches quoted above
neph := plot({[0, sin(u), u = 0 .. 2*Pi], [2*cos(u)^3, -2*sin(u)^3 + 3*sin(u), u = 0 .. 2*Pi]}, scaling = constrained, thickness = 4, color = navy);
axes := implicitplot({x = 0, y = 0}, x = -3 .. 3, y = -2 .. 2);
circles := animate(implicitplot, [F = 0, x = -3 .. 3, y = -2 .. 2], u = 0 .. 2*Pi, thickness = 2);
display(axes, neph, circles);
From this point, the derivation of an implicit equation requires applying the methods described in [28, 32, 37, 36]: transforming the trigonometric functions into rational expressions, then transforming the data into polynomials which generate an ideal in a polynomial ring, and finally utilising elimination algorithms.
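A possible Maple sketch of this pipeline for the nephroid uses the rational substitution \(t=\tan(u/2)\) and a resultant to eliminate the parameter (this is our own illustrative code; the sextic of Equation (5) should appear among the factors, possibly together with extraneous factors introduced by the substitution):

X := 2*((1 - t^2)/(1 + t^2))^3:  # cos(u) rewritten with t = tan(u/2)
Y := -2*(2*t/(1 + t^2))^3 + 3*(2*t/(1 + t^2)):  # sin(u) rewritten likewise
p1 := numer(normal(x - X)): p2 := numer(normal(y - Y)):  # clear denominators: polynomials in x, y, t
factor(resultant(p1, p2, t));  # eliminating t yields an implicit equation in x and y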
### Isoptics of plane curves.
Let \(\mathcal{C}\) be a plane curve and \(\theta\) a given angle. The \(\theta-\)isoptic curve of \(\mathcal{C}\) is the geometric locus of the points \(M\) in the plane through which passes a pair of tangents making an angle equal to \(\theta\). For conics and \(\theta=90^{o}\), the isoptic curves are called orthoptics and have been known for a long time. These are the directrix of a parabola, the director circle of an ellipse, and the director circle of a hyperbola (when it exists,
Figure 12: Plots of the circles together with the envelope, with Maple
depending on the angle between the asymptotes). In Figure 13(a), the isoptics of an ellipse are displayed for two complementary angles. Figure 13 shows the orthoptic curve (i.e. the isoptic for right angles) of a Fermat curve of degree 6.
For a general \(\theta\), isoptic curves of conics and of Fermat curves have been studied, for example, in [22, 56, 35, 41, 37]. The study relies strongly on the usage of software, but not on isoptics-dedicated commands. The main tools are solvers for non-linear equations and Grobner packages.
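For the ellipse and right angles, the classical computation fits in a couple of Maple lines. A sketch of this standard derivation (the variable names \(p,q\) for the external point are ours):

T := collect(expand((q - m*p)^2 - a^2*m^2 - b^2), m):  # tangency condition c^2 = a^2*m^2 + b^2 with c = q - m*p: a quadratic in the slope m
coeff(T, m, 0) + coeff(T, m, 2);  # perpendicular tangents: product of the slopes is -1; returns p^2 + q^2 - a^2 - b^2, the director circle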
In these works, plots of isoptics are provided, but as a consequence of algebraic computations and on a one-by-one basis. A first step towards automated exploration has been made by developing tools for automated coloring [31, 34]. These tools provide a visualization of several isoptics together. The variations of the parameter are translated into coloring differences. Confirmation is obtained a posteriori, using GeoGebra's dragging of a point on the given curve13.
Footnote 13: The interested reader can try the applets [https://www.geogebra.org/m/a2zpetsc](https://www.geogebra.org/m/a2zpetsc) for isoptics of an ellipse, [https://www.geogebra.org/m/kvbnpzt3](https://www.geogebra.org/m/kvbnpzt3) for inner isoptics of an ellipse, and [https://www.geogebra.org/m/yjgsbpbz](https://www.geogebra.org/m/yjgsbpbz) for isoptics of a Fermat curve. Other applets are also available on the site.
In every case, the availability of the automated Locus command enabled plotting the curve immediately. Moreover, as in other situations, a slider (or several) offers the possibility to explore not only one curve at a time, but a family depending continuously on one (or more) parameter. Nevertheless, this command does not provide an implicit equation of the curve. The derivation of an implicit equation requires algebraic machinery. Here a command to solve equations is needed; as mentioned previously, such a command is standard in any CAS. Of course, it has to contain pattern recognition, as different problems lead to different algebraic settings (polynomial equations, trigonometric equations, etc.).
The reward of such activities is multiple:
* Work is interactive, i.e. it is performed as a dialog between man and machine. This is a nice opportunity to experience the mutual influence between the user and the software, and the teacher can observe different processes of instrumental genesis among the students; see [72, 3, 4].
* Generally, the catalogues of curves present them as a discrete set of objects. Automated exploration often reveals that different curves actually belong to one larger family, the various curves corresponding to different values of the parameters. The author described such a situation in [28], where machine-and-machine communication is crucial.
Figure 13: Isoptic curves
* These activities can stand by themselves, but can also serve as an invitation to broaden horizons. When working in class, a couple of students reacted with a great "waooo!" when the curves appeared.
A central issue for the determination of isoptics and envelopes is solving systems of non-linear equations. The result consists in parametric representations of plane curves. These parametric equations are rarely rational. Computational work may sometimes be performed in order to transform the expressions into rational ones and afterwards to obtain polynomials. If possible, Grobner methods (such as Elimination) can then be applied to implicitize the parametric presentation. This is not always possible. Even when polynomials can be obtained, Elimination may be too time-consuming or too memory-consuming for the CAS to answer; we experienced this when preparing [52]: the computations were launched on two different computers with different characteristics, and neither gave an answer.
## 5 Conclusions and some thoughts for next steps
We presented activities around automated exploration, discovery and proof using GeoGebra and Maple; actually, a few drops in a vast ocean. First, note that we used the current version of GeoGebra and the companion package called GeoGebra-Discovery, developed by Z. Kovacs. It contains several automated commands, sometimes specific to the package, sometimes extensions of commands existing in GeoGebra. For example, the Relation command exists in GeoGebra and is numerical. The GeoGebra-Discovery version extends it with symbolic algorithms; we showed an example in subsection 1.4. Other systems are also available for the same kind of tasks; for example, see [12] on the usage of Sage for automated work. An important decision to be made by the educator is the choice of the software to be used. This choice has multiple faces. First of all, it depends on which systems are available at the teacher's institution. This is a consequence of pedagogical choices, but no less of financial decisions by the administrators. It also depends on the teacher's literacy with regard to the different available packages. Of course, not every CAS or DGS is suitable for every level of students. Button-driven software may be easier to use than software which requires mastering the syntax of commands. Moreover, button-driven commands are often accompanied by a description of the command when right-clicking on the button, a very helpful feature. The existence of an interactive website with examples and tutorials is a strong help for the student.
The level of the students' background is crucial. Automated methods are not intended to be used as a black box, but are intended to incite the user to develop new approaches and to achieve a more profound insight into, and understanding of, the questions under study. The _black box - white box_ issue has been analyzed, for example, in [47, 48]. Sometimes the CAS helps to bypass a lack of theoretical knowledge, but then it is important to "go back" and have the students fill the gap and understand what was hidden in the activities and the software usage [26]. Critical Thinking and Creativity have to be at work together. Before presenting the new methods, the teacher has to be informed of the actual level of the students, and will then be able to construct a curriculum adapted to the exploration of their Zone of Proximal Development (ZPD); see [74]. We recall that "the ZPD refers to the learner's ability to successfully complete tasks with the assistance of more capable other people, and for this reason it is often discussed in relation to assisted or scaffolded learning. The creation of ZPDs involves assistance with the cognitive structuring of learning tasks and sensitivity to the learner's current capabilities" (Walker, [75]). The author of this survey insists that his students work in pairs. The communication between them and their subsequent collaboration reinforce the benefits they receive from the scaffolding by the teacher. The student's "waooo!" mentioned previously was such a consequence, and the mathematical
discovery made on that occasion was an incitement to create something new. That has been done.
The versatility of the exploratory tools offered by a DGS and a CAS enables different students in the same class to experience different ways of solving the same problem. For STEAM-oriented students, this opens numerous opportunities to create models, animations, etc. We illustrated that in subsection 4.3. The scaffolding offered by the teacher therefore has to be more personally adapted, and the exploration of the student's ZPD becomes more and more personal. Some kind of joint ZPD (for 2 students) is also created. Smartphones, and before them walkmen (who remembers them?) and other electronic devices, created some disconnection between humans. Here, the fact that students are free to experiment and explore with their personal device, and then share their discoveries with their classmates, makes the class experience richer, more meaningful and more interesting. Outdoor activities also contribute to attracting students to learn more mathematics, as these are based on their everyday environment and their cultural background. The author teaches practical courses on "technology in Mathematics education" aimed either at pre-service or in-service teachers. The technological discourse always has to be adapted according to the mathematical level, the cultural background and other characteristics of the students. Artigue [4] also emphasizes that the new technological knowledge is an integral part of the new mathematical knowledge acquired by the students. New topics or renewed topics can be proposed and explored at an earlier stage of the student's cursus than in the past, whence a need to analyse new didactic situations and didactic transpositions [17, 18]. All this is part of the new paradigms evoked by P. Quaresma [62] (v.s. Section 1). It is not surprising that the instrumental genesis (see [73, 4, 5], etc.) of every single student and of the class as a whole is totally different every year. Of course, this is relevant also for pre-service engineers.
Finally, we should once again emphasize that automated methods for exploration, discovery and proof are a new approach aimed at developing new skills and at emphasizing a new kind of understanding. New developments are constantly being made. Quoting once again P. Quaresma (op.cit.): "Geometric reasoning with such computer applications is one of the most attractive challenges for future accumulation and dissemination of knowledge." For example, recent works are aimed at the study of inequalities [63] and their plots in the plane. Something was already available with Derive, but modern developments are richer. These kinds of developments, together with the personal teaching-learning processes, deserve the study of a new kind of instrumental genesis. From another point of view, with new achievements for an efficient and fruitful automated dialog between different kinds of software, new CAS-DGS pairs (either distinct or embedded one in the other) provide a new artifact which has to be transformed into an instrument [4, 73]. A new loop in a cognitive-educative spiral.
|
2303.08729 | DACOS-A Manually Annotated Dataset of Code Smells | Researchers apply machine-learning techniques for code smell detection to
counter the subjectivity of many code smells. Such approaches need a large,
manually annotated dataset for training and benchmarking. Existing literature
offers a few datasets; however, they are small in size and, more importantly,
do not focus on the subjective code snippets. In this paper, we present DACOS,
a manually annotated dataset containing 10,267 annotations for 5,192 code
snippets. The dataset targets three kinds of code smells at different
granularity: multifaceted abstraction, complex method, and long parameter list.
The dataset is created in two phases. The first phase helps us identify the
code snippets that are potentially subjective by determining the thresholds of
metrics used to detect a smell. The second phase collects annotations for
potentially subjective snippets. We also offer an extended dataset DACOSX that
includes definitely benign and definitely smelly snippets by using the
thresholds identified in the first phase. We have developed TagMan, a web
application to help annotators view and mark the snippets one-by-one and record
the provided annotations. We make the datasets and the web application
accessible publicly. This dataset will help researchers working on smell
detection techniques to build relevant and context-aware machine-learning
models. | Himesh Nandani, Mootez Saad, Tushar Sharma | 2023-03-15T16:13:40Z | http://arxiv.org/abs/2303.08729v1 | # DACOS--A Manually Annotated Dataset of Code Smells
###### Abstract
Researchers apply machine-learning techniques for code smell detection to counter the subjectivity of many code smells. Such approaches need a large, manually annotated dataset for training and benchmarking. Existing literature offers a few datasets; however, they are small in size and, more importantly, do not focus on the subjective code snippets. In this paper, we present DACOS, a manually annotated dataset containing \(10,267\) annotations for \(5,192\) code snippets. The dataset targets three kinds of code smells at different granularity--_multifaceted abstraction, complex method,_ and _long parameter list_. The dataset is created in two phases. The first phase helps us identify the code snippets that are potentially subjective by determining the thresholds of metrics used to detect a smell. The second phase collects annotations for potentially subjective snippets. We also offer an extended dataset DACOSX that includes definitely benign and definitely smelly snippets by using the thresholds identified in the first phase. We have developed Tagman, a web application to help annotators view and mark the snippets one-by-one and record the provided annotations. We make the datasets and the web application accessible publicly. This dataset will help researchers working on smell detection techniques to build relevant and context-aware machine-learning models.
## I Introduction
Code smells are symptoms of poor design and implementation [1]. Existing literature shows that code smells have a negative impact on maintainability [2, 3], development effort [4, 5], and reliability [6, 7, 8, 9] among other quality attributes. Given its importance, the software engineering community has put significant effort to study various dimensions, such as their characteristics, impact, causes, and detection mechanisms, related to code smells [10].
Many code smells are subjective in nature [10], _i.e.,_ a snippet may exhibit a smell in one context, but a similar snippet may not be considered smelly in another context. _Context_ includes the programming language used, the experience of the development team, and the quality-related practices followed in an organization. A simple illustration of smells' subjectivity is a method with, say, \(80\) lines of code. Based on the context, it could be a _large method_ for some developers; others might not classify it as a large method. However, a method with \(500\) lines of code will _definitely_ be a large method for all developers.
Currently, the majority of commonly used tools rely on metrics and heuristics to identify code smells [10]. It is often argued that, due to the subjective nature of smells, one cannot come up with universally accepted metric thresholds to classify a snippet as smelly or benign in all contexts [10, 11]. To overcome the challenge introduced by the subjective nature of smells, researchers propose smell detection using machine-learning techniques [11, 12, 13, 14, 15, 16]. Such approaches rely on a code smells dataset, ideally manually annotated, to train a machine-learning model. However, existing datasets fall short on multiple fronts. First, the literature offers only a handful of datasets, such as Landfill [17]. Second, existing datasets contain a small number of annotated samples; for example, Landfill offers annotations for only \(243\) snippets. A dataset with a small number of annotated samples is of little help in training state-of-the-art deep-learning models with reasonable accuracy. Next, existing code smells datasets do not filter out code snippets that are definitely benign or smelly. For example, a snippet with very few (say, three) lines of code cannot have a _long method_; similarly, a snippet with a very large (_e.g.,_\(200\)) number of lines of code definitely suffers from a _long method_ smell. Given that the _value_, in terms of effectiveness, of a smell dataset lies in the captured subjectivity, such definite snippets, either definitely benign or definitely smelly, reduce the efficacy offered by a dataset. Lastly, the available support for different types of smells is limited; for example, Landfill offers annotated snippets for five types of smells. Given the huge amount of effort involved in annotating code snippets, the software engineering community needs to complement existing smell datasets for other kinds of actively researched smells.
In this paper, we offer a manually annotated dataset of code smells, _viz._ the _DAtaset of COde Smells_ (DACOS). To create an effective dataset, we filtered the code snippets that are likely to be subjective by removing the snippets that are either definitely benign or definitely smelly. This approach helps us better utilize the annotators' effort by considering their inputs where we actually need them. The dataset offers annotated code snippets for three code smells--_multifaceted abstraction_[18, 19], _complex method_[20], and _long parameter list_[1]. In addition to a manually annotated dataset of potentially subjective snippets, we offer the DACOSX dataset containing a large number of snippets that are either definitely benign or definitely smelly. Furthermore, we developed a web application, _viz._ Tagman, to make it easy for annotators to see one snippet at a time and indicate whether a smell is present in the snippet. We have made the source code of Tagman1 available publicly.
Footnote 1: [https://github.com/SMART-Dal/Tagman](https://github.com/SMART-Dal/Tagman)
We make the following contributions to the state of the art.
* We offer \(\mathtt{Dacos}\), a manually annotated code smell dataset, containing \(10,267\) annotations for \(5,192\) code snippets for the considered code smells.
* We also provide \(\mathtt{DacosX}\), an extended dataset containing \(207,605\) snippets that are either definitely benign or smelly. These datasets will help researchers in the field to train and validate their machine-learning models.
* A configurable web application \(\mathtt{Tagman}\) for easy smell annotations. The community may use the application for similar code annotation purposes.
## II Dataset construction
Figure 1 provides an overview of the dataset construction process. We elaborate on the steps in detail below.
### _Downloading repositories_
In step 1 from Figure 1, we perform the following tasks to identify and download repositories.
* We use the searchgithubrepo[21] Python package, which in turn uses the GitHub GraphQL api[22], to filter GitHub repositories.
* To identify high quality Java repositories, we select repositories with at least \(13\) thousand stars and more than ten thousand lines of code.
* Also, we discard the repositories that have not been modified in the last year.
* In addition, we use QScored [23] to filter out repositories based on their code quality score. QScored assigns a weighted quality score based on the detected smells at various granularities. We select repositories that have a quality score less than ten (the higher the score, the poorer the quality).
* Finally, we obtained ten repositories after applying the filtering criteria, and we downloaded the selected repositories. A minimal sketch of this filtering logic is shown after this list.
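The selection criteria above can be condensed into a small predicate. The sketch below is illustrative only: the record fields (`stars`, `loc`, `last_modified`, `qscored_score`) and the sample records are assumed stand-ins for whatever the actual scripts obtain from the GitHub GraphQL api and QScored.

```python
from datetime import datetime, timedelta

# Illustrative candidate records; in the real pipeline these come from
# the searchgithubrepo package (GitHub GraphQL API) and QScored.
candidates = [
    {"name": "org/big-java-repo", "stars": 38_000, "loc": 250_000,
     "last_modified": datetime(2022, 12, 1), "qscored_score": 6.2},
    {"name": "org/small-toy-repo", "stars": 900, "loc": 3_000,
     "last_modified": datetime(2019, 5, 1), "qscored_score": 14.0},
]

def is_candidate(repo, now=datetime(2023, 1, 15)):
    """Filtering criteria from Section II-A (field names are assumed)."""
    return (repo["stars"] >= 13_000                                 # popularity
            and repo["loc"] > 10_000                                # size
            and now - repo["last_modified"] <= timedelta(days=365)  # recent activity
            and repo["qscored_score"] < 10)                         # quality (lower is better)

print([r["name"] for r in candidates if is_candidate(r)])  # ['org/big-java-repo']
```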
### _Dividing the repositories into classes and methods_
We need to split a repository into individual methods and classes so that \(\mathtt{Tagman}\) can show individual snippets one by one to an annotator. We use \(\mathtt{CodeSplitJava}\)[24] in step 2 to split each repository into individual methods and classes.
### _Analyzing repositories_
In step 3, we employ a metrics-based filtering process in phase-2 of the manual annotation. We use \(\mathtt{DesigniteJava}\)[25] to compute code quality metrics. \(\mathtt{DesigniteJava}\) computes a variety of code quality metrics and detects smells; it has been used in various studies [26, 27, 28, 29, 30]. We elaborate on the process to filter out non-subjective samples in the manual annotation step.
### _Tagman_
\(\mathtt{Tagman}\), shown as 1 in Figure 1, is a web-based tool that we created to facilitate the annotation process. Figure 2 shows a screenshot of the application showing a code snippet and an option to annotate the snippet with a smell. The front-end of \(\mathtt{Tagman}\) is written in _Thymeleaf_ and \(\mathtt{html/css}\). The back-end of the application is developed in _SpringBootJava_ and the data is stored in a MySQL database. Figure 3 shows the schema of the database.
At the beginning of the code smell annotation cycle, we upload a csv file containing the repository names and url of selected GitHub repositories. \(\mathtt{Tagman}\) back-end uses a set of Python scripts2 to download GitHub repositories, split the code into class and method files, and run \(\mathtt{DesigniteJava}\).
Footnote 2: [https://github.com/SMART-Dal/Tagman-scripts](https://github.com/SMART-Dal/Tagman-scripts)
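The data-import step described above can be pictured as a small driver script. This is a hedged sketch only: the actual logic lives in the Tagman-scripts repository, and the CodeSplitJava/DesigniteJava command lines shown here are placeholders, not their documented interfaces.

```python
import csv
import pathlib
import subprocess

def import_repositories(csv_path, workdir="repos"):
    """Clone each repository listed in the uploaded CSV, then split it into
    method/class files and compute metrics. The two java invocations are
    placeholders; consult the Tagman-scripts repository for the real usage."""
    pathlib.Path(workdir).mkdir(exist_ok=True)
    with open(csv_path, newline="") as f:
        for name, url in csv.reader(f):
            dest = pathlib.Path(workdir) / name.replace("/", "__")
            subprocess.run(["git", "clone", "--depth", "1", url, str(dest)], check=True)
            # Placeholder tool invocations (splitting, then metrics computation).
            subprocess.run(["java", "-jar", "CodeSplitJava.jar", str(dest)], check=True)
            subprocess.run(["java", "-jar", "DesigniteJava.jar", "-i", str(dest)], check=True)
```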
Once the data import is completed, the tool is ready to start accepting annotations. To start an annotation session, a user login (or sign up) to the application. Then, the user is presented with instructions including the definitions of code smells. The user can then start annotating the presented code snippets.
### _Manual annotation_
We employ a snippet selection mechanism to identify potentially subjective snippets _w.r.t._ a code smell. We do so to improve the effectiveness of the resultant dataset by only including manual annotations for potentially subjective code
Fig. 1: Dataset construction process
snippets. Also, such a strategy helps us better utilize the available annotators' time. A potentially subjective snippet is a code snippet that may be classified as benign or smelly depending on the context. The rest of the snippets, which are not identified as potentially subjective, are either definitely benign or definitely smelly. For example, _cyclomatic complexity_ (cc) [31] is commonly used to detect the _complex method_ smell. A code snippet is definitely benign if cc is very low (_e.g.,_ cc=1); similarly, a snippet is definitely smelly if cc is very high for a method (_e.g.,_ cc=\(30\)). We divide our annotation process into two phases. In the first phase, we show all snippets, _i.e.,_ without any filtering, to annotators to identify metrics thresholds to determine whether a snippet is potentially subjective or not. The second phase uses the identified metrics thresholds and shows the filtered code snippets to annotators.
#### III-B1 Phase-1
In the first phase, we show code snippets without any filtering to annotators. Tagman presents one snippet at a time to the annotators and collects their responses on whether the shown snippet has a code smell or not. We show each code snippet to two randomly chosen annotators and record their responses. The annotators recruited, on a volunteer basis, for this phase were graduate students of Computer Science enrolled in a software engineering course (during summer 2022) that covers code smells extensively. A total of \(110\) annotators participated in this phase.
Based on the responses collected in Phase-1, we compute the minimum and maximum thresholds for the metrics that are used to decide the presence of a code smell. We received a total of \(17,869\) responses in this phase. For each smell individually, we compute the lowest metric value (t\({}_{l}\)) at which the smell is identified, to obtain the threshold on the lower side. Similarly, we extract the highest metric value (t\({}_{h}\)) at which the smell is _not_ identified. Then, we compute the standard deviation (sd) of the metric value for the samples where the smell is identified. Finally, we obtain the low threshold as max(m\({}_{l}\), t\({}_{l}\) - sd) and the high threshold as min(m\({}_{h}\), t\({}_{h}\) + sd) for subjective snippet identification. Here, m\({}_{l}\) and m\({}_{h}\) represent the lowest and highest possible values of the metric. Table I summarizes the quality metrics corresponding to each code smell and their thresholds for identifying subjective snippets. For instance, for the _cyclomatic complexity_ metric, we obtain \(4\) and \(7\) from the above calculation after rounding the values to the nearest integer.
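The threshold computation can be made concrete in a few lines of Python. This is a minimal sketch under the assumption that each phase-1 response is a (metric value, is-smelly) pair for one smell; the helper name and the example values are hypothetical.

```python
import statistics

def subjectivity_thresholds(responses, m_low, m_high):
    """Compute the (low, high) band of potentially subjective metric values.

    responses: iterable of (metric_value, is_smelly) pairs for one smell.
    m_low, m_high: lowest and highest possible values of the metric.
    """
    smelly = [v for v, s in responses if s]
    benign = [v for v, s in responses if not s]
    t_l = min(smelly)              # lowest value marked smelly
    t_h = max(benign)              # highest value marked benign
    sd = statistics.stdev(smelly)  # spread of the smelly samples
    return max(m_low, t_l - sd), min(m_high, t_h + sd)

# Made-up cyclomatic-complexity annotations, for illustration only.
cc_responses = [(2, False), (3, False), (5, True), (6, False), (8, True), (9, True)]
print(subjectivity_thresholds(cc_responses, m_low=1, m_high=float("inf")))
```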
#### III-B2 Phase-2
We configure our filtering mechanism based on the thresholds obtained from Phase-1 and invite annotators by advertising the link to the Tagman installation on social media platforms such as Twitter and LinkedIn. The invitation was open to all software developers, software engineering students, and researchers who understand the Java programming language and at least basic object-oriented concepts. We kept the invitation open for six weeks during Dec-Jan 2022-23. A total of \(82\) annotators participated in this phase. Tagman showed snippets whose metric values fall between the low and high thresholds (inclusive). We configured Tagman to collect two annotations for each sample to improve the reliability of the annotations.
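In phase-2, only the middle band of the resulting three-way split reaches the annotators; the two outer bands are what populate DacosX. A minimal sketch, assuming a single metric per smell and using the _complex method_ thresholds mentioned above:

```python
def classify(metric_value, low, high):
    """Three-way split used in phase-2 (thresholds are inclusive)."""
    if metric_value < low:
        return "definitely-benign"    # goes to DACOSX
    if metric_value > high:
        return "definitely-smelly"    # goes to DACOSX
    return "potentially-subjective"   # shown to two annotators in Tagman

# Complex method smell, cyclomatic complexity thresholds 4 and 7 (Table I).
for cc in (2, 5, 30):
    print(cc, classify(cc, low=4, high=7))
```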
### _Dataset information_
After phase-2, we received a total of \(10,267\) annotations for \(5,192\) samples from \(86\) annotators. Table II presents the number of annotations and samples per smell type.
_Dataset availability:_ The datasets offered in this paper are available online [32]. Also, the repositories containing the scripts2 used to prepare the code snippets, as well as the code annotation application, _i.e.,_ Tagman, are available online.
Fig. 3: Schema of the Dacos database
Fig. 2: Annotation user interface of Tagman
## III Potential research applications
* **Detecting and validating code smells:** Given the subjective nature of smells, traditional smell detection tools that implement rules and heuristics to identify smells perform poorly. The success of a machine-learning approach to detect smells depends on the availability of a large manually annotated dataset. The presented datasets, Dacos and DacosX, complement existing datasets by offering a large number of subjective code snippets for three code smells that the existing datasets do not cover.
* **Correlating code smells with software engineering aspects:** A variety of exploratory and empirical studies investigating the impact of code smells exists. It includes bug prediction [33], maintainability prediction [34], and maintenance effort [35]. In addition to existing directions, the tools trained or fine-tuned from the offered datasets can be used to effectively establish a relationship between code smells and productivity of a software development team.
* **Extending Tagman for code annotation:** The code annotation application that we developed to create this dataset can be extended for similar kinds of code annotation, for example, to spot vulnerable code and to segregate well-written identifiers.
## IV Related datasets
Software engineering literature offers a small number of manually annotated datasets for code smells. Palomba _et al._[36] offered a dataset, Landfill, containing annotations for five types of code smells--_divergent change, shotgun surgery, parallel inheritance, blob,_ and _feature envy_. They offered annotations for \(243\) snippets. They also developed an online portal where contributors can annotate code for smells; however, as of writing this paper (Jan 2023), the portal is not accessible. Madeyski _et al._[37] proposed mlcq--a manually annotated code smell dataset. The dataset contains \(14.7\) thousand annotations for \(4,770\) samples. The dataset considered four smells--_blob, data class, long method_, and _feature envy_. Neither of the datasets mentioned above considers the subjectiveness of a code snippet; hence most of the snippets might not add any new information for a machine-learning classifier when used in training. Also, we chose code smells that are not covered by any existing code smell dataset and hence complement the existing datasets. There are some code smells datasets, such as QScored [23]. Though the QScored dataset is large, the samples are not manually annotated and hence lack the required capturing of context.
## V Threats to validity
_Internal validity_ threats concern the ability to draw conclusions from our experimental results. In phase-2 of the manual annotation, we invited volunteers with at least a basic understanding of the Java programming language and object-orientation concepts. We advertised the invitation on professional social media channels (Twitter and LinkedIn). Given the anonymity of the exercise, we do not have any mechanism to verify the assumption that the participants had sufficient knowledge to attempt the annotations. However, we offered to include all major participants (with at least 50 annotations) as contributors to the dataset; we perceive that such a measure would have motivated the annotators to perform the annotations to the best of their abilities. Additionally, we configured Tagman to obtain two annotations per sample so that we can reduce the likelihood of a random annotation.
_External threats_ are concerned with the ability to generalize our results. The proposed dataset is for snippets written in Java. However, our code annotation tool is generic and it can be used to annotate snippets from any programming language. Additionally, scripts used to generate individual snippets can be customized to use any other external tool for splitting the code into methods and classes. Furthermore, the thresholds used in the annotation process to filter snippets based on low and high thresholds of a metric are configurable.
## VI Limitations, conclusions, and future work
We offer Dacos--a manually annotated code smell dataset containing \(10,267\) annotations for \(5,192\) subjective code snippets. We also provide the larger DacosX dataset containing the definitely benign and definitely smelly snippets, in addition to those present in Dacos. The paper also offers Tagman, a code annotation application that can be reused in similar contexts.
The proposed dataset covers three code smells. We selected a rather small set of code smells to consider in the dataset because it is better to have a larger number of annotations per smell than a small number of annotations spread across many smells. Also, we chose a set of smells that are not covered by existing code smells datasets. We configured Tagman to obtain two annotations per sample. Though this improves the reliability of the dataset, one may argue that it may introduce situations where the annotations contradict each other. We can mitigate this limitation by increasing the number of annotations per sample to three; we will incorporate this mechanism in a future version of the datasets. Additionally, we would like to expand the scope of the dataset in terms of programming language, number of samples, and number of supported smells in the future.
2310.08611 | Energy estimates for the Einstein-Yang-Mills fields and applications | We prove exterior energy estimates for tensorial non-linear wave equations,
where the background metric is a perturbation of the Minkowski space-time, and
where the derivatives are the Minkowski covariant derivatives. We obtain bounds
in the exterior region of the Minkowski space-time, for the weighted $L^2$ norm
on each component, separately, of the covariant derivative of the tensorial
solutions, and we also control a space-time integral in the exterior of the
covariant tangential derivatives of the solutions. As a special application, we
use here these energy estimates to prove the exterior stability of the
Minkowski space-time, $\mathbb{R}^{1+4}$, as solution to the coupled
Einstein-Yang-Mills system associated to any compact Lie group $G$, in the
Lorenz gauge and in wave coordinates. The bounds in the exterior for the $L^2$
norm on the covariant derivatives of each component, separately, of the tensor
solution, as well as the bound on the space-time integral of the covariant
tangential derivatives, are motivated by a problem that we will address in a
paper that follows to prove the exterior stability of the $(1+3)$-Minkowski
space-time for perturbations governed by the Einstein-Yang-Mills equations. | Sari Ghanem | 2023-10-12T00:57:19Z | http://arxiv.org/abs/2310.08611v1 | # Energy estimates for the Einstein-Yang-Mills fields and applications
###### Abstract.
We prove exterior energy estimates for tensorial non-linear wave equations, where the background metric is a perturbation of the Minkowski space-time, and where the derivatives are the Minkowski covariant derivatives. We obtain bounds in the exterior region of the Minkowski space-time, for the weighted \(L^{2}\) norm on each component, separately, of the covariant derivative of the tensorial solutions, and we also control a space-time integral in the exterior of the covariant tangential derivatives of the solutions. As a special application, we use here these energy estimates to prove the exterior stability of the Minkowski space-time, \(\mathbb{R}^{1+4}\,\), as solution to the coupled Einstein-Yang-Mills system associated to any compact Lie group \(G\,\), in the Lorenz gauge and in wave coordinates. The bounds in the exterior for the \(L^{2}\) norm on the covariant derivatives of each component, separately, of the tensor solution, as well as the bound on the space-time integral of the covariant tangential derivatives, are motivated by a problem that we will address in a paper that follows to prove the exterior stability of the \((1+3)\)-Minkowski space-time for perturbations governed by the Einstein-Yang-Mills equations.
## 1. Introduction
This is the second paper in a series of three, where we study the non-linear stability of the Minkowski space-time solution to the coupled Einstein-Yang-Mills equations. In this paper, we prove exterior energy estimates and apply them, as a special case, to prove the exterior stability of the \((1+4)\)-Minkowski space-time solution to the Einstein-Yang-Mills system. However, our energy estimates are mostly motivated by the next paper to prove the non-linear exterior stability of the \((1+3)\)-Minkowski space-time.
First, we prove exterior energy estimates for a system of coupled non-linear covariant wave equations. More precisely, we consider that we are given a fixed system of coordinates, namely \((t,x^{1},\ldots,x^{n})\;\), that is not necessarily wave coordinates, yet, for our application to the proof of stability of Minkowski, this system will ultimately be chosen to be that of wave coordinates. In this fixed system of coordinates, we define \(m\) to be the Minkowski metric \((-1,+1,\ldots,+1)\) and we define \(\nabla^{(\mathbf{m})}\) to be the covariant derivative associated to the metric \(m\) (see Definition (2.1)). We consider any arbitrary curved space-time \((\mathcal{M},\mathbf{g})\;\), with a smooth Lorentzian metric \(\mathbf{g}\;\), which will ultimately be, in our application to the non-linear exterior stability problem, our unknown Lorentzian manifold solution to the fully coupled Einstein-Yang-Mills system.
We study the following system of non-linear covariant tensorial wave equations for \(\Phi\) on \((\mathcal{M},\mathbf{g})\), where the initial data for the hyperbolic Cauchy problem is given on an initial Cauchy hypersurface \(\Sigma\), and where \(V\) is any vector,
\[g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{(\mathbf{m})}{}_{\beta} \Phi_{V}=S_{V}\;. \tag{1.1}\]
Here the metric \(\mathbf{g}\) is a perturbation of Minkowski in the following sense: if we define (see Definition 2.10),
\[H^{\mu\nu} := g^{\mu\nu}-m^{\mu\nu}\;, \tag{1.2}\]
where \(m^{\mu\nu}\) is the inverse of the Minkowski metric \(m_{\mu\nu}\), that is defined to be \((-1,+1,\dots,+1)\) in our chosen system of coordinates \((x^{0},x^{1},\dots,x^{n})\), where here \(x^{0}=t\), then we assume in our energy estimates that
\[\sum_{\mu,\nu=0}^{n}|H_{\mu\nu}|<\frac{1}{n}\;. \tag{1.3}\]
This condition on the perturbation \(H\) would make the boundary terms of our energy estimates "look like" the \(L^{2}\) norm of the covariant gradient \(\nabla^{(\mathbf{m})}\Phi_{V}\) of the solution of our system (1.1) (see Lemma 2.7).
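To see why the smallness condition (1.3) is natural, consider the following schematic computation (a heuristic sketch only, not the precise statement of Lemma 2.7): for any covector \(\xi\),

\[\big{|}H^{\alpha\beta}\xi_{\alpha}\xi_{\beta}\big{|}\;\leq\;\Big{(}\sum_{\mu,\nu=0}^{n}|H_{\mu\nu}|\Big{)}\cdot\max_{\mu}|\xi_{\mu}|^{2}\;<\;\frac{1}{n}\sum_{\mu=0}^{n}|\xi_{\mu}|^{2}\;,\]

so the quadratic contribution of \(H\) to \(g^{\alpha\beta}\xi_{\alpha}\xi_{\beta}=m^{\alpha\beta}\xi_{\alpha}\xi_{\beta}+H^{\alpha\beta}\xi_{\alpha}\xi_{\beta}\) is dominated by a small multiple of the flat energy density, which keeps the boundary terms coercive.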
The goal of our energy estimates is to prove an exterior energy estimate that allows us to control the \(L^{2}\) norm of \(\nabla^{(\mathbf{m})}\Phi_{V}\), where the integral is taken on a hypersurface that is the intersection of \(t=constant\), in our fixed system of coordinates \((t,x^{1},\dots,x^{n})\), with the complement of the future domain of dependence, for the metric \(m\), of a compact \(\mathcal{K}\subset\Sigma\). This is what we mean by exterior energy estimates: they are bounds on the \(L^{2}\) norm in domains that are exterior to the future domain of dependence of \(\mathcal{K}\subset\Sigma\).
We consider a specific non-symmetric tensor (see Definition 2.3) which we contract with a weighted vector (see (2.21)) to get a weighted conservation law (see Lemma 2.5). Here the weights are defined in Definitions 2.12, 2.13 and 2.14. Thus, we get a weighted conservation law (see Corollary 2.2) that also contains a quantity controlling the tangential derivatives of our unknown solution \(\Phi_{V}\) (see Lemma 2.6), thanks to the non-vanishing weight \(\widehat{w}^{\prime}\) (see Lemma 2.3).
The fact that the metric \(g\) is assumed to be close enough to the Minkowski metric (see (1.3)) allows us to translate these conservation laws into _weighted_ energy estimates, as in Corollary 2.3.
\[\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\Big{(}\frac{1}{2}\Big{(}| \nabla^{(\mathbf{m})}{}_{t}\Phi_{V}+\nabla^{(\mathbf{m})}{}_{r}\Phi_{V}|^{2}+ \delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{(\mathbf{m})}{ }_{r})\Phi_{V}|^{2}\Big{)}\cdot d\tau\cdot\widehat{w}^{\prime}(q)d^{n}x\;,\]
a control that will be used in the next paper for the non-linear stability of the \((1+3)\)-Minkowski space-time governed by the coupled Einstein-Yang-Mills system. Also, thanks to the definition of our tensor \(T\) in Definition 2.3, we get control on the \(L^{2}\) norm on each component, namely,
\[\int_{\Sigma_{\tau}^{ext}}|\nabla^{(\mathbf{m})}\Phi_{V}|^{2}\;\cdot w(q)\cdot d ^{n}x\;.\]
The fact that we can obtain separate controls on the \(L^{2}\) norm of each component of \(\nabla^{({\bf m})}\Phi_{V}\) is necessary for our next paper, which treats the case \(n=3\).
We showed in [17] that the stability of the Minkowski space-time in the Lorenz gauge and in wave coordinates, solution to the Einstein-Yang-Mills equations, can be recast as the study of non-linear wave equations, in the form of (1.1), on both the Yang-Mills potential \(A\) and the perturbation metric \(h\) (see Definition 2.10) - we refer the reader to the introduction of our previous paper [17].
In fact, the Einstein-Yang-Mills equations are
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = 8\pi\cdot T_{\mu\nu}\;, \tag{1.4}\]
where
\[T_{\mu\nu}=\frac{1}{4\pi}\cdot(<F_{\mu\beta},F_{\nu}^{\ \beta}>-\frac{1}{4}g_{\mu \nu}<F_{\alpha\beta},F^{\alpha\beta}>)\;, \tag{1.5}\]
and where \(F\) is the Yang-Mills curvature, that is a two-form defined by
\[F_{\alpha\beta}=\nabla_{\alpha}A_{\beta}-\nabla_{\beta}A_{\alpha}+[A_{\alpha},A_{\beta}]\;, \tag{1.6}\]
where \(A\) is valued in the Lie algebra \(\mathcal{G}\) associated to any compact Lie group \(G\), and where \(\nabla\) is the Levi-Civita covariant derivative associated to the unknown metric \({\bf g}\;\).
However, we showed in [17], that the Einstein-Yang-Mills equations in the Lorenz gauge and in wave coordinates, could be written as follows,
\[g^{\lambda\mu}\nabla^{({\bf m})}{}_{\lambda}\nabla^{({\bf m})}{} _{\mu}A_{\sigma}\] \[= (\nabla^{({\bf m})}{}_{\sigma}h^{\alpha\mu})\cdot(\nabla^{({\bf m })}{}_{\alpha}A_{\mu})\] \[+\frac{1}{2}\big{(}\nabla^{({\bf m})\mu}h^{\nu}{}_{\sigma}+\nabla^ {({\bf m})}{}_{\sigma}h^{\nu\mu}-\nabla^{({\bf m})}{}^{\nu}h^{\mu}{}_{\sigma} \big{)}\cdot\big{(}\nabla^{({\bf m})}{}_{\mu}A_{\nu}-\nabla^{({\bf m})}{}_{ \nu}A_{\mu}\big{)}\] \[+\frac{1}{2}\big{(}\nabla^{({\bf m})\mu}h^{\nu}{}_{\sigma}+\nabla^ {({\bf m})}{}_{\sigma}h^{\nu\mu}-\nabla^{({\bf m})}{}^{\nu}h^{\mu}{}_{\sigma} \big{)}\cdot[A_{\mu},A_{\nu}]\] \[-\big{(}[A_{\mu},\nabla^{({\bf m})\mu}A_{\sigma}]+[A^{\mu},\nabla^ {({\bf m})}{}_{\mu}A_{\sigma}-\nabla^{({\bf m})}{}_{\sigma}A_{\mu}]+[A^{\mu}, [A_{\mu},A_{\sigma}]]\big{)}\] \[+O(h\cdot\nabla^{({\bf m})}h\cdot\nabla^{({\bf m})}A)+O(h\cdot \nabla^{({\bf m})}h\cdot A^{2})+O(h\cdot A\cdot\nabla^{({\bf m})}A)+O(h\cdot A ^{3})\;,\]
and
\[g^{\alpha\beta}\nabla^{({\bf m})}{}_{\alpha}\nabla^{({\bf m})}{} _{\beta}h_{\mu\nu}\] \[= P(\nabla^{({\bf m})}{}_{\mu}h,\nabla^{({\bf m})}{}_{\nu}h)+Q_{\mu \nu}(\nabla^{({\bf m})}h,\nabla^{({\bf m})}h)+G_{\mu\nu}(h)(\nabla^{({\bf m})} h,\nabla^{({\bf m})}h)\] \[-4<\nabla^{({\bf m})}{}_{\mu}A_{\beta}-\nabla^{({\bf m})}{}_{ \beta}A_{\mu},\nabla^{({\bf m})}{}_{\nu}A^{\beta}-\nabla^{({\bf m})}{}^{\beta }A_{\nu}>\] \[+m_{\mu\nu}\cdot<\nabla^{({\bf m})}{}_{\alpha}A_{\beta}-\nabla^{( {\bf m})}{}_{\beta}A_{\alpha},\nabla^{({\bf m})}{}_{\alpha}A^{\beta}-\nabla^{( {\bf m})}{}^{\beta}A^{\alpha}>\] \[-4\cdot\big{(}<\nabla^{({\bf m})}{}_{\mu}A_{\beta}-\nabla^{({\bf m })}{}_{\beta}A_{\mu},[A_{\nu},A^{\beta}]>+<[A_{\mu},A_{\beta}],\nabla^{({\bf m })}{}_{\nu}A^{\beta}-\nabla^{({\bf m})}{}^{\beta}A_{\nu}>\big{)}\] \[+m_{\mu\nu}\cdot\big{(}<\nabla^{({\bf m})}{}_{\alpha}A_{\beta}- \nabla^{({\bf m})}{}_{\beta}A_{\alpha},[A^{\alpha},A^{\beta}]>+<[A_{\alpha},A _{\beta}],\nabla^{({\bf m})}{}^{\alpha}A^{\beta}-\nabla^{({\bf m})}{}^{\beta}A^{ \alpha}>\big{)}\] \[-4<[A_{\mu},A_{\beta}],[A_{\nu},A^{\beta}]>+m_{\mu\nu}\cdot<[A_{ \alpha},A_{\beta}],[A^{\alpha},A^{\beta}]>\] \[+O\big{(}h\cdot(\nabla^{({\bf m})}A)^{2}\big{)}+O\big{(}h\cdot A^ {2}\cdot\nabla^{({\bf m})}A\big{)}+O\big{(}h\cdot A^{4}\big{)}\;,\]
where \(P\), \(Q\) and \(G\), as well as the big \(O\) notation, are defined in [17].
Hence, if we look at the Lie derivatives of the source terms which appear for the Yang-Mills potential, where here \(Z^{J}\) is any product of length \(|J|\) of Minkowski vector fields, as explained in [17], we have the following bound,
\[|{\mathcal{L}}_{Z^{I}}(g^{\lambda\mu}\nabla^{({\bf m})}{}_{\lambda}\nabla^{({\bf m})}{}_{\mu}A)|\] \[\leq \sum_{|J|+|K|+|L|+|M|\leq|I|}\big{(}\ O(|\nabla^{({\bf m})}({\mathcal{L}}_{Z^{J}}h)|\cdot|\nabla^{({\bf m})}({\mathcal{L}}_{Z^{K}}A)|)\] \[+O(|\nabla^{({\bf m})}({\mathcal{L}}_{Z^{J}}h)|\cdot|{\mathcal{L}}_{Z^{K}}A|\cdot|{\mathcal{L}}_{Z^{L}}A|)\] \[+O(|{\mathcal{L}}_{Z^{J}}A|\cdot|\nabla^{({\bf m})}({\mathcal{L}}_{Z^{K}}A)|)+O(|{\mathcal{L}}_{Z^{J}}A|\cdot|{\mathcal{L}}_{Z^{K}}A|\cdot|{\mathcal{L}}_{Z^{L}}A|)\] \[+O(|{\mathcal{L}}_{Z^{J}}h|\cdot|\nabla^{({\bf m})}({\mathcal{L}}_{Z^{K}}h)|\cdot|\nabla^{({\bf m})}({\mathcal{L}}_{Z^{L}}A)|)\] \[+O(|{\mathcal{L}}_{Z^{J}}h|\cdot|\nabla^{({\bf m})}({\mathcal{L}}_{Z^{K}}h)|\cdot|{\mathcal{L}}_{Z^{L}}A|\cdot|{\mathcal{L}}_{Z^{M}}A|)\] \[+O(|{\mathcal{L}}_{Z^{J}}h|\cdot|{\mathcal{L}}_{Z^{K}}A|\cdot|\nabla^{({\bf m})}({\mathcal{L}}_{Z^{L}}A)|)+O(|{\mathcal{L}}_{Z^{J}}h|\cdot|{\mathcal{L}}_{Z^{K}}A|\cdot|{\mathcal{L}}_{Z^{L}}A|\cdot|{\mathcal{L}}_{Z^{M}}A|)\ \big{)}\;.\]
Unlike the case of the Einstein vacuum equations, already in 4 space dimensions, i.e. for \(n=4\), we see that the non-linear structure of the Einstein-Yang-Mills potential \(A\) exhibits terms such as \(|A|\cdot|\nabla^{({\bf m})}A|\) and \(|A|^{3}\), which are troublesome in the interior region, inside the outgoing light cone prescribed by \(q:=r-t<0\), where \(t\) and \(r:=|x|\) are the time and space wave coordinates of the Einstein-Yang-Mills equations in the Lorenz gauge. Here, we chose to fix our system of coordinates as being that of wave coordinates and then we defined the Minkowski space-time to be the Minkowski metric in this system of wave coordinates: thus, these are the time and space coordinates of our Minkowski space-time.
These "troublesome" terms, namely \(|A|\cdot|\nabla^{({\bf m})}A|\) and \(|A|^{3}\), exhibit factors for \(|A|\) which are not integrable in the interior. Indeed, since we are working with wave equations and therefore the energy is at the level of \(\nabla^{({\bf m})}A\) (see (3.5)), the Gronwall inequality that we would like to establish is a one that is at the level of the norm of the gradient of the Lie derivatives of the fields. Hence, trying to use a Hardy type inequality (as the one given in Corollary 4.1), in order to transform \(A\) into \(\nabla^{({\bf m})}A\), we encounter a problem in the interior region that is already there for \(n=4\).
As mentioned earlier, the energy that we defined involves only the gradient of the fields, and not the field itself (see (3.5)). One needs to convert a control on the zero-derivatives in the source terms into a control on the gradient of the fields using a Hardy type inequality. Yet, a Hardy type inequality says that an \(L^{2}\) norm of the field with a weight of \(\frac{1}{(1+|q|)^{2}}\) could be converted into an \(L^{2}\) norm of the gradient of the field, which is the one that interests us. However, in the case of \(n\geq 4\), in order to close the argument to bound the higher order energy, one needs to control \((1+t)^{1+\lambda}\cdot|{\mathcal{L}}_{Z^{I}}(g^{\lambda\mu}\nabla^{({\bf m})} {}_{\lambda}\nabla^{({\bf m})}{}_{\mu}A)|^{2}\), with \(\lambda>0\), where \({\mathcal{L}}_{Z^{I}}\) are the Minkowski Lie derivatives, in order to get a uniform bound on the energy. This implies that a
factor of the field \(A\) should be \(\frac{1}{(1+t)^{1+\lambda}\cdot(1+|q|)}\). This way, we would have
\[(1+t)^{1+\lambda}\cdot\Big{(}\frac{1}{(1+t)^{1+\lambda}\cdot(1+|q| )}\cdot|A|\Big{)}^{2} = \frac{1}{(1+t)^{1+\lambda}\cdot(1+|q|)^{2}}\cdot|A|^{2}\;.\]
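Schematically, the Hardy type inequality invoked here (Corollary 4.1) takes the following form, under the assumptions that \(\Phi\) decays sufficiently fast at spatial infinity and that the weight \(w\) is admissible (this is a sketch, not the precise statement):

\[\int_{\Sigma_{t}^{ext}}\frac{|\Phi|^{2}}{(1+|q|)^{2}}\cdot w(q)\,d^{n}x\;\lesssim\;\int_{\Sigma_{t}^{ext}}|\nabla^{(\mathbf{m})}\Phi|^{2}\cdot w(q)\,d^{n}x\;,\]

which is the mechanism that trades a factor of \(\frac{1}{1+|q|}\) in front of the field for a covariant derivative.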
Then, by using the Hardy inequality, as explained, we could obtain a control on the integral of the above quantity by an integral of the quantity \(\frac{1}{(1+t)^{1+\lambda}}\cdot|\nabla^{(\mathbf{m})}A|^{2}\), which, when it appears in a Gronwall type inequality, allows one to conclude that the energy will indeed remain bounded. However, looking at the source terms which appear in the wave equation on \(A\) (see (1.7)), we see that in trying to control the non-linear structure to close a Gronwall type inequality on the energy, we are confronted with terms either of this type
\[\sum_{|J|+|K|\leq|I|}|\mathcal{L}_{Z^{J}}A|\cdot|\nabla^{(\mathbf{ m})}(\mathcal{L}_{Z^{K}}A)|\] \[\lesssim \Big{(}E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+ 1)\cdot\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|) ^{1+\gamma}},\quad\text{when}\quad\;q>0\;,\\ \frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{1}{2}}}\quad \text{when}\quad\;q<0\;,\end{cases}\Big{)}\] \[\cdot\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}A|\] \[+\Big{(}E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+ 1)\cdot\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|) ^{\gamma}},\quad\text{when}\quad\;q>0\;,\\ \frac{\epsilon\cdot(1+|q|)^{\frac{1}{2}}}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}} \quad\text{when}\quad\;q<0\;,\end{cases}\Big{)}\] \[\cdot\sum_{|K|\leq|I|}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}A )|\;,\]
or of this type
\[\sum_{|J|+|K|+|L|\leq|I|}|\mathcal{L}_{Z^{J}}A|\cdot|\mathcal{L}_{ Z^{K}}A|\cdot|\mathcal{L}_{Z^{L}}A|\] \[\lesssim \Big{(}E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+ 1)\cdot\begin{cases}\frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{2\gamma }},\quad\text{when}\quad\;q>0\;,\\ \frac{\epsilon\cdot(1+|q|)}{(1+t+|q|)^{(n-1)-2\delta}}\quad\text{when}\quad\; q<0\;,\end{cases}\Big{)}\] \[\cdot\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}A|\;.\]
The decaying factors here are the ones which arise from a weighted version of the Klainerman-Sobolev inequality, and \(E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\) is the constant that bounds the higher order energy norm used in our bootstrap argument for \(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1\) derivatives of the fields, and for now \(\delta=0\) since we are assuming that the energy is uniformly bounded. However, as we can see, both of these terms do not have the right factor in the interior region for \(n=4\), as they enter as \(\frac{1}{(1+t)\cdot(1+|q|)}\) factors for \(|A|\) instead of \(\frac{1}{(1+t)^{1+\lambda}\cdot(1+|q|)}\), \(\lambda>0\;.\)
One could think about fixing the problem by allowing a polynomial growth of the energy at a rate of \((1+t)^{\delta}\), with \(\delta>0\), and therefore one could relax the need for the factor in the Gronwall lemma to be integrable, requiring it instead to be \(\frac{1}{(1+t)}\). Would this fix the problem? As discussed above, in order to effectively use a Hardy type inequality, one would then need a factor of \(\frac{1}{(1+t)\cdot(1+|q|)}\) in front of the field \(A\). However, still under such a relaxed bootstrap assumption, the factors which appear in front of \(A\), generated from the terms \(|A|\cdot|\nabla^{({\bf m})}A|\) and \(|A|^{3}\), are not good enough in the interior region, as they enter as a \(\frac{1}{(1+t)^{1-\delta}\cdot(1+|q|)}\) factor for \(|A|\).
Naively, one could try to counter the problem in the interior by putting a weight in the interior, say \((1+|q|)^{\beta}\), with \(\beta>0\) for \(q<0\), in the weighted higher order energy, so as to obtain a better control in the interior region when using the Klainerman-Sobolev inequality, as we did for the exterior region. While this seems an interesting idea, in reality, doing so would change the sign of the derivative of the weight at \(q=0\), since the weight grows as, say, \((1+|q|)^{\alpha}\), with \(\alpha>0\) for \(q\geq 0\), and as \((1+|q|)^{\beta}\), with \(\beta>0\) for \(q<0\), which means that the derivative of the weight is negative in the interior region. This implies that a space-time integral enters with the wrong sign (the negative sign generated by the growing weight in the interior) in the energy estimate that we would like to use in order to establish the Gronwall inequality (see Lemma 2.8 and Corollary 2.3). Therefore, we can only introduce a weight in the exterior region, and as much weight as we want, which allows us to gain more decay in \(|q|\), yet only in the exterior region.
We can then proceed in the exterior to obtain a suitable Gronwall type inequality on the energy (see Lemma 4.5) that allows us to close the bootstrap argument that we started, as we do in Proposition 4.1. We will then prove the following theorem.
### The statements
We will prove the energy estimate given in Corollary 2.3 and as a special application for \(n=4\), we will use this energy estimate to prove Proposition 4.1. Thus, based on the set-up detailed in our previous paper [17], we would have by then proved the following theorem.
**Theorem 1**.: _Let \(n\geq 4\). Assume that we are given an initial data set \((\Sigma,\overline{A},\overline{E},\overline{g},\overline{k})\) for (1.4). We assume that \(\Sigma\) is diffeomorphic to \(\mathbb{R}^{n}\). Then, there exists a global system of coordinates \((x^{1},...,x^{n})\in\mathbb{R}^{n}\) for \(\Sigma\). We define_
\[r:=\sqrt{(x^{1})^{2}+...+(x^{n})^{2}}\;. \tag{1.11}\]
_Furthermore, we assume that the data \((\overline{A},\overline{E},\overline{g},\overline{k})\) is smooth and asymptotically flat._
_Let \(\delta_{ij}\) be the Kronecker symbol, and let \(\overline{h}_{ij}\) be defined in this system of coordinates \(x^{i}\), by_
\[\overline{h}_{ij}:=\overline{g}_{ij}-\delta_{ij}\;. \tag{1.12}\]
_We then define the weighted \(L^{2}\) norm on \(\Sigma\), namely \(\overline{\mathcal{E}}_{N}\), for \(\gamma>0\), by_
\[\begin{array}{lcl}&\overline{\mathcal{E}}_{N}\\ :=&\sum_{|I|\leq N}\big{(}\|(1+r)^{1/2+\gamma+|I|}\overline{D}(\overline{D}^{I} \overline{A})\|_{L^{2}(\Sigma)}+\|(1+r)^{1/2+\gamma+|I|}\overline{D}(\overline {D}^{I}\overline{h})\|_{L^{2}(\Sigma)}\big{)}\\ :=&\sum_{|I|\leq N}\big{(}\sum_{i=1}^{n}\|(1+r)^{1/2+\gamma+|I|} \overline{D}(\overline{D}^{I}\overline{A_{i}})\|_{L^{2}(\Sigma)}+\sum_{i,j=1}^ {n}\|(1+r)^{1/2+\gamma+|I|}\overline{D}(\overline{D}^{I}\overline{h}_{ij})\|_ {L^{2}(\Sigma)}\big{)}\;,\end{array} \tag{1.13}\]
_where the integration is taken on \(\Sigma\) with respect to the Lebesgue measure \(dx_{1}\dots dx_{n}\), and where \(\overline{D}\) is the Levi-Civita covariant derivative associated to the given Riemannian metric \(\overline{g}\)._
_We also assume that the initial data set \((\Sigma,\overline{A},\overline{E},\overline{g},\overline{k})\) satisfies the Einstein-Yang-Mills constraint equations, namely_
\[\begin{array}{lcl}\mathcal{R}+\overline{k}^{i}{}_{i}\overline{k}_{j}^{j}- \overline{k}^{ij}\overline{k}_{ij}&=&\frac{4}{(n-1)}<\overline{E}_{i}, \overline{E}^{i}>\\ &&+<\overline{D}_{i}\overline{A}_{j}-\overline{D}_{j}\overline{A}_{i}+[ \overline{A}_{i},\overline{A}_{j}],\overline{D}^{i}\overline{A}^{j}-\overline {D}^{j}\overline{A}^{i}+[\overline{A}^{i},\overline{A}^{j}]>\;,\\ \overline{D}_{i}\overline{k}^{i}{}_{j}-\overline{D}_{j}\overline{k}^{i}{}_{i}& =&2<\overline{E}_{i},\overline{D}_{j}\overline{A}^{i}-\overline{D}^{i} \overline{A}_{j}+[\overline{A}_{j},\overline{A}^{i}]>\;,\\ \overline{D}^{i}\overline{E}_{i}+[\overline{A}^{i},\overline{E}_{i}]&=&0\;. \end{array} \tag{1.14}\]
_For any \(n\geq 4\), and for any \(N\geq 2\lfloor\frac{n}{2}\rfloor+2\), there exists a constant \(\overline{c}(N,\gamma)\) depending on \(N\) and on \(\gamma\), such that if_
\[\overline{\mathcal{E}}_{N}\leq\overline{c}(N,\gamma)\;, \tag{1.15}\]
_then there exists a solution \((\mathcal{M},A,g)\) to the Cauchy problem for the fully coupled Einstein-Yang-Mills equations (1.4) in the future of the whole causal complement of any compact \(\mathcal{K}\subset\Sigma\), converging to the null Yang-Mills potential and to the Minkowski space-time in the following sense: if we define the metric \(m_{\mu\nu}\) to be the Minkowski metric in wave coordinates \((x^{0},x^{1},\dots,x^{n})\) and define \(t=x^{0}\), and if we define in this system of wave coordinates_
\[h_{\mu\nu}:=g_{\mu\nu}-m_{\mu\nu}\;, \tag{1.16}\]
_then, for \(\overline{h}_{ij}\) and \(\overline{A}_{i}\) decaying sufficiently fast, as exhibited in Proposition 4.1, we have the following estimates on \(h\), and on \(A\) in the Lorenz gauge, for the norm constructed using wave coordinates, by taking the sum over all indices in wave coordinates: there exists a constant \(E(N)\), depending on \(\overline{c}(N,\gamma)\), such that for all \(|I|\leq N-\lfloor\frac{n}{2}\rfloor-1\), we have the following estimates in the whole complement of the future causal domain of the compact \(\mathcal{K}\subset\Sigma\),_
\[\sum_{\mu=0}^{n}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{I}}A_{\mu})(t,x )|+\sum_{\mu,\nu=0}^{n}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{I}}h_{\mu\nu})(t,x)|\] \[\leq C(\mathcal{K})\cdot\frac{E^{ext}(N)}{(1+t+|r-t|)^{\frac{(n-1)}{2} }(1+|r-t|)^{1+\gamma}}\;,\]
_and_
\[\sum_{\mu=0}^{n}|\mathcal{L}_{Z^{I}}A_{\mu}(t,x)|+\sum_{\mu,\nu=0 }^{n}|\mathcal{L}_{Z^{I}}h_{\mu\nu}(t,x)| \leq C(\mathcal{K})\cdot c(\gamma)\cdot\frac{E^{ext}(N)}{(1+t+|r-t|)^ {\frac{(n-1)}{2}}(1+|r-t|)^{\gamma}}\;,\]
_where \(Z^{I}\) are the Minkowski vector fields._
_In particular, the gauge invariant norm on the Yang-Mills curvature decays as follows, for all \(|I|\leq N-\lfloor\frac{n}{2}\rfloor-1\),_
\[\sum_{\mu,\nu=0}^{n}|\mathcal{L}_{Z^{I}}F_{\mu\nu}(t,x)| \leq C(\mathcal{K})\cdot\frac{E^{ext}(N)}{(1+t+|r-t|)^{\frac{(n-1)}{2 }}(1+|r-t|)^{1+\gamma}}\] \[+C(\mathcal{K})\cdot c(\gamma)\cdot\frac{E^{ext}(N)}{(1+t+|r-t|) ^{(n-1)}(1+|r-t|)^{2\gamma}}\;.\]
_Furthermore, if one defines \(w\) as follows,_
\[w(q):=\begin{cases}(1+|r-t|)^{1+2\gamma}\quad\text{when}\quad\;r-t>0\;,\\ 1\quad\text{when}\quad\;r-t<0\;,\end{cases} \tag{1.20}\]
_and if we define \(\Sigma_{t}^{ext}(\mathcal{K})\) as being the time evolution in wave coordinates of \(\Sigma\) in the future of the causal complement of \(\mathcal{K}\), then for all time \(t\), we have_
\[\mathcal{E}_{N}^{ext}(\mathcal{K})(t) \tag{1.21}\] \[:= \sum_{|J|\leq N}\left(\|w^{1/2}\nabla^{(\mathbf{m})}(\mathcal{L}_ {Z^{J}}h(t,\cdot))\|_{L^{2}(\Sigma_{t}^{ext}(\mathcal{K}))}+\|w^{1/2}\nabla^{ (\mathbf{m})}(\mathcal{L}_{Z^{J}}A(t,\cdot))\|_{L^{2}(\Sigma_{t}^{ext}( \mathcal{K}))}\right)\] \[\leq E^{ext}(N)\;.\]
_More precisely, for any constant \(E^{ext}(N)\), there exist two constants, a constant \(c_{1}\) that depends on \(\gamma>0\) and on \(n\geq 5\), and a constant \(c_{2}\) (to bound \(\overline{\mathcal{E}}_{N}(0)\) defined in (1.13)), that depends on \(E^{ext}(N)\), on \(N\geq 2\lfloor\frac{n}{2}\rfloor+2\) and on \(w\) (i.e. depends on \(\gamma\)), such that if_
\[\overline{\mathcal{E}}_{(\lfloor\frac{n}{2}\rfloor+1)}(0)\leq c_{1}(\gamma,n)\;, \tag{1.22}\]
_and if_
\[\overline{\mathcal{E}}_{N}(0)\leq c_{2}(E^{ext}(N),N,\gamma)\;, \tag{1.23}\]
_then, we have for all time \(t\),_
\[\mathcal{E}_{N}^{ext}(\mathcal{K})(t)\leq E^{ext}(N)\;. \tag{1.24}\]
## 2. Energy estimates in the exterior region
The goal is to prove exterior stability of the Minkowski space-time for \(n=4\) in this paper, and for \(n=3\) in the paper that follows. We note that exterior stability for \(n\geq 5\) is implied by the above, and we have already proved global stability for \(n\geq 5\) in a previous paper. For this, we will derive exterior energy estimates, and we will then use a Klainerman-Sobolev inequality in the exterior and make a continuity argument, as before, yet on the energy in the exterior.
### The definitions and notations
**Definition 2.1**.: Let \((x^{0},x^{1},\ldots,x^{n})\) be a fixed system of coordinates (which will ultimately be chosen to be a system of wave coordinates), which we shall also sometimes write as \((t,x^{1},\ldots,x^{n})\), where \(t=x^{0}\). We define \(m\) to be the Minkowski metric \((-1,+1,\ldots,+1)\) in this fixed system of coordinates. We define \(\nabla^{(\mathbf{m})}\) to be the covariant derivative associated to the metric \(m\).
**Definition 2.2**.: For an arbitrary tensor of arbitrary order, say \(K_{\alpha\beta}\), either valued in the Lie algebra associated to the Lie group \(G\) or a scalar, we define
\[|K|^{2} := \sum_{\alpha,\;\beta\in\{t,x^{1},\ldots,x^{n}\}}|K_{\alpha\beta}| ^{2}\,.\]
Since \(\nabla^{(\mathbf{m})}K_{UV}\) is a tensor, and \(\nabla^{(\mathbf{m})}K\) is a tensor of higher order, the definition of the norm gives
\[|\nabla^{(\mathbf{m})}K_{UV}|^{2} = \sum_{\mu\in\{t,x^{1},\ldots,x^{n}\}}|\nabla^{(\mathbf{m})}{}_{\mu}K_{UV}|^{2}\,,\]
and
\[|\nabla^{(\mathbf{m})}K|^{2} = \sum_{\alpha,\;\beta,\,\mu\in\{t,x^{1},\ldots,x^{n}\}}|\nabla^{( \mathbf{m})}{}_{\mu}K_{\alpha\beta}|^{2}\,.\]
Note that this definition coincides with the definition of the norm that we gave in [17], although we introduced it there in a more general fashion using contractions with a constructed euclidian metric.
**Definition 2.3**.: Let \(\Phi\) be either a tensor valued in the Lie algebra \(\mathcal{G}\) associated to the Lie group \(G\), or a scalar. In particular, in this paper we will have either \(\Phi_{V}=A_{V}\) or \(\Phi_{UV}=h_{UV}\). We consider the following non-symmetric tensor for wave equations. When \(\Phi=A\), we define
\[T^{(\mathbf{g})\;\mu}{}_{\nu}(\Phi_{V})=g^{\mu\alpha}<\nabla^{(\mathbf{m})}{}_ {\alpha}\Phi_{V},\nabla^{(\mathbf{m})}{}_{\nu}\Phi_{V}>-\frac{1}{2}m^{\mu}{}_ {\nu}\cdot g^{\alpha\beta}<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi_{V},\nabla^{( \mathbf{m})}{}_{\beta}\Phi_{V}>\,, \tag{2.1}\]
and when \(\Phi=h\), we define
\[T^{(\mathbf{g})\;\mu}{}_{\nu}(\Phi_{UV})=g^{\mu\alpha}<\nabla^{(\mathbf{m})}{ }_{\alpha}\Phi_{UV},\nabla^{(\mathbf{m})}{}_{\nu}\Phi_{UV}>-\frac{1}{2}m^{\mu} {}_{\nu}\cdot g^{\alpha\beta}<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi_{UV}, \nabla^{(\mathbf{m})}{}_{\beta}\Phi_{UV}>\,, \tag{2.2}\]
where we raise index with respect to the Minkowski metric \(m\), defined to be Minkowski in wave coordinates. We consider \(\Phi\) to be a field decaying fast enough at spatial infinity, so that there is no contribution at null infinity.
For simplicity of notation, we will write from now on, either \(\Phi\) to say \(\Phi_{V}\), or to say \(\Phi_{UV}\) where \(U\), \(V\) are any vectors, which in this paper will be ultimately chosen to be wave coordinates. We will therefore write
\[T^{(\mathbf{g})\;\mu}_{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;
in such a way that the divergence theorems stated in what follows hold true: see, for instance, Lemmas 2.2 and 2.5 and Corollary 2.2.
_Remark 2.1_.: Based on the construction of \(N(q_{0})\) in Definition 2.5, we have that the exterior region \(\overline{C}\) includes the region \(\{(t,x)\mid q:=r-t\geq q_{0}\}\).
**Definition 2.10**.: We define \(H\) as the 2-tensor given by
\[H^{\mu\nu} := g^{\mu\nu}-m^{\mu\nu}\;, \tag{2.5}\]
where \(m^{\mu\nu}\) is the inverse of the Minkowski metric \(m_{\mu\nu}\), defined in Definition 2.1. In addition, we define
\[h_{\mu\nu} := g_{\mu\nu}-m_{\mu\nu}\;, \tag{2.6}\] \[h^{\mu\nu} := m^{\mu\mu^{\prime}}m^{\nu\nu^{\prime}}h_{\mu^{\prime}\nu^{ \prime}}\;. \tag{2.7}\]
### Conservation laws for wave equations
**Lemma 2.1**.: _For a vector field \(X^{\nu}\), we have_
\[\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{x}}\nabla^{\mu}\big{(}X^ {\nu}T^{(\mathbf{g})}_{\mu\nu}\big{)}\cdot dv^{(\mathbf{m})} \tag{2.8}\] \[= \int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{x}}\Big{(}\big{(}{\nabla^ {(\mathbf{m})}}^{\mu}X^{\nu}\big{)}\cdot T^{(\mathbf{g})}_{\mu\nu}+<g^{\mu \alpha}{\nabla^{(\mathbf{m})}}_{\mu}{\nabla^{(\mathbf{m})}}_{\alpha}\Phi_{V},{\nabla^{(\mathbf{m})}}_{\nu}\Phi_{V}>\] \[+({\nabla^{(\mathbf{m})}}_{\mu}H^{\mu\alpha})\cdot<{\nabla^{( \mathbf{m})}}_{\alpha}\Phi_{V},{\nabla^{(\mathbf{m})}}_{\nu}\Phi_{V}>-\frac{1} {2}m^{\mu}_{\;\;\;\nu}\cdot({\nabla^{(\mathbf{m})}}_{\mu}H^{\alpha\beta})\cdot <{\nabla^{(\mathbf{m})}}_{\alpha}\Phi_{V},{\nabla^{(\mathbf{m})}}_{\beta}\Phi_ {V}>\Big{)}\cdot dv^{(\mathbf{m})}\] \[= \int_{\Sigma^{ext}_{x_{1}}}\big{(}X^{\mu}T^{(\mathbf{g})}_{\mu \nu}\big{)}n^{(\mathbf{m}),\nu}_{\Sigma}\cdot dv^{(\mathbf{m})}_{\Sigma}-\int _{\Sigma^{ext}_{x_{2}}}\big{(}X^{\mu}T^{(\mathbf{g})}_{\mu\nu}\big{)}n^{( \mathbf{m}),\nu}_{\Sigma}\cdot dv^{(\mathbf{m})}_{\Sigma}\] \[-\int_{N^{t_{2}}_{t_{1}}}\big{(}X^{\mu}T^{(\mathbf{g})}_{\mu\nu} \big{)}n^{(\mathbf{m}),\nu}_{N}\cdot dv^{(\mathbf{m})}_{N}\;.\]
_where the tensor \(T_{\mu\nu}\) is defined in Definition 2.3._
Proof.: Contracting the stress-energy-momentum tensor with a vector field \(X^{\nu}\), and applying the divergence theorem to \(X^{\nu}T_{\mu\nu}\), one gets
\[\int_{\Sigma^{ext}_{t_{2}}}\big{(}X^{\mu}T^{(\mathbf{g})}_{\mu \nu}\big{)}n^{(\mathbf{g}),\nu}_{\Sigma}\cdot dv^{(\mathbf{g})}_{\Sigma}+\int _{N^{t_{2}}_{t_{1}}}\big{(}X^{\mu}T^{(\mathbf{g})}_{\mu\nu}\big{)}n^{(\mathbf{ g}),\nu}_{N}\cdot dv^{(\mathbf{g})}_{N}+\int^{t_{2}}_{t_{1}}\int_{\Sigma^{ext}_{x}(q)} \nabla^{\mu}\big{(}X^{\nu}T^{(\mathbf{g})}_{\mu\nu}\big{)}\cdot dv^{(\mathbf{ g})}\] \[= \int_{\Sigma^{ext}_{t_{1}}}\big{(}X^{\mu}T^{(\mathbf{g})}_{\mu\nu }\big{)}n^{(\mathbf{g}),\nu}_{\Sigma}\cdot dv^{(\mathbf{g})}_{\Sigma}\;.\]
We then compute
\[{\nabla^{(\mathbf{m})}}^{\mu}T^{(\mathbf{g})}_{\mu\nu} := m^{\mu\lambda}{\nabla^{(\mathbf{m})}}_{\lambda}T_{\mu\nu}={ \nabla^{(\mathbf{m})}}_{\mu}{T^{\mu}}_{\nu}\;. \tag{2.10}\]
We get
\[\nabla^{({\bf m})}{}_{\mu}T^{({\bf g})\;\mu}{}_{\nu}\] \[= (\nabla^{({\bf m})}{}_{\mu}g^{\mu\alpha})\cdot<\nabla^{({\bf m})}{}_ {\alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>-\frac{1}{2}m^{\mu}{}_{\nu}\cdot( \nabla^{({\bf m})}{}_{\mu}g^{\alpha\beta})\cdot<\nabla^{({\bf m})}{}_{\alpha} \Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>\] \[+g^{\mu\alpha}<\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{ \alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>+g^{\mu\alpha}<\nabla^{({\bf m})}{}_ {\alpha}\Phi,\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{\nu}\Phi>\] \[-\frac{1}{2}m^{\mu}{}_{\nu}\cdot g^{\alpha\beta}<\nabla^{({\bf m}) }{}_{\mu}\nabla^{({\bf m})}{}_{\alpha}\Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>- \frac{1}{2}m^{\mu}{}_{\nu}\cdot g^{\alpha\beta}<\nabla^{({\bf m})}{}_{\alpha} \Phi,\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{\beta}\Phi>\] \[(\mbox{where we used the fact that }\nabla^{({\bf m})}m=0)\] \[= (\nabla^{({\bf m})}{}_{\mu}g^{\mu\alpha})\cdot<\nabla^{({\bf m})}{ }_{\alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>-\frac{1}{2}m^{\mu}{}_{\nu}\cdot( \nabla^{({\bf m})}{}_{\mu}g^{\alpha\beta})\cdot<\nabla^{({\bf m})}{}_{\alpha} \Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>\] \[+<g^{\mu\alpha}\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{ \alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>+g^{\mu\alpha}<\nabla^{({\bf m})}{}_ {\alpha}\Phi,\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{\nu}\Phi>\] \[-m^{\mu}{}_{\nu}\cdot g^{\alpha\beta}<\nabla^{({\bf m})}{}_{\mu} \nabla^{({\bf m})}{}_{\alpha}\Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>\] \[(\mbox{using the symmetry of the metric }g)\] \[= (\nabla^{({\bf m})}{}_{\mu}g^{\mu\alpha})\cdot<\nabla^{({\bf m})}{ }_{\alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>-\frac{1}{2}m^{\mu}{}_{\nu}\cdot( \nabla^{({\bf m})}{}_{\mu}g^{\alpha\beta})\cdot<\nabla^{({\bf m})}{}_{\alpha} \Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>\] \[+<g^{\mu\alpha}\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{ \alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>+g^{\mu\alpha}<\nabla^{({\bf m})}{} _{\alpha}\Phi,\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{\nu}\Phi>\] \[-g^{\alpha\beta}<\nabla^{({\bf m})}{}_{\nu}\nabla^{({\bf m})}{}_{ \alpha}\Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>\.\]
Now, we can compute \(\nabla^{(\mathbf{m})}{}_{\nu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi\) in wave coordinates; if the end result for \(\nabla^{(\mathbf{m})}{}^{\mu}T_{\mu\nu}\) is a tensor in \(\nu\), then the identity that we obtain holds independently of the system of coordinates. In wave coordinates, the Christoffel symbols vanish and therefore the two derivatives commute, i.e.
\[\nabla^{({\bf m})}{}_{\nu}\nabla^{({\bf m})}{}_{\alpha}\Phi=\nabla^{({\bf m})}{ }_{\alpha}\nabla^{({\bf m})}{}_{\nu}\Phi\.\]
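For completeness, this commutation is just the statement that the Christoffel symbols of \(m\) vanish in these coordinates: for a scalar (or Lie-algebra valued) function \(\Phi\),

\[\nabla^{(\mathbf{m})}{}_{\nu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi=\partial_{\nu}\partial_{\alpha}\Phi-\Gamma^{(\mathbf{m})\,\lambda}{}_{\nu\alpha}\cdot\partial_{\lambda}\Phi=\partial_{\nu}\partial_{\alpha}\Phi\;,\]

which is manifestly symmetric in \((\nu,\alpha)\).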
Thus, in wave coordinates
\[\nabla^{({\bf m})}{}^{\mu}T^{({\bf g})}_{\mu\nu}\] \[= (\nabla^{({\bf m})}{}_{\mu}g^{\mu\alpha})\cdot<\nabla^{({\bf m})}{ }_{\alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>-\frac{1}{2}m^{\mu}{}_{\nu}\cdot( \nabla^{({\bf m})}{}_{\mu}g^{\alpha\beta})\cdot<\nabla^{({\bf m})}{}_{\alpha} \Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>\] \[+<g^{\mu\alpha}\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{ \alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>+g^{\mu\alpha}<\nabla^{({\bf m})}{}_{ \alpha}\Phi,\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{\nu}\Phi>\] \[-g^{\alpha\beta}<\nabla^{({\bf m})}{}_{\alpha}\nabla^{({\bf m})}{} _{\nu}\Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>\] \[= (\nabla^{({\bf m})}{}_{\mu}g^{\mu\alpha})\cdot<\nabla^{({\bf m})}{} _{\alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>-\frac{1}{2}m^{\mu}{}_{\nu}\cdot( \nabla^{({\bf m})}{}_{\mu}g^{\alpha\beta})\cdot<\nabla^{({\bf m})}{}_{\alpha} \Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>\] \[+<g^{\mu\alpha}\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{ \alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>\.\]
Since the end result is a tensor in \(\nu\), we obtain
\[\nabla^{({\bf m})}{}^{\mu}T^{({\bf g})}_{\mu\nu} = (\nabla^{({\bf m})}{}_{\mu}g^{\mu\alpha})\cdot<\nabla^{({\bf m})}{} _{\alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>-\frac{1}{2}m^{\mu}{}_{\nu}\cdot( \nabla^{({\bf m})}{}_{\mu}g^{\alpha\beta})\cdot<\nabla^{({\bf m})}{}_{\alpha} \Phi,\nabla^{({\bf m})}{}_{\beta}\Phi>\] \[+<g^{\mu\alpha}\nabla^{({\bf m})}{}_{\mu}\nabla^{({\bf m})}{}_{ \alpha}\Phi,\nabla^{({\bf m})}{}_{\nu}\Phi>\.\]
Contracting the stress-energy-momentum tensor in its second index with a vector field \(X^{\nu}\), and computing the covariant divergence of \(X^{\nu}T_{\mu\nu}\), one gets
\[{\nabla^{({\bf m})}}^{\mu}\big{(}X^{\nu}T^{({\bf g})}_{\mu\nu}\big{)} = \big{(}{\nabla^{({\bf m})}}^{\mu}X^{\nu}\big{)}\cdot T^{({\bf g})} _{\mu\nu}+\big{(}X^{\nu}\big{)}\cdot{\nabla^{({\bf m})}}^{\mu}T^{({\bf g})}_{ \mu\nu}\] \[= {\nabla^{({\bf m})}}^{\mu}X^{\nu}\cdot T^{({\bf g})}_{\mu\nu}+{ \nabla^{({\bf m})}}^{\mu}T^{({\bf g})}_{\mu X}\;.\]
We can then write
\[{\nabla^{({\bf m})}}^{\mu}\big{(}X^{\nu}T^{({\bf g})}_{\mu\nu}\big{)}\] \[= \big{(}{\nabla^{({\bf m})}}^{\mu}X^{\nu}\big{)}\cdot T^{({\bf g}) }_{\mu\nu}+<g^{\mu\alpha}{\nabla^{({\bf m})}}_{\mu}{\nabla^{({\bf m})}}_{\alpha }\Phi,{\nabla^{({\bf m})}}_{\nu}\Phi>\] \[+({\nabla^{({\bf m})}}_{\mu}g^{\mu\alpha})\cdot<{\nabla^{({\bf m} )}}_{\alpha}\Phi,{\nabla^{({\bf m})}}_{\nu}\Phi>-\frac{1}{2}{m^{\mu}}_{\nu} \cdot({\nabla^{({\bf m})}}_{\mu}g^{\alpha\beta})\cdot<{\nabla^{({\bf m})}}_{ \alpha}\Phi,{\nabla^{({\bf m})}}_{\beta}\Phi>\;.\]
This leads to
\[\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{\tau}}\nabla^{\mu}\big{(} X^{\nu}T^{({\bf g})}_{\mu\nu}\big{)}\cdot dv^{({\bf m})} \tag{2.12}\] \[= \int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{\tau}}\Big{(}\big{(}{ \nabla^{({\bf m})}}^{\mu}X^{\nu}\big{)}\cdot T^{({\bf g})}_{\mu\nu}+<g^{\mu \alpha}{\nabla^{({\bf m})}}_{\mu}{\nabla^{({\bf m})}}_{\alpha}\Phi,{\nabla^{( {\bf m})}}_{\nu}\Phi>\] \[+({\nabla^{({\bf m})}}_{\mu}g^{\mu\alpha})\cdot<{\nabla^{({\bf m })}}_{\alpha}\Phi,{\nabla^{({\bf m})}}_{\nu}\Phi>-\frac{1}{2}{m^{\mu}}_{\nu} \cdot({\nabla^{({\bf m})}}_{\mu}g^{\alpha\beta})\cdot<{\nabla^{({\bf m})}}_{ \alpha}\Phi,{\nabla^{({\bf m})}}_{\beta}\Phi>\Big{)}\cdot dv^{({\bf m})}\] \[= \int_{\Sigma^{ext}_{t_{1}}}\big{(}X^{\mu}T^{({\bf g})}_{\mu\nu} \big{)}n^{({\bf m}),\nu}_{\Sigma}\cdot dv^{({\bf m})}_{\Sigma}-\int_{\Sigma^{ ext}_{t_{2}}}\big{(}X^{\mu}T^{({\bf g})}_{\mu\nu}\big{)}n^{({\bf m}),\nu}_{ \Sigma}\cdot dv^{({\bf m})}_{\Sigma}\] \[-\int_{N^{t_{2}}_{t_{1}}}\big{(}X^{\mu}T^{({\bf g})}_{\mu\nu} \big{)}n^{({\bf m}),\nu}_{N}\cdot dv^{({\bf m})}_{N}\;.\]
Using the definition of \(H^{\mu\nu}:=g^{\mu\nu}-m^{\mu\nu}\), we get the result.
We recall that \(\hat{L}^{\nu}\) is defined as in Definition 2.9.
**Lemma 2.2**.: _We have_
\[\int_{\Sigma^{ext}_{t_{2}}}\Big{(}-\frac{1}{2}g^{tt}<{\nabla^{(\mathbf{m})}}_{t}\Phi_{V},{\nabla^{(\mathbf{m})}}_{t}\Phi_{V}>+\frac{1}{2}g^{ji}<{\nabla^{(\mathbf{m})}}_{j}\Phi_{V},{\nabla^{(\mathbf{m})}}_{i}\Phi_{V}>\Big{)}\cdot d^{n}x\] \[+\int_{N^{t_{2}}_{t_{1}}}\big{(}T^{(\mathbf{g})}_{\hat{L}t}\big{)}\cdot dv^{(\mathbf{m})}_{N}\] \[= \int_{\Sigma^{ext}_{t_{1}}}\Big{(}-\frac{1}{2}g^{tt}<{\nabla^{(\mathbf{m})}}_{t}\Phi_{V},{\nabla^{(\mathbf{m})}}_{t}\Phi_{V}>+\frac{1}{2}g^{ji}<{\nabla^{(\mathbf{m})}}_{j}\Phi_{V},{\nabla^{(\mathbf{m})}}_{i}\Phi_{V}>\Big{)}\cdot d^{n}x\] \[-\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{\tau}}\Big{(}{\nabla^{(\mathbf{m})}}^{\mu}T^{(\mathbf{g})}_{\mu t}\Big{)}\cdot d\tau d^{n}x\;.\]
Proof.: Since \(m\) is, by definition, the Minkowski metric in the wave coordinates \(\{t,x^{1},\ldots,x^{n}\}\), we have for \(X=\frac{\partial}{\partial t}\)
\[\left(\nabla^{(\mathbf{m})}{}^{\mu}\big{(}\frac{\partial}{\partial t }\big{)}^{\nu}\right)\cdot T^{(\mathbf{g})}_{\mu\nu} = 0\;,\] \[n^{(\mathbf{m}),\nu}_{\Sigma} = \left(\frac{\partial}{\partial t}\right)^{\nu}\;,\] \[dv^{(\mathbf{m})}_{\Sigma} = dx^{1}\ldots dx^{n}:=d^{n}x\;.\]
Hence, the conservation law in Lemma 2.1, obtained through the divergence theorem for the non-symmetric tensor \(T_{\mu t}\), gives
\[\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{\tau}}\left(\nabla^{( \mathbf{m})}{}^{\mu}T^{(\mathbf{g})}_{\mu t}\right)\cdot d\tau d^{n}x = \int_{\Sigma^{ext}_{t_{2}}}T^{(\mathbf{g})}_{tt}\cdot d^{n}x- \int_{\Sigma^{ext}_{t_{1}}}T^{(\mathbf{g})}_{tt}\cdot d^{n}x\] \[-\int_{N^{t_{2}}_{t_{1}}}\left(T^{(\mathbf{g})}_{Lt}\right)\cdot dv ^{(\mathbf{m})}_{N}\;.\]
We compute
\[T^{(\mathbf{g})}_{tt}=m_{\alpha t}T^{(\mathbf{g})}{}^{\alpha}{}_{t}=m_{tt}T^{( \mathbf{g})}{}^{t}{}_{t}=-T^{(\mathbf{g})}{}^{t}{}_{t}\;.\]
We compute further,
\[T^{(\mathbf{g})}{}^{t}{}_{t}\] \[= g^{t\alpha}<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>-\frac{1}{2}m^{t}{}_{t}\cdot g^{\alpha\beta}<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi>\] \[= g^{t\alpha}<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+\frac{1}{2}m_{tt}\cdot\left(g^{t\beta}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi>+g^{j\beta}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi>\right)\] \[= \frac{1}{2}g^{t\alpha}<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>-\frac{1}{2}g^{j\beta}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi>\] \[= \frac{1}{2}g^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+\frac{1}{2}g^{tj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>-\frac{1}{2}g^{jt}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\] \[-\frac{1}{2}g^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{i}\Phi>\;.\]
Consequently, the conservation law (2.14), with the vector field \(X=\frac{\partial}{\partial t}\), gives the stated result.
**Corollary 2.1**.: _We have_
\[\begin{array}{l}\int_{\Sigma_{t_{2}}^{ext}}\Big{(}-\frac{1}{2}g^{tt}<\nabla^{( \mathbf{m})}{}_{t}\Phi_{V},\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>+\frac{1}{2}g^{ji} <\nabla^{(\mathbf{m})}{}_{j}\Phi_{V},\nabla^{(\mathbf{m})}{}_{i}\Phi_{V}>\Big{)} \cdot d^{n}x\\ +\int_{N_{t_{1}}^{t_{2}}}\big{(}T_{\hat{L}t}^{(\mathbf{g})}\big{)}\cdot dv_{N}^{ (\mathbf{m})}\\ =&\int_{\Sigma_{t_{1}}^{ext}}\Big{(}-\frac{1}{2}g^{tt}<\nabla^{( \mathbf{m})}{}_{t}\Phi_{V},\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>+\frac{1}{2}g^{ ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi_{V},\nabla^{(\mathbf{m})}{}_{i}\Phi_{V}> \Big{)}\cdot d^{n}x\\ -\int_{t_{1}}^{t_{2}}\int_{\Sigma_{r}^{ext}}\Big{(}<g^{\mu \alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi_{V}, \nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>\\ +\frac{1}{2}\cdot(\nabla^{(\mathbf{m})}{}_{t}g^{t\alpha})\cdot<\nabla^{( \mathbf{m})}{}_{\alpha}\Phi_{V},\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>+(\nabla^{ (\mathbf{m})}{}_{j}g^{j\alpha})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi_{V}, \nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>\\ -\frac{1}{2}\cdot(\nabla^{(\mathbf{m})}{}_{t}g^{j\beta})\cdot<\nabla^{( \mathbf{m})}{}_{j}\Phi_{V},\nabla^{(\mathbf{m})}{}_{\beta}\Phi_{V}>\Big{)} \cdot d\tau d^{n}x\;.\end{array} \tag{2.16}\]
Proof.: We compute
\[\begin{array}{l}\nabla^{(\mathbf{m})}{}^{\mu}T_{\mu t}^{(\mathbf{g})}\\ =&<g^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha} \Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+(\nabla^{(\mathbf{m})}{}_{\mu}g^{\mu \alpha})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t }\Phi>\\ -\frac{1}{2}m^{\mu}{}_{t}\cdot(\nabla^{(\mathbf{m})}{}_{\mu}g^{ \alpha\beta})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{ }_{\beta}\Phi>\;.\end{array}\]
Since \(m^{\mu}{}_{t}=m^{\mu\alpha}\cdot m_{\alpha t}=m^{\mu t}\cdot m_{tt}=-m^{\mu t}\), we compute further by decomposing the sum in wave coordinates,
\[\begin{array}{l}\nabla^{(\mathbf{m})}{}^{\mu}T_{\mu t}^{(\mathbf{g})}\\ =&<g^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha} \Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\\ +(\nabla^{(\mathbf{m})}{}_{t}g^{t\alpha})\cdot<\nabla^{(\mathbf{m})}{}_{ \alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+(\nabla^{(\mathbf{m})}{}_{j}g^{j \alpha})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t }\Phi>\\ +\frac{1}{2}m^{tt}\cdot(\nabla^{(\mathbf{m})}{}_{t}g^{\alpha\beta})\cdot< \nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi>\\ =&<g^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha} \Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\\ +(\nabla^{(\mathbf{m})}{}_{t}g^{t\alpha})\cdot<\nabla^{(\mathbf{m})}{}_{ \alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+(\nabla^{(\mathbf{m})}{}_{j}g^{j \alpha})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t }\Phi>\\ -\frac{1}{2}\cdot(\nabla^{(\mathbf{m})}{}_{t}g^{t\beta})\cdot< \nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi>-\frac{1} {2}\cdot(\nabla^{(\mathbf{m})}{}_{t}g^{j\beta})\cdot<\nabla^{(\mathbf{m})}{}_{ j}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi>\\ =&<g^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha} \Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\\ +\frac{1}{2}\cdot(\nabla^{(\mathbf{m})}{}_{t}g^{t\alpha})\cdot< \nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+(\nabla^{ (\mathbf{m})}{}_{j}g^{j\alpha})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi, \nabla^{(\mathbf{m})}{}_{t}\Phi>\\ -\frac{1}{2}\cdot(\nabla^{(\mathbf{m})}{}_{t}g^{j\beta})\cdot< \nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi>\;.\end{array} \tag{2.18}\]
Injecting these into Lemma 2.2, we obtain the desired result.
### The weighted energy estimate in the exterior for \(g^{\mu\nu}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\nu}\Phi\)
**Definition 2.11**.: We define
\[q:=r-t\;, \tag{2.19}\]
where \(t\) and \(r\) are defined using the wave coordinates as explained in [17].
**Definition 2.12**.: We define
\[w(q):=\begin{cases}(1+|q|)^{1+2\gamma}\quad\text{when}\quad\;q>0,\\ 1\quad\text{when}\quad\;q<0.\end{cases}\]
for some \(\gamma>0\,\).
**Definition 2.13**.: We define \(\widehat{w}\) by
\[\widehat{w}(q) := \begin{cases}(1+|q|)^{1+2\gamma}\quad\text{when}\quad\;q>0,\\ (1+|q|)^{2\mu}\quad\text{when}\quad\;q<0,\end{cases}\] \[= \begin{cases}(1+q)^{1+2\gamma}\quad\text{when}\quad\;q>0,\\ (1-q)^{2\mu}\quad\text{when}\quad\;q<0,\end{cases}\]
for \(\gamma>0\) and \(\mu<0\,\). Note that the condition \(\mu\neq 0\) is imposed so that for \(q<0\) the derivative \(\frac{\partial\widehat{w}}{\partial q}\) is non-vanishing, which gives control on certain tangential derivatives (see Corollary 2.3); this control is needed for the case \(n=3\), which we will treat in a follow-up paper, but we will not use it here in the case \(n=4\). This being said, note that the definition of \(\widehat{w}\) is also such that for \(\gamma\neq-\frac{1}{2}\) and \(\mu\neq 0\) (which is assumed here), we have
\[\widehat{w}^{\prime}(q)\sim\frac{\widehat{w}(q)}{(1+|q|)}\;,\]
(see Lemma 2.4); this determines the kind of control that we will have on the tangential derivatives, a control that we will use in the next paper for space dimension \(n=3\).
_Remark 2.2_.: We take \(\mu<0\) (instead of \(\mu>0\)) because we want the derivative \(\frac{\partial\widehat{w}}{\partial q}>0\,\); as we will see, this is what we need in order to obtain an energy estimate on the fields (see Corollary 2.3). In other words, \(\mu<0\) is a necessary condition to ensure that \(\widehat{w}^{\prime}(q)\) enters with the right sign in the energy estimate (see (3.18)).
**Definition 2.14**.: We define \(\widetilde{w}\) by
\[\widetilde{w}(q) := \widehat{w}(q)+w(q)\] \[:= \begin{cases}2(1+|q|)^{1+2\gamma}\quad\text{when}\quad\;q>0,\\ 1+(1+|q|)^{2\mu}\quad\text{when}\quad\;q<0.\end{cases}\]
Note that the definition of \(\widetilde{w}\) is constructed so that Lemma 2.3 holds, which we need in order to obtain (3.18).
**Lemma 2.3**.: _We have_
\[\widetilde{w}^{\prime} \sim \widehat{w}^{\prime}\;.\]
_Furthermore, for \(\mu<0\), we have_
\[\widetilde{w}(q) \sim w(q)\;.\]
Proof.: We compute the derivative with respect to \(q\),
\[\widetilde{w}^{\prime} = \widehat{w}^{\prime}(q)+w^{\prime}(q)\] \[= \begin{cases}2\cdot\widehat{w}^{\prime}(q)\quad\text{when}\quad q> 0,\\ \widehat{w}^{\prime}(q)\quad\text{when}\quad q<0.\end{cases}\]
Consequently,
\[\widetilde{w}^{\prime} \sim \widehat{w}^{\prime}\;.\]
Now, on one hand, since \(\widehat{w}\geq 0\), we have
\[\widetilde{w}(q) \geq w(q)\;.\]
On the other hand, since \(\mu<0\), we have
\[\widetilde{w}(q) = \begin{cases}2(1+|q|)^{1+2\gamma}\quad\text{when}\quad\;q>0,\\ 1+(1+|q|)^{2\mu}\quad\text{when}\quad\;q<0.\end{cases}\] \[\leq \begin{cases}2(1+|q|)^{1+2\gamma}\quad\text{when}\quad\;q>0,\\ 2\quad\text{when}\quad\;q<0.\end{cases}\] \[\leq 2w(q)\]
Thus
\[\widetilde{w}(q) \sim w(q)\;.\]
**Lemma 2.4**.: _Let \(\widehat{w}\) be defined as in Definition 2.13. We have, for \(\gamma\neq-\frac{1}{2}\) and \(\mu\neq 0\),_
\[\widehat{w}^{\prime}(q)\sim\frac{\widehat{w}(q)}{(1+|q|)}\;.\]
Proof.: We have
\[\widehat{w}(q) = \begin{cases}(1+q)^{1+2\gamma}\quad\text{when}\quad\;q>0,\\ (1-q)^{2\mu}\quad\text{when}\quad\;q<0.\end{cases}\]
We compute,
\[\widehat{w}^{\prime}(q) = \begin{cases}(1+2\gamma)(1+|q|)^{2\gamma}\quad\text{when}\quad\;q>0,\\ -2\mu(1+|q|)^{2\mu-1}\quad\text{when}\quad\;q<0.\end{cases}\] \[= \begin{cases}(1+2\gamma)\frac{\widehat{w}(q)}{(1+|q|)}\quad\text{when}\quad\;q>0,\\ -2\mu\frac{\widehat{w}(q)}{(1+|q|)}\quad\text{when}\quad\;q<0.\end{cases}\]
Thus,
\[\min\{(1+2\gamma),-2\mu\}\cdot\frac{\widehat{w}(q)}{(1+|q|)}\leq\widehat{w}^{ \prime}(q)\leq\max\{(1+2\gamma),-2\mu\}\cdot\frac{\widehat{w}(q)}{(1+|q|)}\;,\]
and hence, for \(\min\{(1+2\gamma),-2\mu\}\neq 0\) and \(\max\{(1+2\gamma),-2\mu\}\neq 0\,,\)
\[\widehat{w}^{\prime}(q)\sim\frac{\widehat{w}(q)}{(1+|q|)}\;.\]
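As an illustration (the parameter values here are chosen only for concreteness, subject to \(\gamma>0\) and \(\mu<0\)): taking \(\gamma=\frac{1}{2}\) and \(\mu=-\frac{1}{2}\) gives \(\widehat{w}(q)=(1+q)^{2}\) for \(q>0\) and \(\widehat{w}(q)=(1-q)^{-1}\) for \(q<0\), so that

\[\widehat{w}^{\prime}(q)=\begin{cases}2(1+q)=\frac{2\,\widehat{w}(q)}{(1+|q|)}\quad\text{when}\quad\;q>0,\\ (1-q)^{-2}=\frac{\widehat{w}(q)}{(1+|q|)}\quad\text{when}\quad\;q<0,\end{cases}\]

and indeed \(\frac{\widehat{w}(q)}{(1+|q|)}\leq\widehat{w}^{\prime}(q)\leq\frac{2\,\widehat{w}(q)}{(1+|q|)}\).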
We now establish a conservation law with the weight \(\widetilde{w}\).
**Lemma 2.5**.: _We have_
\[\int_{N_{t_{1}}^{t_{2}}}\big{(}T_{\hat{L}t}^{(\mathbf{g})}\big{)}\cdot\widetilde{w}(q)\cdot dv_{N}^{(\mathbf{m})}+\int_{\Sigma_{t_{2}}^{ext}}T_{tt}^{(\mathbf{g})}\cdot\widetilde{w}(q)\cdot d^{n}x\] \[= \int_{\Sigma_{t_{1}}^{ext}}T_{tt}^{(\mathbf{g})}\cdot\widetilde{w}(q)\cdot d^{n}x-\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\Big{(}T_{tt}^{(\mathbf{g})}+T_{rt}^{(\mathbf{g})}\Big{)}\cdot d\tau\cdot\widetilde{w}^{\prime}(q)\cdot d^{n}x\] \[-\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\Big{(}\nabla^{(\mathbf{m})}{}^{\mu}T_{\mu t}^{(\mathbf{g})}\Big{)}\cdot d\tau\cdot\widetilde{w}(q)\cdot d^{n}x\;.\]
Proof.: Consider again the Minkowski metric \(m\) in the coordinates \(\{t,x^{1},\ldots,x^{n}\}\); instead of contracting \(T^{(\mathbf{g})}_{\mu\nu}\) in the second component \(\nu\) with \(\frac{\partial}{\partial t}\), we contract with the weighted vector
\[X=\widetilde{w}(q)\frac{\partial}{\partial t}\;. \tag{2.21}\]
We then have, in the coordinates \(\mu,\nu\in\{t,x^{1},\ldots,x^{n}\}\),

\[\big{(}\nabla^{(\mathbf{m})}{}^{\mu}\big{(}\widetilde{w}(q)\frac{\partial}{\partial t}\big{)}{}^{\nu}\big{)} = \widetilde{w}^{\prime}(q)\cdot\nabla^{(\mathbf{m})}{}^{\mu}(q)\cdot\big{(}\frac{\partial}{\partial t}\big{)}{}^{\nu}+\widetilde{w}(q)\cdot\nabla^{(\mathbf{m})}{}^{\mu}\big{(}\frac{\partial}{\partial t}\big{)}{}^{\nu}\] \[= \widetilde{w}^{\prime}(q)\cdot m^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\alpha}(q)\cdot\big{(}\frac{\partial}{\partial t}\big{)}{}^{\nu}\;.\]
For \(\mu=t=x^{0}\), we have since \(q=r-t\),
\[m^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\alpha}(q)=m^{tt}\nabla^{(\mathbf{m})}{ }_{t}(q)=-(-1)=1\;.\]
For \(\mu=x^{j}\), we have since \(q=r-t\),
\[m^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\alpha}(q)=m^{jj}\nabla^{(\mathbf{m})}{ }_{j}(q)=\frac{x^{j}}{r}\;.\]
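Here we have only used that \(r=\big{(}\sum_{i=1}^{n}(x^{i})^{2}\big{)}^{1/2}\) in the wave coordinates, so that

\[\nabla^{(\mathbf{m})}{}_{j}(q)=\partial_{j}(r-t)=\frac{\partial}{\partial x^{j}}\Big{(}\sum_{i=1}^{n}(x^{i})^{2}\Big{)}^{1/2}=\frac{x^{j}}{r}\;.\]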
Thus,
\[\big{(}\nabla^{(\mathbf{m})}{}^{\mu}\big{(}\widetilde{w}(q)\frac {\partial}{\partial t}\big{)}{}^{\nu}\big{)}\cdot T_{\mu\nu}^{(\mathbf{g})} = \Big{(}\widetilde{w}^{\prime}(q)\cdot m^{\mu\alpha}\nabla^{( \mathbf{m})}{}_{\alpha}(q)\cdot\big{(}\frac{\partial}{\partial t}\big{)}{}^{ \nu}\Big{)}\cdot T_{\mu\nu}^{(\mathbf{g})} \tag{2.22}\] \[= \widetilde{w}^{\prime}(q)\cdot T_{tt}^{(\mathbf{g})}+\widetilde {w}^{\prime}(q)\cdot\frac{x^{j}}{r}\cdot T_{jt}^{(\mathbf{g})}\] \[= \widetilde{w}^{\prime}(q)\cdot\Big{(}T_{tt}^{(\mathbf{g})}+T_{rt }^{(\mathbf{g})}\Big{)}\;.\]
We still have
\[n_{\Sigma}^{(\mathbf{m}),\nu} = \big{(}\frac{\partial}{\partial t}\big{)}^{\nu}\,\] \[dv_{\Sigma}^{(\mathbf{m})} = dx^{1}\ldots dx^{n}:=d^{n}x\] \[n_{N}^{(\mathbf{m}),\nu} = \hat{L}^{\nu}\.\]
Consequently, the conservation law with the weighted vector \(\widetilde{w}(q)\frac{\partial}{\partial t}\) contracted with the second component of the non-symmetric tensor \(T_{\mu\nu}^{(\mathbf{g})}\) gives the following equality
\[\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\Big{(}T_{tt}^{(\mathbf{g})}+T_{rt}^{(\mathbf{g})}\Big{)}\cdot d\tau\cdot\widetilde{w}^{\prime}(q)\cdot d^{n}x+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\Big{(}\nabla^{(\mathbf{m})\,\mu}T_{\mu t}^{(\mathbf{g})}\Big{)}\cdot d\tau\cdot\widetilde{w}(q)\cdot d^{n}x\] \[= \int_{\Sigma_{t_{1}}^{ext}}T_{tt}^{(\mathbf{g})}\cdot\widetilde{w}(q)\cdot d^{n}x-\int_{\Sigma_{t_{2}}^{ext}}T_{tt}^{(\mathbf{g})}\cdot\widetilde{w}(q)\cdot d^{n}x\] \[-\int_{N_{t_{1}}^{t_{2}}}\big{(}T_{\hat{L}t}^{(\mathbf{g})}\big{)}\cdot\widetilde{w}(q)\cdot dv_{N}^{(\mathbf{m})}\;.\]
**Corollary 2.2**.: _We have_
\[\int_{N_{t_{1}}^{t_{2}}}\big{(}T_{\hat{L}t}^{(\mathbf{g})}\big{)} \cdot\widetilde{w}(q)\cdot dv_{N}^{(\mathbf{m})}\] \[+\int_{\Sigma_{t_{2}}^{ext}}\big{(}-\frac{1}{2}(m^{tt}+H^{tt})< \nabla^{(\mathbf{m})}{}_{t}\Phi_{V},\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>\] \[\qquad\qquad+\frac{1}{2}(m^{ji}+H^{ji})<\nabla^{(\mathbf{m})}{}_ {j}\Phi_{V},\nabla^{(\mathbf{m})}{}_{i}\Phi_{V}>\big{)}\cdot\widetilde{w}(q) \cdot d^{n}x\] \[= \int_{\Sigma_{t_{1}}^{ext}}\big{(}-\frac{1}{2}(m^{tt}+H^{tt})< \nabla^{(\mathbf{m})}{}_{t}\Phi_{V},\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>\] \[\qquad\qquad+\frac{1}{2}(m^{ji}+H^{ji})<\nabla^{(\mathbf{m})}{}_ {j}\Phi_{V},\nabla^{(\mathbf{m})}{}_{i}\Phi_{V}>\big{)}\cdot\widetilde{w}(q) \cdot d^{n}x\] \[-\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\Big{(}T_{tt}^{( \mathbf{g})}+T_{rt}^{(\mathbf{g})}\Big{)}\cdot d\tau\cdot\widetilde{w}^{\prime }(q)d^{n}x\] \[-\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\Big{(}<g^{\mu \alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi_{V}, \nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>+(\nabla^{(\mathbf{m})}{}_{\mu}H^{\mu \alpha})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi_{V},\nabla^{(\mathbf{m})}{ }_{t}\Phi_{V}>\] \[-\frac{1}{2}m^{\mu}{}_{t}\cdot(\nabla^{(\mathbf{m})}{}_{\mu}H^{ \alpha\beta})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi_{V},\nabla^{(\mathbf{m })}{}_{\beta}\Phi_{V}>\Big{)}\cdot d\tau\cdot\widetilde{w}(q)d^{n}x\.\]
Proof.: We want to evaluate the terms in Lemma 2.5. We have shown, based on (2.15), that
\[T_{tt}^{(\mathbf{g})} = -\frac{1}{2}g^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}g^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi, \nabla^{(\mathbf{m})}{}_{i}\Phi>\] \[= -\frac{1}{2}(m^{tt}+H^{tt})<\nabla^{(\mathbf{m})}{}_{t}\Phi, \nabla^{(\mathbf{m})}{}_{t}\Phi>+\frac{1}{2}(m^{ji}+H^{ji})<\nabla^{(\mathbf{m })}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{i}\Phi>\,\]
and based on (2.17), that
\[\nabla^{(\mathbf{m})}{}^{\mu}T^{(\mathbf{g})}_{\mu t}\] \[= <g^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_ {\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+(\nabla^{(\mathbf{m})}{}_{\mu}H ^{\mu\alpha})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{ }_{t}\Phi>\] \[-\frac{1}{2}m^{\mu}{}_{t}\cdot(\nabla^{(\mathbf{m})}{}_{\mu}H^{ \alpha\beta})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{ }_{\beta}\Phi>\.\]
Injecting these into Lemma 2.5, we get the stated result.
Now, we would like to evaluate, in Corollary 2.2, the term with the weight \(\widetilde{w}^{\prime}(q)\).
**Lemma 2.6**.: _We have_
\[T^{(\mathbf{g})}_{tt}+T^{(\mathbf{g})}_{rt}\] \[= \frac{1}{2}\Big{(}|\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}+\nabla^{(\mathbf{m})}{}_{r}\Phi_{V}|^{2}+\delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{(\mathbf{m})}{}_{r})\Phi_{V}|^{2}\Big{)}\] \[-\frac{1}{2}H^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi_{V},\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>+\frac{1}{2}H^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi_{V},\nabla^{(\mathbf{m})}{}_{i}\Phi_{V}>\] \[+H^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi_{V},\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>+H^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi_{V},\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>\;.\]
Proof.: We compute
\[T^{(\mathbf{g})}_{rt}=m_{rr}\cdot T^{(\mathbf{g})^{r}}{}_{t}=T^{(\mathbf{g})^ {r}}{}_{t}\.\]
\[T^{(\mathbf{g})^{r}}{}_{t} = g^{r\alpha}<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>-\frac{1}{2}m^{r}{}_{t}\cdot g^{\alpha\beta}<\nabla^{( \mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi> \tag{2.24}\] \[= g^{r\alpha}<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>\.\]
Thus,
\[T^{(\mathbf{g})}_{rt}=g^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{ m})}{}_{t}\Phi>+g^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t} \Phi>. \tag{2.25}\]
Consequently,
\[T_{tt}^{(\mathbf{g})}+T_{rt}^{(\mathbf{g})}\] \[= -\frac{1}{2}g^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{ m})}{}_{t}\Phi>+\frac{1}{2}g^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{( \mathbf{m})}{}_{i}\Phi>\] \[+g^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t }\Phi>+g^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\] \[= -\frac{1}{2}m^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}m^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi, \nabla^{(\mathbf{m})}{}_{i}\Phi>\] \[+m^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{ t}\Phi>+m^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\] \[-\frac{1}{2}H^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}H^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi, \nabla^{(\mathbf{m})}{}_{i}\Phi>\] \[+H^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{ t}\Phi>+H^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\.\]
Thus,
\[T_{tt}^{(\mathbf{g})}+T_{rt}^{(\mathbf{g})}\] \[= \frac{1}{2}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{ }_{t}\Phi>+\frac{1}{2}\delta^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{( \mathbf{m})}{}_{i}\Phi>+m^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>\] \[-\frac{1}{2}H^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}H^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi, \nabla^{(\mathbf{m})}{}_{i}\Phi>\] \[+H^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{ t}\Phi>+H^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\.\]
Yet,
\[m^{rj}=m^{rr}m^{ij}m_{ri}=m_{rj}=\frac{x^{j}}{r}\]
Therefore,
\[m^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_ {t}\Phi> = \frac{x^{j}}{r}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{ m})}{}_{t}\Phi>=<\nabla^{(\mathbf{m})}{}_{r}\Phi,\nabla^{(\mathbf{m})}{}_{t} \Phi>\.\]
Therefore, we have
\[T_{tt}^{(\mathbf{g})}+T_{rt}^{(\mathbf{g})} = \frac{1}{2}|\nabla^{(\mathbf{m})}\Phi|^{2}+<\nabla^{(\mathbf{m}) }{}_{r}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\] \[-\frac{1}{2}H^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}H^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi, \nabla^{(\mathbf{m})}{}_{i}\Phi>\] \[+H^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t }\Phi>+H^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\.\]
We have shown that for a scalar, \(\delta^{ij}<\partial_{i}\Phi,\partial_{j}\Phi>=\delta^{ij}<(\partial_{i}-\frac{x_{i}}{r}\partial_{r})\Phi,(\partial_{j}-\frac{x_{j}}{r}\partial_{r})\Phi>+<\partial_{r}\Phi,\partial_{r}\Phi>\). Since \(\nabla^{(\mathbf{m})}\) is the Minkowski covariant derivative, computing the trace with respect to the wave coordinates \(\{t,x^{1},\dots,x^{n}\}\), we get
\[\delta^{ij}<\nabla^{(\mathbf{m})}{}_{i}\Phi,\nabla^{(\mathbf{m})} {}_{j}\Phi>\] \[= \delta^{ij}<(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{( \mathbf{m})}{}_{r})\Phi,(\nabla^{(\mathbf{m})}{}_{j}-\frac{x_{j}}{r}\nabla^{( \mathbf{m})}{}_{r})\Phi>+<\nabla^{(\mathbf{m})}{}_{r}\Phi,\nabla^{(\mathbf{m}) }{}_{r}\Phi>\.\]
Hence,
\[<\nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{}_{r}\Phi, \nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{}_{r}\Phi>+\delta^{ij}<( \nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{(\mathbf{m})}{}_{r})\Phi,( \nabla^{(\mathbf{m})}{}_{j}-\frac{x_{j}}{r}\nabla^{(\mathbf{m})}{}_{r})\Phi> \tag{2.28}\] \[= <\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi> +<\nabla^{(\mathbf{m})}{}_{r}\Phi,\nabla^{(\mathbf{m})}{}_{r}\Phi>+2<\nabla^{( \mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{r}\Phi>\] \[+\delta^{ij}<\nabla^{(\mathbf{m})}{}_{i}\Phi,\nabla^{(\mathbf{m} )}{}_{j}\Phi>-<\nabla^{(\mathbf{m})}{}_{r}\Phi,\nabla^{(\mathbf{m})}{}_{r}\Phi>\] \[= <\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi> +2<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{r}\Phi>+\delta^{ ij}<\nabla^{(\mathbf{m})}{}_{i}\Phi,\nabla^{(\mathbf{m})}{}_{j}\Phi>\] \[= |\nabla^{(\mathbf{m})}\Phi|^{2}+2<\nabla^{(\mathbf{m})}{}_{t} \Phi,\nabla^{(\mathbf{m})}{}_{r}\Phi>\.\]
Therefore,
\[T^{(\mathbf{g})}_{tt}+T^{(\mathbf{g})}_{rt}\] \[= \frac{1}{2}\Big{(}<\nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{( \mathbf{m})}{}_{r}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{ }_{r}\Phi>\] \[+\delta^{ij}<(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{ (\mathbf{m})}{}_{r})\Phi,(\nabla^{(\mathbf{m})}{}_{j}-\frac{x_{j}}{r}\nabla^{ (\mathbf{m})}{}_{r})\Phi>\Big{)}\] \[-\frac{1}{2}H^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}H^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi, \nabla^{(\mathbf{m})}{}_{i}\Phi>\] \[+H^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{ t}\Phi>+H^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\.\]
We recall the following lemma, a fact that we proved in [17].
**Lemma 2.7**.: _Assume that the perturbation of the Minkowski metric is such that \(H^{\mu\nu}=g^{\mu\nu}-m^{\mu\nu}\) is bounded by a constant \(C<\frac{1}{n}\), where \(n\) is the space dimension, i.e._
\[|H|\leq C<\frac{1}{n}. \tag{2.29}\]
_Then we have_
\[|\nabla^{(\mathbf{m})}\Phi_{V}|^{2}\sim-(m^{tt}+H^{tt})<\nabla^{(\mathbf{m})}{ }_{t}\Phi_{V},\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>+(m^{ij}+H^{ij})<\nabla^{( \mathbf{m})}{}_{i}\Phi_{V},\nabla^{(\mathbf{m})}{}_{j}\Phi_{V}>\,\]
_where the scalar product of the partial derivatives is as in Definition 2.7._
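Although we refer to [17] for the proof, a brief sketch of the mechanism behind the threshold \(\frac{1}{n}\) is as follows (with \(|H|\) understood as a bound on the components): by the Cauchy-Schwarz inequality, \(|H^{ij}\xi_{i}\xi_{j}|\leq|H|\big{(}\sum_{i}|\xi_{i}|\big{)}^{2}\leq n\cdot|H|\cdot|\xi|^{2}\), so that

\[-(m^{tt}+H^{tt})\geq 1-|H|>0\quad\text{and}\quad(m^{ij}+H^{ij})\,\xi_{i}\xi_{j}\geq(1-n\cdot|H|)\cdot|\xi|^{2}>0\;,\]

whenever \(|H|\leq C<\frac{1}{n}\).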
**Lemma 2.8**.: _For \(H^{\mu\nu}=g^{\mu\nu}-m^{\mu\nu}\) satisfying_
\[|H|<\frac{1}{n}\, \tag{2.30}\]
_where \(n\) is the space dimension, and for \(\Phi\) decaying sufficiently fast at spatial infinity, we have_
\[\int_{\Sigma_{t_{2}}^{ext}}|\nabla^{(\mathbf{m})}\Phi_{V}|^{2}\ \cdot \widetilde{w}(q)\cdot d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{r}^{ext}}\Big{(}\frac{1}{2} \Big{(}|\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}+\nabla^{(\mathbf{m})}{}_{r}\Phi_{V }|^{2}+\delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{( \mathbf{m})}{}_{r})\Phi_{V}|^{2}\Big{)}\cdot d\tau\cdot\widetilde{w}^{\prime}(q )d^{n}x\] \[\lesssim \int_{\Sigma_{t_{1}}^{ext}}|\nabla^{(\mathbf{m})}\Phi_{V}|^{2}\ \cdot \widetilde{w}(q)\cdot d^{n}x+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{r}^{ext}}|H|\ \cdot|\nabla^{(\mathbf{m})}\Phi_{V}|^{2}\cdot \widetilde{w}^{\prime}(q)\cdot d^{n}x\cdot d\tau\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{r}^{ext}}\Big{(}|\ g^{\mu \alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi_{V} |\cdot|\nabla^{(\mathbf{m})}\Phi_{V}|+|\nabla^{(\mathbf{m})}H|\cdot|\nabla^{( \mathbf{m})}\Phi_{V}|^{2}\Big{)}\cdot\widetilde{w}(q)\cdot d^{n}x\cdot d\tau\;.\]
Proof.: By injecting the expression obtained in Lemma 2.6 and the expression (2.18), obtained using (2.17), into Corollary 2.2, we obtain the following conservation law
\[\int_{\Sigma_{t_{2}}^{ext}}\Big{(}-\frac{1}{2}(m^{tt}+H^{tt})< \nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+\frac{1}{2}( m^{ji}+H^{ji})<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{i}\Phi> \Big{)}\cdot\widetilde{w}(q)\cdot d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{r}^{ext}}\Big{(}\frac{1}{2} \Big{(}<\nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{}_{r}\Phi, \nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{}_{r}\Phi>\] \[+\delta^{ij}<(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{ (\mathbf{m})}{}_{r})\Phi,(\nabla^{(\mathbf{m})}{}_{j}-\frac{x_{j}}{r}\nabla^{ (\mathbf{m})}{}_{r})\Phi>\Big{)}\] \[-\frac{1}{2}H^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}H^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi, \nabla^{(\mathbf{m})}{}_{i}\Phi>\] \[+H^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t }\Phi>+H^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi >\Big{)}\cdot d\tau\cdot\widetilde{w}^{\prime}(q)d^{n}x\] \[= \int_{\Sigma_{t_{1}}^{ext}}\Big{(}-\frac{1}{2}(m^{tt}+H^{tt})< \nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+\frac{1}{2}( m^{ji}+H^{ji})<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{i}\Phi> \Big{)}\cdot\widetilde{w}(q)\cdot d^{n}x\] \[-\int_{t_{1}}^{t_{2}}\int_{\Sigma_{r}^{ext}}\Big{(}<g^{\mu\alpha} \nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}\cdot(\nabla^{(\mathbf{m})}{}_{t}g^{t \alpha})\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\] \[+(\nabla^{(\mathbf{m})}{}_{j}g^{j\alpha})\cdot<\nabla^{(\mathbf{m })}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>-\frac{1}{2}\cdot(\nabla^{( \mathbf{m})}{}_{t}g^{j\beta})\cdot<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{( \mathbf{m})}{}_{\beta}\Phi>\Big{)}\cdot d\tau\cdot\widetilde{w}(q)d^{n}x\] \[-\int_{N_{t_{1}}^{t_{2}}}\big{(}T_{Lt}^{(\mathbf{g})}\big{)}\cdot \widetilde{w}(q)\cdot dv_{N}^{(\mathbf{m})}\;.\]
We get,
\[\int_{\Sigma^{ext}_{t_{2}}}\Big{(}-\frac{1}{2}(m^{tt}+H^{tt})<\nabla^ {(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+\frac{1}{2}(m^{ji}+H^{ ji})<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{i}\Phi>\Big{)} \cdot\widetilde{w}(q)\cdot d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}\Big{(}\frac{1}{2} \Big{(}|\nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{}_{r}\Phi|^{2}+ \delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{(\mathbf{m})}{ }_{r})\Phi|^{2}\Big{)}\cdot d\tau\cdot\widetilde{w}^{\prime}(q)d^{n}x\] \[= \int_{\Sigma^{ext}_{t_{1}}}\Big{(}-\frac{1}{2}(m^{tt}+H^{tt})< \nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+\frac{1}{2}(m ^{ji}+H^{ji})<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{i}\Phi> \Big{)}\cdot\widetilde{w}(q)\cdot d^{n}x\] \[-\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}\Big{(}-\frac{1}{2}H ^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+\frac{ 1}{2}H^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{i}\Phi>\] \[+H^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_ {t}\Phi>+H^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t} \Phi>\Big{)}\cdot d\tau\cdot\widetilde{w}^{\prime}(q)d^{n}x\] \[-\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}\Big{(}<g^{\mu\alpha} \nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}\cdot(\nabla^{(\mathbf{m})}{}_{t}g^{t\alpha })\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\] \[+(\nabla^{(\mathbf{m})}{}_{j}g^{j\alpha})\cdot<\nabla^{(\mathbf{ m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>-\frac{1}{2}\cdot(\nabla^{( \mathbf{m})}{}_{t}g^{j\beta})\cdot<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{( \mathbf{m})}{}_{\beta}\Phi>\Big{)}\cdot d\tau\cdot\widetilde{w}(q)d^{n}x\] \[-\int_{N^{t_{2}}_{t_{1}}}\big{(}T^{(\mathbf{g})}_{\hat{L}t}\big{)} \cdot\widetilde{w}(q)\cdot dv^{(\mathbf{m})}_{N}\;.\]
Based on Lemma 2.7, we have, for \(|H|\leq C<\frac{1}{n}\), the following equivalence:
\[-\frac{1}{2}(m^{tt}+H^{tt})<\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}, \nabla^{(\mathbf{m})}{}_{t}\Phi_{V}>+\frac{1}{2}(m^{ji}+H^{ji})<\nabla^{( \mathbf{m})}{}_{j}\Phi_{V},\nabla^{(\mathbf{m})}{}_{i}\Phi_{V}> \tag{2.32}\] \[\sim |\nabla^{(\mathbf{m})}\Phi_{V}|^{2}\geq 0\;.\]
By choosing the vectors \(U\,,\,V\) to be wave coordinate vector fields, and by summing over all of them, we get the following energy estimate:
\[\int_{\Sigma^{ext}_{t_{2}}}|\nabla^{(\mathbf{m})}\Phi|^{2}\;\cdot w (q)\cdot d^{n}x+\int_{N^{t_{2}}_{t_{1}}}\big{(}T^{(\mathbf{g})}_{\hat{L}t} \big{)}\cdot\widetilde{w}(q)\cdot dv^{(\mathbf{m})}_{N}\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}\Big{(}\frac{1}{2} \Big{(}|\nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{}_{r}\Phi|^{2}+ \delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{(\mathbf{m})} {}_{r})\Phi|^{2}\Big{)}\cdot d\tau\cdot\widetilde{w}^{\prime}(q)d^{n}x\] \[\lesssim \int_{\Sigma^{ext}_{t_{1}}}|\nabla^{(\mathbf{m})}\Phi|^{2}\; \cdot\widetilde{w}(q)\cdot d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}|\Big{(}-\frac{1}{2}H ^{tt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>+\frac{1} {2}H^{ji}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{i}\Phi>\] \[+H^{rt}<\nabla^{(\mathbf{m})}{}_{t}\Phi,\nabla^{(\mathbf{m})}{}_{t }\Phi>+H^{rj}<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi> \Big{)}|\cdot d\tau\cdot\widetilde{w}^{\prime}(q)d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}|\Big{(}<g^{\mu\alpha} \nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{( \mathbf{m})}{}_{t}\Phi>+\frac{1}{2}\cdot(\nabla^{(\mathbf{m})}{}_{t}H^{t\alpha })\cdot<\nabla^{(\mathbf{m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>\] \[+(\nabla^{(\mathbf{m})}{}_{j}H^{j\alpha})\cdot<\nabla^{(\mathbf{ m})}{}_{\alpha}\Phi,\nabla^{(\mathbf{m})}{}_{t}\Phi>-\frac{1}{2}\cdot(\nabla^{(\mathbf{m})}{}_{t}H^{j \beta})\cdot<\nabla^{(\mathbf{m})}{}_{j}\Phi,\nabla^{(\mathbf{m})}{}_{\beta}\Phi> \Big{)}|\cdot d\tau\cdot\widetilde{w}(q)d^{n}x\;.\]
Using the fact that, by construction, \(T^{(\mathbf{g})}_{\hat{L}t}\geq 0\) (see Definition 2.9 and also Definition 2.4), we then get the result.
**Corollary 2.3**.: _For_
\[|H|<\frac{1}{n}\;, \tag{2.33}\]
_where \(n\) is the space dimension, and for \(\Phi\) decaying sufficiently fast at spatial infinity, we have_
\[\int_{\Sigma^{ext}_{t_{2}}}|\nabla^{(\mathbf{m})}\Phi_{V}|^{2}\; \cdot w(q)\cdot d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{\tau}}\Big{(}\frac{1}{2} \Big{(}|\nabla^{(\mathbf{m})}{}_{t}\Phi_{V}+\nabla^{(\mathbf{m})}{}_{r}\Phi_{V }|^{2}+\delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{( \mathbf{m})}{}_{r})\Phi_{V}|^{2}\Big{)}\cdot d\tau\cdot\widehat{w}^{\prime}(q)d ^{n}x\] \[\lesssim \int_{\Sigma^{ext}_{t_{1}}}|\nabla^{(\mathbf{m})}\Phi_{V}|^{2}\; \cdot w(q)\cdot d^{n}x+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{\tau}}|H|\;\cdot |\nabla^{(\mathbf{m})}\Phi_{V}|^{2}\cdot\widehat{w}^{\prime}(q)\cdot d^{n}x \cdot d\tau\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{\tau}}\Big{(}|\;g^{\mu \alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi_{V}| \cdot|\nabla^{(\mathbf{m})}\Phi_{V}|+|\nabla^{(\mathbf{m})}H|\cdot|\nabla^{( \mathbf{m})}\Phi_{V}|^{2}\Big{)}\cdot w(q)\cdot d^{n}x\cdot d\tau\;.\]
Proof.: Using Lemma 2.3 and injecting into Lemma 2.8, we get the stated result.
## 3. Ingredients of the proof of the exterior stability of the Minkowski space-time for \(n\geq 4\)
### The Minkowski vector fields
First, we refer the reader to [17] for more details. Let
\[x_{\beta} = m_{\mu\beta}x^{\mu}\;,\] \[Z_{\alpha\beta} = x_{\beta}\partial_{\alpha}-x_{\alpha}\partial_{\beta}\;,\] \[S = t\partial_{t}+\sum_{i=1}^{n}x^{i}\partial_{i}\;.\]
The Minkowski vector fields are the vectors of the following set
\[\mathcal{Z}:=\left\{Z_{\alpha\beta}\;,\;S\;,\;\partial_{\alpha}\;|\;\alpha\;, \;\beta\in\{0,\ldots,n\}\right\}\;. \tag{3.1}\]
Vectors belonging to \(\mathcal{Z}\) will be denoted by \(Z\;\).
**Definition 3.1**.: We define
\[Z^{I}:=Z^{\iota_{1}}\ldots Z^{\iota_{k}}\quad\text{for}\quad I=(\iota_{1}, \ldots,\iota_{k}), \tag{3.2}\]
where \(\iota_{i}\) is an \(\frac{(n^{2}+3n+4)}{2}\)-dimensional integer index, with \(|\iota_{i}|=1\), and each \(Z^{\iota_{i}}\) representing a vector field from the family \(\mathcal{Z}\).
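As a quick check of this dimension count: the family \(\mathcal{Z}\) consists of the \(\binom{n+1}{2}=\frac{n(n+1)}{2}\) rotations and boosts \(Z_{\alpha\beta}\), the scaling \(S\), and the \(n+1\) translations \(\partial_{\alpha}\), so that

\[\frac{n(n+1)}{2}+1+(n+1)=\frac{n^{2}+3n+4}{2}\;.\]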
For a tensor \(T\), of arbitrary order, either a scalar or valued in the Lie algebra, we define the Lie derivative as
\[\mathcal{L}_{Z^{I}}T:=\mathcal{L}_{Z^{\iota_{1}}}\ldots\mathcal{L}_{Z^{\iota_{k }}}T\quad\text{for}\quad I=(\iota_{1},\ldots,\iota_{k}). \tag{3.3}\]
### The bootstrap argument in the exterior
We look at the case where \(n\geq 4\). As in [17], we define
\[h_{\mu\nu}=g_{\mu\nu}-m_{\mu\nu}\;. \tag{3.4}\]
We then define the weighted energy as follows; this time, it is restricted to the exterior region:
\[\mathcal{E}^{ext}_{|I|}(\tau):=\sum_{|J|\leq|I|}\big{(}\|w^{1/2}\nabla^{( \mathbf{m})}(\mathcal{L}_{Z^{J}}h(t,\cdot))\|_{L^{2}(\Sigma^{ext}_{t})}+\|w^{ 1/2}\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}A(t,\cdot))\|_{L^{2}(\Sigma^{ext}_ {t})}\big{)}\,,\]
where \(\Sigma^{ext}_{t}\) is defined in Definition 2.7. We will run a bootstrap argument on \(\mathcal{E}^{ext}_{|I|}\). We assume, for some \(N\leq|I|\) which we will determine later, that
\[\mathcal{E}^{ext}_{N}(t)\leq E^{ext}(N)\cdot\epsilon\cdot(1+t)^{\delta}, \tag{3.5}\]
where \(E^{ext}(N)\) is a constant that depends on \(N\) to be chosen later. In this paper, we choose
\[\delta = 0\;, \tag{3.6}\] \[\epsilon = 1\;. \tag{3.7}\]
We will then show that we can improve the constant \(E^{ext}(N)\) to \(\frac{1}{2}\cdot E^{ext}(N)\), and obtain
\[\mathcal{E}^{ext}_{N}(t)\leq\frac{E^{ext}(N)}{2}\cdot\epsilon\cdot(1+t)^{ \delta}\;.\]
We will now make use of the Klainerman-Sobolev inequality, which also holds in the exterior region \(\overline{C}\), the complement of \(C\) (the future causal domain of dependence, for the metric \(g\), of the compact \(K\subset\Sigma_{t_{1}}\)). We will then run the same bootstrap argument for the exterior energy.
### Weighted Klainerman-Sobolev inequality in the exterior
The weight is again defined to be \(w\) (see Definition 2.12), for some \(\gamma>0\). However, the integration for the \(L^{2}\) norm will be supported only on the exterior regions \(\Sigma^{ext}_{t}=\Sigma_{t}\cap\overline{C}\). Then, we have globally the following pointwise estimate in the exterior region \(\overline{C}\) for any smooth scalar function \(\phi\) vanishing at spatial infinity, i.e. \(\lim_{r\to\infty}\phi(t,x^{1},\dots,x^{n})=0\),
\[|\phi(t,x)|\cdot(1+t+|q|)^{\frac{(n-1)}{2}}\cdot\left[(1+|q|)\cdot w(q)\right]^{ 1/2}\leq C\sum_{|I|\leq\lfloor\frac{n}{2}\rfloor+1}\|\big{(}w(q)\big{)}^{1/2}Z ^{I}\phi(t,\cdot)\|_{L^{2}(\Sigma_{t}^{ext})}\;, \tag{3.8}\]
where the \(L^{2}(\Sigma_{t}^{ext})\) norm is taken on the \(\Sigma_{t}^{ext}\) slice.
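For instance, in the space dimension \(n=4\) considered below, where \(\frac{(n-1)}{2}=\frac{3}{2}\) and \(\lfloor\frac{n}{2}\rfloor+1=3\), the estimate (3.8) reads

\[|\phi(t,x)|\leq\frac{C}{(1+t+|q|)^{\frac{3}{2}}\cdot\left[(1+|q|)\cdot w(q)\right]^{1/2}}\sum_{|I|\leq 3}\|\big{(}w(q)\big{)}^{1/2}Z^{I}\phi(t,\cdot)\|_{L^{2}(\Sigma_{t}^{ext})}\;.\]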
### The a priori estimates
Based on the calculations that we showed in [17], we have the following exterior versions of the a priori estimates derived there, obtained by using the Klainerman-Sobolev inequality in the exterior with the energy defined in the exterior.
**Lemma 3.1**.: _Under the bootstrap assumption (3.5), taken for \(N=|I|+\lfloor\frac{n}{2}\rfloor+1\), if for all \(\mu,\nu\in\{t,x^{1},\dots,x^{n}\}\) the functions \(\partial_{\mu}\mathcal{L}_{Z^{I}}h_{\nu}^{1}\), \(\partial_{\mu}\mathcal{L}_{Z^{I}}A_{\nu}\in C_{0}^{\infty}(\mathbb{R}^{n})\), then we have_
\[|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{I}}A)(t,x)| \leq \begin{cases}C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1) \cdot\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{1+\gamma}},& \text{when }\quad q>0\;,\\ C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon}{(1+t +|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{1}{2}}}&\text{when }\quad q<0\;,\end{cases} \tag{3.9}\]
_and_
\[|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{I}}h)(t,x)| \leq \begin{cases}C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1) \cdot\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{1+\gamma}},& \text{when }\quad q>0\;,\\ C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon}{(1+t +|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{1}{2}}}&\text{when }\quad q<0\;.\end{cases}\]
**Lemma 3.2**.: _For \(k=|I|+\lfloor\frac{n}{2}\rfloor+1\), and for \(\gamma>0\) and with initial data such that_
\[|A(0,x)|+|h^{1}(0,x)| \lesssim \frac{\epsilon}{(1+r)^{\frac{(n-1)}{2}+\gamma-\delta}}\;,\]
_then, we have for all \(|I|\),_
\[|\mathcal{L}_{Z^{I}}A(t,x)|+|\mathcal{L}_{Z^{I}}h(t,x)|\] \[\leq \begin{cases}c(\gamma)\cdot C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\gamma}},&\text{when }\quad q>0\;,\\ C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon\cdot(1+|q|)^{\frac{1}{2}}}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}}&\text{when }\quad q<0\;.\end{cases}\]
_Remark 3.1_.: Under the bootstrap assumption, and therefore under the a priori estimates in Lemmas 3.1 and 3.2, we have that for any \(q_{0}\in\mathbb{R}\), there exists a point \((t,r=0)\) such that \(N\) (defined in Definition 2.5), whose tip is \((t,r=0)\), is contained in the region \(\{(t,x)\mid q:=r-t\leq q_{0}\}\).
**Definition 3.2**.: Based on Remark 3.1, the exterior region \(\overline{C}\) includes the region \(\{(t,x)\mid q:=r-t\geq q_{0}\}\). In what follows, we refer to \(q_{0}\) as such a choice of \(q\), from which we construct the exterior region as containing \(\{q\geq q_{0}\}\).
### The main exterior energy estimate
We now fix the space dimension to be \(n\geq 4\).
**Lemma 3.3**.: _For \(H^{\mu\nu}=g^{\mu\nu}-m^{\mu\nu}\) satisfying_
\[|H|\leq\frac{1}{n}\;, \tag{3.12}\]
_and for \(\Phi\) decaying sufficiently fast at spatial infinity, for \(\gamma>0\) and \(\mu<0\), we have_
\[\int_{\Sigma_{t_{2}}^{ext}}|\nabla^{(\mathbf{m})}\Phi|^{2}\;\cdot w (q)\cdot d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\Big{(}\frac{1}{2 }\Big{(}|\nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{}_{r}\Phi|^{2} +\delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{(\mathbf{m})} {}_{r})\Phi|^{2}\Big{)}\cdot\frac{\widehat{w}(q)}{(1+|q|)}\cdot d^{n}x\cdot d\tau\] \[\lesssim \int_{\Sigma_{t_{1}}^{ext}}|\nabla^{(\mathbf{m})}\Phi|^{2}\;\cdot w (q)\cdot d^{n}x\] \[+C(q_{0})\cdot c(\delta)\cdot c(\gamma)\cdot E^{ext}(\lfloor\frac {n}{2}\rfloor+1)\cdot\int_{t_{1}}^{t_{2}}\frac{\epsilon}{(1+t)^{\frac{n}{2}}} \;\cdot\int_{\Sigma_{\tau}^{ext}}|\nabla^{(\mathbf{m})}\Phi|^{2}\cdot\frac{w( q)}{(1+|q|)}\cdot d^{n}x\cdot d\tau\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\mid g^{\mu\alpha} \nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi|\cdot| \nabla^{(\mathbf{m})}\Phi|\cdot w(q)\cdot d^{n}x\cdot d\tau\;.\]
Proof.: Using the bootstrap assumption on \(H\) in the exterior, combined with the Klainerman-Sobolev inequality in the exterior region \(q\geq q_{0}\), we obtain in the exterior region \(\overline{C}\), as shown in [17], that
\[|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{I}}H)(t,x)| \leq \begin{cases}C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{1+\gamma}},&\text{when }\quad q>0,\\ C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{1}{2}}},&\text{when }\quad q<0,\end{cases}\]
and
\[|\mathcal{L}_{Z^{I}}H(t,x)| \leq \begin{cases}c(\delta)\cdot c(\gamma)\cdot C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}\cdot(1+|q|)^{\gamma}},&\text{when }\quad q>0,\\ C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon\cdot(1+|q|)^{\frac{1}{2}}}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}},&\text{when }\quad q<0.\end{cases}\]
Taking \(\delta=0\), we have \(\frac{(n-1)}{2}-\delta\geq\frac{3}{2}\) for \(n\geq 4\). Thus, for \(n\geq 4\), in \(\overline{C}\), we have
\[|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{I}}H)(t,x)| \leq C(q_{0})\cdot C(|I|)\cdot E^{ext}(|I|+\lfloor\frac{n}{2}\rfloor+ 1)\cdot\frac{\epsilon}{(1+t+|q|)^{\frac{3}{2}}\cdot(1+|q|)^{1+\gamma}}\;,\]
and
\[|\mathcal{L}_{Z^{I}}H(t,x)| \leq C(q_{0})\cdot c(\delta)\cdot c(\gamma)\cdot C(|I|)\cdot E^{ext}(|I| +\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon}{(1+t+|q|)^{\frac{3}{2}}\cdot(1 +|q|)^{\gamma}}\;.\]
Now, given the weighted energy estimate that we showed in Corollary 2.3, taking \(n\geq 4\), injecting the a priori estimates, and using that \(\gamma>0\), we get that for
\[|H|<\frac{1}{n}\;, \tag{3.17}\]
and for \(\Phi\) decaying sufficiently fast at spatial infinity, that
\[\int_{\Sigma^{ext}_{t_{2}}}|\nabla^{(\mathbf{m})}\Phi|^{2}\; \cdot w(q)\cdot d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}\Big{(}\frac{1}{2} \Big{(}|\nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{}_{r}\Phi|^{2}+ \delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{(\mathbf{m})}{ }_{r})\Phi|^{2}\Big{)}\cdot\widehat{w}^{\prime}(q)\cdot d^{n}x\cdot d\tau\] \[\lesssim \int_{\Sigma^{ext}_{t_{1}}}|\nabla^{(\mathbf{m})}\Phi|^{2}\; \cdot w(q)\cdot d^{n}x+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}|H|\;\cdot| \nabla^{(\mathbf{m})}\Phi|^{2}\cdot\widehat{w}^{\prime}(q)\cdot d^{n}x\cdot d\tau\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}\Big{(}\mid g^{\mu \alpha}\nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi| \cdot|\nabla^{(\mathbf{m})}\Phi|+|\nabla^{(\mathbf{m})}H|\cdot|\nabla^{( \mathbf{m})}\Phi|^{2}\Big{)}\cdot w(q)\cdot d^{n}x\cdot d\tau\] \[\lesssim \int_{\Sigma^{ext}_{t_{1}}}|\nabla^{(\mathbf{m})}\Phi|^{2}\; \cdot w(q)\cdot d^{n}x\] \[+C(q_{0})\cdot c(\delta)\cdot c(\gamma)\cdot E^{ext}(\lfloor \frac{n}{2}\rfloor+1)\cdot\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}\frac{ \epsilon}{(1+t)^{\frac{3}{2}}}\;\cdot|\nabla^{(\mathbf{m})}\Phi|^{2}\cdot \big{(}\widehat{w}^{\prime}(q)+\frac{w(q)}{(1+|q|)}\big{)}\cdot d^{n}x\cdot d\tau\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{r}}\mid g^{\mu\alpha} \nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi|\cdot| \nabla^{(\mathbf{m})}\Phi|\cdot w(q)\cdot d^{n}x\cdot d\tau\;.\]
However, we showed in Lemma 2.4 that for
\[\widehat{w}(q) := \begin{cases}(1+|q|)^{1+2\gamma}\quad\text{when}\quad\;q>0,\\ (1+|q|)^{2\mu}\quad\text{when}\quad\;q<0,\end{cases}\]
we have, for \(\gamma\neq-\frac{1}{2}\) and \(\mu\neq 0\),
\[\widehat{w}^{\prime}(q)\sim\frac{\widehat{w}(q)}{(1+|q|)}\;.\]
Furthermore, for \(\mu<0\), we have \(\widehat{w}(q)\leq w(q)\), thus,
\[\widehat{w}^{\prime}(q)\lesssim\frac{w(q)}{(1+|q|)}\;.\]
Consequently, for \(\gamma>0\) and \(\mu<0\),
\[\int_{\Sigma^{ext}_{t_{2}}}|\nabla^{(\mathbf{m})}\Phi|^{2}\ \cdot w(q)\cdot d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{\tau}}\Big{(}\frac{1}{2} \Big{(}|\nabla^{(\mathbf{m})}{}_{t}\Phi+\nabla^{(\mathbf{m})}{}_{r}\Phi|^{2}+ \delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}-\frac{x_{i}}{r}\nabla^{(\mathbf{m})}{ }_{r})\Phi|^{2}\Big{)}\cdot\widehat{w}^{\prime}(q)\cdot d^{n}x\cdot d\tau\] \[\lesssim \int_{\Sigma^{ext}_{t_{1}}}|\nabla^{(\mathbf{m})}\Phi|^{2}\ \cdot w(q)\cdot d^{n}x\] \[+C(q_{0})\cdot c(\delta)\cdot c(\gamma)\cdot E^{ext}(\lfloor \frac{n}{2}\rfloor+1)\cdot\int_{t_{1}}^{t_{2}}\frac{\epsilon}{(1+t)^{\frac{3}{2 }}}\ \cdot\int_{\Sigma^{ext}_{\tau}}|\nabla^{(\mathbf{m})}\Phi|^{2}\cdot \frac{w(q)}{(1+|q|)}\cdot d^{n}x\cdot d\tau\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma^{ext}_{\tau}}|\ g^{\mu\alpha} \nabla^{(\mathbf{m})}{}_{\mu}\nabla^{(\mathbf{m})}{}_{\alpha}\Phi|\cdot| \nabla^{(\mathbf{m})}\Phi|\cdot w(q)\cdot d^{n}x\cdot d\tau\;.\]
We now state the following lemma that is an exterior version of an estimate on the commutator term that we showed in [17].
**Lemma 3.4**.: _For \(n\geq 4\), let \(H\) be such that for all times \(t\), for \(\gamma\neq 0\) and \(0<\lambda\leq\frac{1}{2}\),_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t +r)^{2-\lambda}\cdot(1+|q|)}\cdot w(q)\cdot|H|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{3.19}\]
_and let \(h\) be such that for all times \(t\), for all \(|K|\leq|I|\),_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+|q|)}\cdot w (q)\cdot|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{3.20}\]
_then, for \(\delta=0\), for either \(\Phi=H\) or \(\Phi=A\), using the bootstrap assumption on \(\Phi\), we have_
\[\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}(1+t)^{1+\lambda}\cdot|g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{(\mathbf{m})}{}_{\beta}(\mathcal{L}_{Z^{I}}\Phi)|^{2}\cdot w\cdot dx^{1}\ldots dx^{n}\Big{)}\cdot dt\] \[\lesssim \int_{0}^{t}\frac{\epsilon}{(1+t)^{2-\lambda}}\cdot C(|I|)\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot c(\gamma)\] \[\times\Big{(}\sum_{|J|\leq|I|}\int_{\Sigma^{ext}_{t}}\big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}h)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}\Phi)|^{2}\big{)}\cdot w\cdot dx^{1}\ldots dx^{n}\Big{)}\cdot dt\] \[+\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}(1+t)^{1+\lambda}\cdot\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{(\mathbf{m})}{}_{\beta}\Phi|^{2}\cdot w\cdot dx^{1}\ldots dx^{n}\Big{)}\cdot dt\;.\]
Proof.: Based on our previous calculations in [17] for \(n\geq 4\) for the commutator term, and also using the exterior Hardy-type inequality that we showed in [17] and that we re-state here in Corollary 4.1, it is straightforward to show the stated exterior estimate on the commutator term.
Now, we have the following exterior estimate on the commutator term.
**Lemma 3.5**.: _For \(n\geq 4\), let \(H\) be such that for all times \(t\), for \(\gamma\neq 0\) and \(0<\lambda\leq\frac{1}{2}\),_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t +r)^{2-\lambda}\cdot(1+|q|)}w(q)\cdot|H|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{3.22}\]
_and such that for all \(|K|\leq|I|\),_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t+r)^{2-\lambda}\cdot(1+|q|)}\cdot w(q)\cdot\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{3.23}\]
_then, for \(\delta=0\), using the bootstrap assumption on \(A\) and \(h\), we have_
\[\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}\frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{(\mathbf{m})}{}_{\beta}(\mathcal{L}_{Z^{I}}A)|^{2}\cdot w\cdot dx^{1}\ldots dx^{n}\Big{)}\cdot dt\] \[+\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}\frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{(\mathbf{m})}{}_{\beta}(\mathcal{L}_{Z^{I}}h)|^{2}\cdot w\cdot dx^{1}\ldots dx^{n}\Big{)}\cdot dt\] \[\lesssim C(|I|)\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot c(\gamma)\cdot\int_{0}^{t}\frac{\epsilon}{(1+\tau)^{2-\lambda}}\cdot\big{(}\mathcal{E}^{ext}_{|I|}\big{)}^{2}(\tau)\cdot d\tau\] \[+\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}\frac{(1+\tau)^{1+\lambda}}{\epsilon}\cdot\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{(\mathbf{m})}{}_{\beta}A|^{2}\cdot w\cdot dx^{1}\ldots dx^{n}\Big{)}\cdot d\tau\] \[+\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}\frac{(1+\tau)^{1+\lambda}}{\epsilon}\cdot\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{(\mathbf{m})}{}_{\beta}h|^{2}\cdot w\cdot dx^{1}\ldots dx^{n}\Big{)}\cdot d\tau\;.\]
Proof.: Using Lemma 3.4, we get the desired result. We notice that since the needed estimates on the metric hold for \(n\geq 4\) everywhere (i.e. in the interior as well as in the exterior), we do not have dependence on \(q_{0}\), i.e. there is no constant \(C(q_{0})\). More precisely,
for \(n\geq 4\), let \(H\) be such that for all times \(t\), for \(\gamma\neq 0\) and \(0<\lambda\leq\frac{1}{2}\),
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t +r)^{2-\lambda}\cdot(1+|q|)}w(q)\cdot|H|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{3.25}\]
then, for \(\delta=0\), for \(\Phi=H\) or \(\Phi=A\), using the bootstrap assumption on \(\Phi\), we have
\[\int_{0}^{t}\Big{(}\int_{\Sigma_{t}^{ext}}\frac{(1+\tau)^{1+\lambda} }{\epsilon}\cdot|g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{( \mathbf{m})}{}_{\beta}(\mathcal{L}_{Z^{I}}\Phi)|^{2}\cdot w\cdot dx^{1}\dots dx^ {n}\Big{)}\cdot d\tau\] \[\lesssim \int_{0}^{t}\frac{\epsilon}{(1+\tau)^{2-\lambda}}\cdot C(|I|) \cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot c(\gamma)\] \[\cdot\Big{(}\sum_{|J|\leq|I|}\int_{\Sigma_{t}^{ext}}\big{(}| \nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}h)|^{2}+|\nabla^{(\mathbf{m})}( \mathcal{L}_{Z^{K}}\Phi)|^{2}\big{)}\cdot w\cdot dx^{1}\dots dx^{n}\Big{)} \cdot d\tau\] \[+\int_{0}^{t}\Big{(}\int_{\Sigma_{t}^{ext}}\frac{(1+\tau)^{1+ \lambda}}{\epsilon}\cdot\sum_{|K|\leq|I|}|\ \mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{( \mathbf{m})}{}_{\beta}\Phi|^{2}\cdot w\cdot dx^{1}\dots dx^{n}\Big{)}\cdot d \tau\;.\]
**Lemma 3.6**.: _For \(H^{\mu\nu}=g^{\mu\nu}-m^{\mu\nu}\) satisfying_
\[|H|\leq\frac{1}{n}\;, \tag{3.27}\]
_and such that for \(n\geq 4\), for \(\gamma\neq 0\) and \(0<\lambda\leq\frac{1}{2}\),_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t+r)^{2- \lambda}\cdot(1+|q|)}w(q)\cdot|H|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{3.28}\]
_and such that for all \(|K|\leq|I|\),_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t+r)^{2-\lambda}\cdot(1+|q|)}\cdot w(q)\cdot\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\Big{)}d\sigma^{n-1}(t) = 0\;. \tag{3.29}\]
_Then, for \(\mathcal{L}_{Z^{J}}A\) and \(\mathcal{L}_{Z^{J}}h^{1}\) decaying sufficiently fast at spatial infinity, for \(\gamma>0\) and for \(0<\lambda\leq\frac{1}{2}\), we have_
\[\big{(}\mathcal{E}^{ext}_{|I|}\big{)}^{2}(t_{2})\] \[\lesssim \big{(}\mathcal{E}^{ext}_{|I|}\big{)}^{2}(t_{1})+C(|I|)\cdot C(q_ {0})\cdot c(\gamma)\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2 }\rfloor+1)\cdot\int_{t_{1}}^{t_{2}}\frac{\epsilon}{(1+t)^{1+\lambda}}\cdot \big{(}\mathcal{E}^{ext}_{|I|}\big{)}^{2}(\tau)\cdot d\tau\] \[+C(|I|)\cdot\int_{0}^{t}\Big{(}\int_{\Sigma_{t}^{ext}}\frac{(1+ \tau)^{1+\lambda}}{\epsilon}\cdot\sum_{|K|\leq|I|}|\ \mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{( \mathbf{m})}{}_{\beta}A|^{2}\cdot w\cdot dx^{1}\dots dx^{n}\Big{)}\cdot d\tau\] \[+C(|I|)\cdot\int_{0}^{t}\Big{(}\int_{\Sigma_{t}^{ext}}\frac{(1+ \tau)^{1+\lambda}}{\epsilon}\cdot\sum_{|K|\leq|I|}|\ \mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{( \mathbf{m})}{}_{\beta}h|^{2}\cdot w\cdot dx^{1}\dots dx^{n}\Big{)}\cdot d\tau\;,\]
_where_
\[\mathcal{E}^{ext}_{|I|}(\tau):=\sum_{|J|\leq|I|}\big{(}\|w^{1/2}\nabla^{( \mathbf{m})}(\mathcal{L}_{Z^{J}}h^{1}(t,\cdot))\|_{L^{2}(\Sigma_{t}^{ext})}+\| w^{1/2}\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}A(t,\cdot))\|_{L^{2}( \Sigma_{t}^{ext})}\big{)}\,.\]
Proof.: By taking \(\Phi=\mathcal{L}_{Z^{J}}A\) and then \(\Phi=\mathcal{L}_{Z^{J}}h^{1}\), each decaying sufficiently fast at spatial infinity, and using the energy estimate that we have shown in Lemma 3.3, for \(|H|\leq\frac{1}{n}\), where \(n\) is the space dimension, and for \(\gamma>0\), we have
\[\int_{\Sigma_{t_{2}}^{ext}}\big{(}|\nabla^{(\mathbf{m})}(\mathcal{ L}_{Z^{J}}A)|^{2}\ +|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}h)|^{2}\big{)}\cdot w(q)\cdot d^{n}x\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{r}^{ext}}\Big{(}\frac{1}{2} \Big{(}|\nabla^{(\mathbf{m})}{}_{t}(\mathcal{L}_{Z^{J}}A)+\nabla^{(\mathbf{m}) }{}_{r}(\mathcal{L}_{Z^{J}}A)|^{2}+\delta^{ij}|(\nabla^{(\mathbf{m})}{}_{i}- \frac{x_{i}}{r}\nabla^{(\mathbf{m})}{}_{r})(\mathcal{L}_{Z^{J}}A)|^{2}\] \[+|\nabla^{(\mathbf{m})}{}_{t}(\mathcal{L}_{Z^{J}}h^{1})+\nabla^{ (\mathbf{m})}{}_{r}(\mathcal{L}_{Z^{J}}h)|^{2}+\delta^{ij}|(\nabla^{(\mathbf{ m})}{}_{i}-\frac{x_{i}}{r}\nabla^{(\mathbf{m})}{}_{r})(\mathcal{L}_{Z^{J}}h)|^{2} \Big{)}\cdot\frac{\widehat{w}(q)}{(1+|q|)}\cdot d^{n}x\cdot d\tau\] \[\lesssim \int_{\Sigma_{t_{1}}^{ext}}\big{(}|\nabla^{(\mathbf{m})}( \mathcal{L}_{Z^{J}}A)|^{2}\ +|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}h)|^{2}\big{)}\ \cdot w(q)\cdot d^{n}x\] \[+C(q_{0})\cdot c(\delta)\cdot c(\gamma)\cdot E^{ext}(\lfloor\frac {n}{2}\rfloor+1)\] \[\cdot\int_{t_{1}}^{t_{2}}\frac{\epsilon}{(1+t)^{\frac{3}{2}}} \cdot\int_{\Sigma_{\tau}^{ext}}\big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{ J}}A)|^{2}\ +|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}h)|^{2}\big{)}\cdot\frac{w(q)}{(1+|q|)} \cdot d^{n}x\cdot d\tau\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\big{(}\frac{(1+t) ^{1+\lambda}}{\epsilon}\cdot|\ g^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\mu} \nabla^{(\mathbf{m})}{}_{\alpha}(\mathcal{L}_{Z^{J}}A)|^{2}+\frac{\epsilon}{(1 +t)^{1+\lambda}}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}A)|^{2}\big{)}\cdot w (q)\cdot d^{n}x\cdot d\tau\] \[+\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}}\big{(}\frac{(1+t) ^{1+\lambda}}{\epsilon}\cdot|\ g^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{\mu} \nabla^{(\mathbf{m})}{}_{\alpha}(\mathcal{L}_{Z^{J}}h)|^{2}+\frac{\epsilon}{(1 +t)^{1+\lambda}}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}h)|^{2}\big{)}\cdot w (q)\cdot d^{n}x\cdot d\tau\.\]
Hence, for \(0<\lambda\leq\frac{1}{2}\,\),
\[(\mathcal{E}_{|I|}^{ext})^{2}(t_{2})\] \[\lesssim (\mathcal{E}_{|I|}^{ext})^{2}(t_{1})+C(q_{0})\cdot c(\delta) \cdot c(\gamma)\cdot E^{ext}(\lfloor\frac{n}{2}\rfloor+1)\cdot\int_{t_{1}}^{t_ {2}}\frac{\epsilon}{(1+t)^{1+\lambda}}\cdot\int_{\Sigma_{\tau}^{ext}}(\mathcal{ E}_{|I|}^{ext})^{2}(\tau)\cdot d\tau\] \[+\sum_{|J|\leq|I|}\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}} \frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|\ g^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{ \mu}\nabla^{(\mathbf{m})}{}_{\alpha}(\mathcal{L}_{Z^{J}}A)|^{2}\cdot w(q)\cdot d ^{n}x\cdot d\tau\] \[+\sum_{|J|\leq|I|}\int_{t_{1}}^{t_{2}}\int_{\Sigma_{\tau}^{ext}} \frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|\ g^{\mu\alpha}\nabla^{(\mathbf{m})}{}_{ \mu}\nabla^{(\mathbf{m})}{}_{\alpha}(\mathcal{L}_{Z^{J}}h)|^{2}\cdot w(q)\cdot d ^{n}x\cdot d\tau\.\]
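The lower-order terms on the right-hand side are absorbed into a single Gronwall-type term by the elementary observations that \(\frac{3}{2}\geq 1+\lambda\) for \(0<\lambda\leq\frac{1}{2}\), that \(\frac{w(q)}{1+|q|}\leq w(q)\), and that

\[\int_{\Sigma_{\tau}^{ext}}\big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}h)|^{2}\big{)}\cdot w\;\leq\;\big{(}\mathcal{E}_{|I|}^{ext}\big{)}^{2}(\tau)\qquad\text{for all }|J|\leq|I|\;,\]

so that both the term with the factor \(\frac{\epsilon}{(1+t)^{\frac{3}{2}}}\) and the terms with the factor \(\frac{\epsilon}{(1+t)^{1+\lambda}}\) are bounded by \(\frac{\epsilon}{(1+\tau)^{1+\lambda}}\cdot\big{(}\mathcal{E}_{|I|}^{ext}\big{)}^{2}(\tau)\), up to constants.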
Now, for \(n\geq 4\), let \(H\) be such that for all time \(t\), for \(\gamma\neq 0\) and \(0<\lambda\leq\frac{1}{2}\),
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t+r)^{2- \lambda}\cdot(1+|q|)}w(q)\cdot|H|^{2}\Big{)}d\sigma^{n-1}(t) = 0\, \tag{3.30}\]
then, based on the exterior estimate that we have established on the commutator term, we get
\[(\mathcal{E}_{|I|}^{ext})^{2}(t_{2})\] \[\lesssim (\mathcal{E}_{|I|}^{ext})^{2}(t_{1})+C(q_{0})\cdot c(\delta)\cdot c (\gamma)\cdot E^{ext}(\lfloor\frac{n}{2}\rfloor+1)\cdot\int_{t_{1}}^{t_{2}} \frac{\epsilon}{(1+t)^{1+\lambda}}\cdot\int_{\Sigma_{\tau}^{ext}}\left( \mathcal{E}_{|I|}^{ext}\right)^{2}(\tau)\cdot d\tau\] \[+C(|I|)\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{ 2}\rfloor+1)\cdot c(\gamma)\cdot\int_{0}^{t}\frac{\epsilon}{(1+\tau)^{2- \lambda}}\cdot(\mathcal{E}_{|I|}^{ext})^{2}(\tau)\cdot d\tau\] \[+\sum_{|K|\leq|I|}\int_{0}^{t}\Big{(}\int_{\Sigma_{t}^{ext}}\frac {(1+\tau)^{1+\lambda}}{\epsilon}\cdot|\ \mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{( \mathbf{m})}{}_{\alpha}\nabla^{(\mathbf{m})}{}_{\beta}A|^{2}\cdot w\cdot dx^{ 1}\ldots dx^{n}\Big{)}\cdot d\tau\] \[+\sum_{|K|\leq|I|}\int_{0}^{t}\Big{(}\int_{\Sigma_{t}^{ext}}\frac {(1+\tau)^{1+\lambda}}{\epsilon}\cdot|\ \mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{( \mathbf{m})}{}_{\alpha}\nabla^{(\mathbf{m})}{}_{\beta}h|^{2}\cdot w\cdot dx^{ 1}\ldots dx^{n}\Big{)}\cdot d\tau\;.\]
Fixing \(\delta=0\) and \(0<\lambda\leq\frac{1}{2}\), we get the result.
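Note that the commutator contribution enters with an integrable factor in time: for \(0<\lambda\leq\frac{1}{2}\),

\[\int_{0}^{\infty}\frac{\epsilon}{(1+\tau)^{2-\lambda}}\,d\tau=\frac{\epsilon}{1-\lambda}<\infty\;,\]

and since \(2-\lambda\geq 1+\lambda\) exactly when \(\lambda\leq\frac{1}{2}\), this term can also be absorbed into the Gronwall-type term carrying the factor \(\frac{\epsilon}{(1+\tau)^{1+\lambda}}\).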
### The source terms for \(n\geq 4\)
We proved the following two lemmas in [17].
**Lemma 3.7**.: _We have_
\[|\mathcal{L}_{Z^{I}}(g^{\lambda\mu}\nabla^{(\mathbf{m})}{}_{ \lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K} }A)|\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{( n-1)}{2}-\delta}(1+|q|)^{1+\gamma}},&\text{when }\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{\frac{(n-1)}{2}}}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta }}\quad\text{when }\quad q<0.\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}- \delta}(1+|q|)^{\frac{(n-1)}{2}-\delta}},&\text{when }\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{\frac{(n-1)}{2}}}{(1+t+|q|)^{\frac{(n-1)}{2}- \delta}}\quad\text{when }\quad q<0.\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}- \delta}(1+|q|)^{\frac{(n-1)}{2}-\delta+2\gamma}},&\text{when }\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}}\quad\text{when }\quad q<0.\end{cases}\Big{)}\]
\[+\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta+1+2\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta}},&\text{when}\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{\frac{1}{2}}}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{(n-1)-2\delta+1+3\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{\frac{1}{2}}}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{(n-1)-2\delta+3\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{\frac{(n-1)}{2}}}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{(n-1)-2\delta}},&\text{when}\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{\frac{(n-1)}{2}}}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\Big{)}\]
\[+\Big{(}\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}h|\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{2+2\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta+1+3\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{\frac{(n-1)}{2}}}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta+3\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{\frac{(n-1)}{2}}}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{\frac{(n-1)}{2}}}&\text{when}\quad q<0,\end{cases}\Big{)}\Big{)}\;.\]
**Lemma 3.8**.: _We have_
\[|\mathcal{L}_{Z^{I}}\big{(}g^{\lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}h\big{)}|\] \[\lesssim \Big{(}\sum_{|K|\leq|I|}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}A)|\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{1+\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta+2\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta+1+2\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta+3\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1)}{2}-\delta}(1+|q|)^{(n-1)-2\delta+3\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\Big{)}\]
\[+\Big{(}\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}A|\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{1+2\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta+3\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{\frac{(n-1)}{2}-\delta+1+3\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{(n-1)-2\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{(n-1)-2\delta+4\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{(n-1)-2\delta}(1+|q|)^{(n-1)-2\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\Big{)}\;.\]
Now, we look at the case where \(n\geq 4\).
**Lemma 3.9**.: _For \(n\geq 4\), we have_
\[\frac{(1+t)}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu}\nabla^ {(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^{2}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}} A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2} \rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{ 2\gamma}},\quad\text{when}\quad q>0,\atop\frac{\epsilon}{(1+t+|q|)^{2-2\delta} \cdot(1+|q|)^{-1}}\quad\text{when}\quad q<0.\right.\Big{)}\]
\[+\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L}_ {Z^{K}}h|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2} \rfloor+1)\] \[\cdot\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{ 2+2\gamma}},\quad\text{when}\quad q>0,\atop\frac{\epsilon}{(1+t+|q|)^{2-2 \delta}\cdot(1+|q|)^{1-2\delta}}\quad\text{when}\quad q<0.\right.\Big{)}\.\]
Proof.: For \(n\geq 4\), we examine the terms in \(\frac{(1+t)}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^{2}\) one by one. We get
\[\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K} }A)|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{ 2+2\gamma}},\quad\text{when}\quad q>0,\atop\frac{\epsilon}{(1+t+|q|)^{2-2 \delta}(1+|q|)}\quad\text{when}\quad q<0.\right.\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{2 \gamma}},\quad\text{when}\quad q>0,\atop\frac{\epsilon\cdot(1+|q|)}{(1+t+|q|)^{ 2-2\delta}}\quad\text{when}\quad q<0.\right.\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{3+2( \gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\atop\frac{\epsilon(1+|q|) }{(1+t+|q|)^{2-2\delta}(1+|q|)^{3-2\delta}}\quad\text{when}\quad q<0.\right. \Big{)}\Big{)}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K }}A)|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2} \rfloor+1)\] \[\cdot\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{ 2\gamma}},\quad\text{when}\quad q>0,\atop\frac{\epsilon}{(1+t+|q|)^{2-2 \delta}\cdot(1+|q|)^{-1}}\quad\text{when}\quad q<0.\right.\Big{)}\.\]
And,
\[\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}\Big{)}\cdot E( \lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2 \delta}(1+|q|)^{5+2(\gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{3-2\delta}}\quad\text{when}\quad q <0.\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^ {2+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)}\quad\text{when}\quad q<0.\end{cases} \Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^ {3+2(\gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon(1+|q|)^{2}}{(1+t+|q|)^{2-2\delta}(1+|q|)^{3-2\delta}}\quad \text{when}\quad q<0.\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{(1+t+|q|)^{2-2\delta}(1+|q|)^{8+4( \gamma-\delta)+2\gamma}}{(1+t+|q|)^{2-2\delta}(1+|q|)^{6-4\delta}}\quad\text{ when}\quad q<0.\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^ {5+2(\gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{3-2\delta}}\quad\text{when}\quad q <0.\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^ {6+4(\gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon(1+|q|)^{3}}{(1+t+|q|)^{2-2\delta}(1+|q|)^{6-4\delta}}\quad\text{ when}\quad q<0.\end{cases}\Big{)}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}\Big{)}\cdot E (\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1 +|q|)^{2+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{1-2\delta}}\quad\text{when} \quad q<0.\end{cases}\Big{)}\]
(where we used the fact that \(\gamma\geq\delta\)).
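To illustrate how the hypothesis \(\gamma\geq\delta\) enters, consider for instance the exponent comparison, for \(q>0\),

\[3+2(\gamma-\delta)+2\gamma\geq 2+2\gamma\quad\Longleftrightarrow\quad 1+2(\gamma-\delta)\geq 0\;,\]

which holds whenever \(\gamma\geq\delta\), so that \((1+|q|)^{-(3+2(\gamma-\delta)+2\gamma)}\leq(1+|q|)^{-(2+2\gamma)}\); the remaining blocks are compared in the same way.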
And,
\[\Big{(}\sum_{|K|\leq|I|}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^ {2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2 \delta}(1+|q|)^{2+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)}\quad\text{when}\quad q<0.\end{cases} \Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^ {3+2(\gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{3-2\delta}}\quad\text{when} \quad q<0.\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^ {5+2(\gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{3-2\delta}},\quad\text{when} \quad q<0.\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^ {3+2(\gamma-\delta)+4\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{3}}{(1+t+|q|)^{2-2\delta}(1+|q|)^{6-4\delta}} \quad\text{when}\quad q<0.\end{cases}\Big{)}\Big{)}\] \[\lesssim \Big{(}\sum_{|K|\leq|I|}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K} }h)|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+ |q|)^{2+2\gamma}},\quad\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{1-2\delta}}\quad\text{when} \quad q<0.\end{cases}\Big{)}\]
(using the fact that \(\gamma\geq\delta\)).
Also,

\[\Big{(}\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{4+4\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{2}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{5+2(\gamma-\delta)+4\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)}{(1+t+|q|)^{5-4\delta}(1+|q|)^{3-2\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{2+4\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{5-4\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\] \[+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{3+2(\gamma-\delta)+4\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon\cdot(1+|q|)^{3}}{(1+t+|q|)^{5-4\delta}(1+|q|)^{3-2\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\Big{)}\] \[\lesssim \Big{(}\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{2+4\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{-2\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\]

(using the fact that \(\gamma\geq\delta\)).
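The four groups of terms are then collected into the statement of the lemma by keeping the weakest decay rates: for instance, for \(q>0\), since \(5-4\delta\geq 2-2\delta\) and \(2+4\gamma\geq 2+2\gamma\), we have

\[\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{2+4\gamma}}\leq\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{2+2\gamma}}\;,\]

so the zeroth-order contribution in \(h\) is absorbed into the displayed bound of Lemma 3.9.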
**Lemma 3.10**.: _For \(n\geq 4\),_
\[\frac{(1+t)}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu} \nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}h)|^{2}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K} }A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot E( \lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\left\{\frac{\epsilon}{(1++|q|)^{2-2\delta}(1+|q|)^{ 2+2\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\] \[+\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L}_ {Z^{K}}h|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2} \rfloor+1)\] \[\cdot\Big{(}\left\{\frac{\epsilon}{(1++|q|)^{5-4\delta}(1+|q|)^{ 2+4\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\,.\]
Proof.: For \(n\geq 4\), we examine the terms in \(\frac{(1+t)}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu}\nabla^{( \mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}h)|^{2}\), one by one. We have
\[\Big{(}\sum_{|K|\leq|I|}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K} }A)|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\left\{\frac{\epsilon}{(1++|q|)^{2-2\delta}(1 +|q|)^{2+2\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1++|q|)^{2-2\delta}(1+|q|)^{3+2( \gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\,\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1++|q|)^{2-2\delta}(1+|q|)^{3+2( \gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\,\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1++|q|)^{2-2\delta}(1+|q|)^{5+2( \gamma-\delta)+2\gamma}},\quad\text{when}\quad q<0.\right.\Big{)}\,\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1++|q|)^{2-2\delta}(1+|q|)^{5+4( \gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\,\Big{)}\] \[+\Big{(}\left\{\frac{\epsilon}{(1++|q|)^{2-2\delta}(1+|q|)^{6+4( \gamma-\delta)+2\gamma}},\quad\text{when}\quad q>0,\right.\Big{)}\,\Big{)}\]
\[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}A)|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{2+2\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)}&\text{when}\quad q<0,\end{cases}\Big{)}+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{4+2(\gamma-\delta)+2\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{3-2\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\Big{)}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}A)|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{2+2\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)}&\text{when}\quad q<0,\end{cases}\Big{)}\;.\]
And,
\[\Big{(}\sum_{|K|\leq|I|}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{2+2\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)}&\text{when}\quad q<0,\end{cases}\Big{)}+\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{4+2(\gamma-\delta)+2\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{3-2\delta}}&\text{when}\quad q<0,\end{cases}\Big{)}\Big{)}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot E(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)^{2+2\gamma}},&\text{when}\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2-2\delta}(1+|q|)}&\text{when}\quad q<0,\end{cases}\Big{)}\;.\]
Also,
\[\Big{(}\sum_{|K|\leq|I|}|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot E( \lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\Big{(}\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^ {4+4\gamma}},\quad\text{when}\quad q>0,\right.\] \[\left.+\Big{(}\left\{\frac{\frac{\epsilon}{(1+t+|q|)^{5-4\delta}( 1+|q|)^{5+2(\gamma-\delta)+4\gamma}},\quad\text{when}\quad q>0,\right.\] \[\left.\epsilon(1+|q|)}\right)\] \[+\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{6+4 \gamma}},\quad\text{when}\quad q>0,\right.\] \[\left.\left.\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{6+4 \gamma}},\quad\text{when}\quad q>0,\right.\right.\] \[\left.\left.\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{6-4 \delta}},\quad\text{when}\quad q<0.\right.\right)\Big{)}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot E (\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^ {4+4\gamma}},\quad\text{when}\quad q>0,\right.\] \[\left.\frac{\epsilon}{(1+t+|q|)^{5-4\delta}(1+|q|)^{2-4\delta}} \quad\text{when}\quad q<0.\right.\Big{)}\,.\]
## 4. The proof of exterior stability for \(n\geq 4\)
Now, we look at \(n\geq 4\) and \(\delta=0\), and we are interested only in the exterior region \(\overline{C}\). We fix \(q_{0}\) such that \(\overline{C}\subseteq\{q\geq q_{0}\}\).
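The constants \(C(q_{0})\) appearing below arise when the two branches of the case distinctions are merged on the region \(q\geq q_{0}\): if \(q_{0}<0\), then for \(q_{0}\leq q<0\) we have \(1\leq 1+|q|\leq 1+|q_{0}|\), so that for any exponents \(a\) and \(b\),

\[\frac{1}{(1+|q|)^{a}}\leq(1+|q_{0}|)^{|a-b|}\cdot\frac{1}{(1+|q|)^{b}}\;,\]

and the negative-\(q\) branch can be brought to the same form as the positive-\(q\) branch at the cost of a constant \(C(q_{0})\).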
### Using the Hardy type inequality for the space-time integrals of the source terms for \(n\geq 4\)
**Lemma 4.1**.: _For \(n\geq 4\), \(\delta=0\), for \(q\geq q_{0}\), we have_
\[\frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{ \lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^ {2}\] \[\lesssim C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac {n}{2}\rfloor+1)\] \[\cdot\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_ {Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot \Big{(}\frac{\epsilon}{(1+t+|q|)^{2-\lambda}\cdot(1+|q|)^{2\gamma}}\Big{)}\] \[+C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac {n}{2}\rfloor+1)\] \[\cdot\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+| \mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot\Big{(}\frac{\epsilon}{(1+t+|q|)^{2- \lambda}\cdot(1+|q|)^{2+2\gamma}}\Big{)}\;.\]
Proof.: Based on what we have shown using the Klainerman-Sobolev inequality in the exterior, we get for \(n\geq 4\), \(\delta=0\), that for all points in the exterior region \(\overline{C}\), we have
\[\frac{(1+t)}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu} \nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^{2}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K} }A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot E^{ext} (\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2}\cdot(1+|q|) ^{2\gamma}},\quad\text{when}\quad\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2}\cdot(1+|q|)^{-1}}\quad\text{when}\quad\quad q<0. \end{cases}\Big{)}\] \[+\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L} _{Z^{K}}h|^{2}\Big{)}\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n }{2}\rfloor+1)\] \[\cdot\Big{(}\begin{cases}\frac{\epsilon}{(1+t+|q|)^{2}\cdot(1+|q|) ^{2+2\gamma}},\quad\text{when}\quad\quad q>0,\\ \frac{\epsilon}{(1+t+|q|)^{2}\cdot(1+|q|)^{2}}\quad\text{when}\quad\quad q<0. \end{cases}\Big{)}\,.\]
Hence, for \(q\geq q_{0}\), we have
\[\frac{(1+t)}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu} \nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^{2}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K} }A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot C(q_{0} )\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot \Big{(}\frac{\epsilon}{(1+t+|q|)^{2}\cdot(1+|q|)^{2\gamma}}\Big{)}\] \[+\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L} _{Z^{K}}h|^{2}\Big{)}\cdot C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+ \lfloor\frac{n}{2}\rfloor+1)\cdot\Big{(}\frac{\epsilon}{(1+t+|q|)^{2}\cdot(1+| q|)^{2+2\gamma}}\Big{)}\,.\]
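Here, passing from the bound on \(\frac{(1+t)}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^{2}\) to the bound on \(\frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^{2}\) stated in the lemma only uses that \((1+t)^{\lambda}\leq(1+t+|q|)^{\lambda}\), so that

\[\frac{(1+t)^{\lambda}}{(1+t+|q|)^{2}}\leq\frac{1}{(1+t+|q|)^{2-\lambda}}\;;\]

the same remark applies to the proof of Lemma 4.2 below.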
**Lemma 4.2**.: _For \(n\geq 4\), \(\delta=0\), and \(q\geq q_{0}\), we have_
\[\frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{ \lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}h)|^ {2}\] \[\lesssim C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{ n}{2}\rfloor+1)\] \[\cdot\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{ Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot\Big{(} \frac{\epsilon}{(1+t+|q|)^{2-\lambda}\cdot(1+|q|)^{2+2\gamma}}\Big{)}\] \[+C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{ n}{2}\rfloor+1)\] \[\cdot\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{ L}_{Z^{K}}h|^{2}\Big{)}\cdot\Big{(}\frac{\epsilon}{(1+t+|q|)^{5-\lambda} \cdot(1+|q|)^{2+4\gamma}}\Big{)}\,.\]
Proof.: Using the Klainerman-Sobolev inequality in the exterior, based on what we showed, we get for \(n\geq 4\), \(\delta=0\),
\[\frac{(1+t)}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu} \nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}h)|^{2}\] \[\lesssim \sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}} A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot E^{ext}( \lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\cdot\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{2}\cdot(1+|q|)^{2+ 2\gamma}},\quad\text{when}\quad\ q>0,\quad\right)\] \[+\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L}_ {Z^{K}}h|^{2}\Big{)}\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n} {2}\rfloor+1)\] \[\cdot\Big{(}\left\{\frac{\epsilon}{(1+t+|q|)^{5}\cdot(1+|q|)^{2+ 4\gamma}},\quad\text{when}\quad\ q>0,\quad\right).\]
Thus, we obtain for \(q\geq q_{0}\),
\[\frac{(1+t)}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu} \nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}h)|^{2}\] \[\lesssim C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{ n}{2}\rfloor+1)\] \[+C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{ n}{2}\rfloor+1)\cdot\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+| \mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot\Big{(}\frac{\epsilon}{(1+t+|q|)^{5}\cdot (1+|q|)^{2+4\gamma}}\Big{)}\.\]
We recapitulate the following corollary from [17].
**Corollary 4.1**.: _Let \(w\) be defined as in Definition 2.12, where \(\gamma>0\). Let \(\Phi\) be a tensor that decays fast enough at spatial infinity for all time \(t\), such that_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t +r)^{a}\cdot(1+|q|)}w(q)\cdot<\Phi,\Phi>\Big{)}d\sigma^{n-1}(t) = 0. \tag{4.1}\]
_Let \(R(\Omega)\geq 0\) be a function of \(\Omega\in\mathbb{S}^{n-1}\). Then, since \(\gamma\neq 0\), we have for \(0\leq a\leq n-1\), that_
\[\int_{\mathbb{S}^{n-1}}\int_{r=R(\Omega)}^{r=\infty}\frac{r^{n-1} }{(1+t+r)^{a}}\cdot\frac{w(q)}{(1+|q|)^{2}}\cdot<\Phi,\Phi>\cdot dr\cdot d \sigma^{n-1}\] \[\leq c(\gamma)\cdot\int_{\mathbb{S}^{n-1}}\int_{r=R(\Omega)}^{r= \infty}\frac{r^{n-1}}{(1+t+r)^{a}}\cdot w(q)\cdot<\partial_{r}\Phi,\partial_{r }\Phi>\cdot dr\cdot d\sigma^{n-1}\,\]
_where the constant \(c(\gamma)\) does not depend on \(R(\Omega)\)._
Proof.: We first prove the claim for a fixed direction \(\Omega\in\mathbb{S}^{n-1}\): for \(w\) as in Definition 2.12 with \(\gamma>0\), there exists a constant \(c(\gamma)\) such that, for \(\Phi\) decaying fast enough at spatial infinity in the sense of (4.1),

\[\int_{r=R(\Omega)}^{r=\infty}\frac{r^{n-1}}{(1+t+r)^{a}}\cdot\frac{w(q)}{(1+|q|)^{2}}\cdot<\Phi,\Phi>\cdot dr\leq c(\gamma)\cdot\int_{r=R(\Omega)}^{r=\infty}\frac{r^{n-1}}{(1+t+r)^{a}}\cdot w(q)\cdot<\partial_{r}\Phi,\partial_{r}\Phi>\cdot dr\;.\]

This follows by integration by parts in \(r\), using \(\gamma\neq 0\) and \(0\leq a\leq n-1\), the boundary term at \(r=\infty\) vanishing thanks to (4.1). Integrating over \(\Omega\in\mathbb{S}^{n-1}\) then gives the corollary, with \(c(\gamma)\) independent of \(R(\Omega)\).
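To see the mechanism behind this inequality, it is instructive to run the one-dimensional model computation in which the factor \(\frac{r^{n-1}}{(1+t+r)^{a}}\) is discarded and the weight is taken, for illustration only, to be \(w=(1+r)^{1+2\gamma}\) (the actual weight is the one of Definition 2.12). For \(f\) decaying fast enough at infinity, integration by parts gives, since the boundary term is non-positive,

\[\int_{0}^{\infty}f^{2}(1+r)^{2\gamma-1}\,dr\leq\frac{1}{\gamma}\int_{0}^{\infty}|f||\partial_{r}f|(1+r)^{2\gamma}\,dr\leq\frac{1}{\gamma}\Big{(}\int_{0}^{\infty}f^{2}(1+r)^{2\gamma-1}dr\Big{)}^{\frac{1}{2}}\Big{(}\int_{0}^{\infty}(\partial_{r}f)^{2}(1+r)^{2\gamma+1}dr\Big{)}^{\frac{1}{2}}\;,\]

hence \(\int_{0}^{\infty}\frac{f^{2}}{(1+r)^{2}}\,w\,dr\leq\frac{1}{\gamma^{2}}\int_{0}^{\infty}(\partial_{r}f)^{2}\,w\,dr\). In particular, the constant \(c(\gamma)\sim\gamma^{-2}\) blows up as \(\gamma\to 0\), which is why the hypothesis \(\gamma\neq 0\) is needed.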
**Lemma 4.3**.: _For \(n\geq 4\), \(\delta=0\), and for fields decaying fast enough at spatial infinity, such that for all time \(t\), for \(|K|\leq|I|\)_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t+r)^{2- \lambda}\cdot(1+|q|)}w(q)\cdot\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L}_{ Z^{K}}h|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{4.3}\]
_then, for \(\gamma\neq 0\) and \(\mu\neq\frac{1}{2}\), we have_
\[\int_{0}^{t}\Big{(}\int_{\Sigma_{t}^{ext}}\frac{(1+t)^{1+\lambda} }{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu}\nabla^{(\mathbf{m})}{}_{ \lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^{2}\cdot w\Big{)}\cdot dt\] \[\lesssim c(\gamma,\mu)\cdot C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2} \rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\times\int_{0}^{t}\frac{\epsilon}{(1+t)^{2-\lambda}}\cdot \Big{(}\int_{\Sigma_{t}^{ext}}\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}( \mathcal{L}_{Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2} \Big{)}\cdot w\Big{)}\cdot dt\;.\]
Proof.: We showed that for \(n\geq 4\), \(\delta=0\), for \(q\geq q_{0}\), we have
\[\frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{ \lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^ {2}\] \[\lesssim C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n }{2}\rfloor+1)\] \[\times\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_ {Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot \Big{(}\frac{\epsilon}{(1+t+|q|)^{2-\lambda}\cdot(1+|q|)^{2\gamma}}\Big{)}\] \[+C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n }{2}\rfloor+1)\cdot\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+| \mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot\Big{(}\frac{\epsilon}{(1+t+|q|)^{2- \lambda}\cdot(1+|q|)^{2+2\gamma}}\Big{)}\;.\]
Based on the Hardy inequality that we have shown in Corollary 4.1, we get that for \(\gamma\neq 0\) and \(0<\lambda\leq\frac{1}{2}\) (and therefore \(2-\lambda\leq 3\leq n-1\) for \(n\geq 4\)), under the assumption again that \(\mathcal{L}_{Z^{K}}A\) and \(\mathcal{L}_{Z^{K}}h\) decay fast enough at spatial infinity for all time \(t\), for \(|K|\leq|I|\), such that
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t +r)^{2-\lambda}\cdot(1+|q|)}w(q)\cdot\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+| \mathcal{L}_{Z^{K}}h|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{4.5}\]
the following holds:
\[\int_{\mathbb{S}^{n-1}}\int_{r=R(\Omega)}^{r=\infty}\frac{r^{n-1}} {(1+t+r)^{2-\lambda}}\cdot\frac{w(q)}{(1+|q|)^{2}}\cdot<\Phi,\Phi>\cdot dr \cdot d\sigma^{n-1}\] \[\leq c(\gamma,\mu)\cdot\int_{\mathbb{S}^{n-1}}\int_{r=R(\Omega)}^{r= \infty}\frac{r^{n-1}}{(1+t+r)^{2-\lambda}}\cdot w(q)\cdot<\partial_{r}\Phi, \partial_{r}\Phi>\cdot dr\cdot d\sigma^{n-1}\;.\]
We choose \(R(\Omega)\) such that, as \(\Omega\) spans \(\mathbb{S}^{n-1}\), the points \(r=R(\Omega)\) describe the intersection of \(\Sigma_{t}\) with \(N^{t_{2}}_{t_{1}}\) (the null boundary for the metric \(g\) of \(\overline{C}\)). We get
\[\int_{\Sigma^{ext}_{t}}\frac{1}{(1+t+|q|)^{2-\lambda}(1+|q|)^{2}} \cdot\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot w\] \[\leq c(\gamma)\cdot\int_{\Sigma^{ext}_{t}}\frac{1}{(1+t+|q|)^{2- \lambda}}\cdot\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}A)|^{2}+| \nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot w\;.\]
As a result,
\[\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}\frac{(1+t)^{1+\lambda} }{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu}\nabla^{(\mathbf{m})}{}_{ \lambda}\nabla^{(\mathbf{m})}{}_{\mu}A)|^{2}\cdot w\Big{)}\cdot dt\] \[\lesssim C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{ n}{2}\rfloor+1)\] \[\cdot\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}\sum_{|K|\leq|I|} \Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m}) }(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot\frac{\epsilon}{(1+t+|q|)^{2-\lambda}} \cdot w\Big{)}\cdot dt\] \[+C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{ n}{2}\rfloor+1)\] \[\cdot\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}\sum_{|K|\leq|I|} \Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot\frac{ \epsilon}{(1+t+|q|)^{2-\lambda}\cdot(1+|q|)^{2}}\cdot w\Big{)}\cdot dt\] \[\lesssim C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{ n}{2}\rfloor+1)\] \[\cdot\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}\sum_{|K|\leq|I|} \Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m}) }(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot\frac{c(\gamma)\cdot\epsilon}{(1+t+| q|)^{2-\lambda}}\cdot w\Big{)}\cdot dt\;.\]
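In the last step, the factor in \(t\) was pulled out of the spatial integral using only that \(1+t+|q|\geq 1+t\) on \(\Sigma^{ext}_{t}\), so that

\[\frac{\epsilon}{(1+t+|q|)^{2-\lambda}}\leq\frac{\epsilon}{(1+t)^{2-\lambda}}\;,\]

which yields the form stated in the lemma.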
**Lemma 4.4**.: _For \(n\geq 4\), \(\delta=0\), and for fields decaying fast enough at spatial infinity, such that for all time \(t\), for \(|K|\leq|I|\),_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t +r)^{2-\lambda}\cdot(1+|q|)}\cdot w(q)\cdot\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+ |\mathcal{L}_{Z^{K}}h|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{4.7}\]
_then, for \(\gamma\neq 0\) we have_
\[\int_{0}^{t}\Big{(}\int_{\Sigma^{ext}_{t}}\frac{(1+t)^{1+\lambda} }{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{\lambda\mu}\nabla^{(\mathbf{m})}{}_{ \lambda}\nabla^{(\mathbf{m})}{}_{\mu}h)|^{2}\cdot w\Big{)}\cdot dt\] \[\lesssim c(\gamma)\cdot C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2} \rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\times\int_{0}^{t}\frac{\epsilon}{(1+t)^{2-\lambda}}\cdot\Big{(} \int_{\Sigma^{ext}_{t}}\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{ L}_{Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot w \Big{)}\cdot dt\;.\]
Proof.: We have shown that for \(n\geq 4\), \(\delta=0\), and \(q\geq q_{0}\),
\[\frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{ \lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}h)|^{2}\] \[\lesssim C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n }{2}\rfloor+1)\] \[\times\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_ {Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot \Big{(}\frac{\epsilon}{(1+t+|q|)^{2-\lambda}\cdot(1+|q|)^{2+2\gamma}}\Big{)}\] \[+C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n }{2}\rfloor+1)\] \[\times\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+| \mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot\Big{(}\frac{\epsilon}{(1+t+|q|)^{5- \lambda}\cdot(1+|q|)^{2+4\gamma}}\Big{)}\;.\]
Thus,
\[\frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|\mathcal{L}_{Z^{I}}(g^{ \lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_{\mu}h)|^ {2}\] \[\lesssim C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n }{2}\rfloor+1)\cdot\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_ {Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot \frac{\epsilon}{(1+t+|q|)^{2-\lambda}\cdot(1+|q|)^{2}}\] \[+C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n }{2}\rfloor+1)\cdot\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+| \mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot\frac{\epsilon}{(1+t+|q|)^{2-\lambda} \cdot(1+|q|)^{2}}\;.\]
Assuming that both \(\mathcal{L}_{Z^{K}}A\) and \(\mathcal{L}_{Z^{K}}h\) decay fast enough at spatial infinity for all time \(t\), i.e. that
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1 +t+r)^{2-\lambda}\cdot(1+|q|)}w(q)\cdot\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+| \mathcal{L}_{Z^{K}}h|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;. \tag{4.8}\]
Then, for \(\gamma\neq 0\) and \(0<\lambda\leq\frac{1}{2}\), since \(0\leq 2-\lambda\leq 3\leq n-1\) (for \(n\geq 4\)), we get that
\[\int_{\Sigma^{ext}_{t}}\frac{1}{(1+t+|q|)^{2-\lambda}(1+|q|)^{2} }\cdot\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot w\] \[\leq c(\gamma)\cdot\int_{\Sigma^{ext}_{t}}\frac{1}{(1+t+|q|)^{2- \lambda}}\cdot\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}A)|^{2}+| \nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot w\]
As a result,
\[\int_{\Sigma_{t}}\frac{(1+t)^{1+\lambda}}{\epsilon}\cdot|\mathcal{L }_{Z^{I}}(g^{\lambda\mu}\nabla^{(\mathbf{m})}{}_{\lambda}\nabla^{(\mathbf{m})}{}_ {\mu}h)|^{2}\cdot w\] \[\lesssim C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n} {2}\rfloor+1)\] \[\times\int_{\Sigma_{t}}\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{ m})}(\mathcal{L}_{Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2} \Big{)}\cdot\frac{\epsilon}{(1+t+|q|)^{2-\lambda}\cdot(1+|q|)^{2}}\cdot w\] \[+C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n }{2}\rfloor+1)\cdot\int_{\Sigma_{t}}\sum_{|K|\leq|I|}\Big{(}|\mathcal{L}_{Z^{K }}A|^{2}+|\mathcal{L}_{Z^{K}}h|^{2}\Big{)}\cdot\frac{\epsilon}{(1+t+|q|)^{2- \lambda}\cdot(1+|q|)^{2}}\cdot w\] \[\lesssim c(\gamma)\cdot C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2} \rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon}{(1+t)^{2-\lambda}} \cdot\int_{\Sigma_{t}}\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{ L}_{Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot w\] \[+C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n }{2}\rfloor+1)\cdot\int_{\Sigma_{t}}\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{ m})}(\mathcal{L}_{Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2} \Big{)}\cdot\frac{\epsilon}{(1+t+|q|)^{4-\lambda}}\cdot w\] \[\lesssim c(\gamma)\cdot C(q_{0})\cdot E^{ext}(\lfloor\frac{|I|}{2} \rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot\frac{\epsilon}{(1+t)^{2-\lambda}} \cdot\int_{\Sigma_{t}}\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}(\mathcal{ L}_{Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2}\Big{)}\cdot w\.\]
### Gronwall type inequality on the exterior energy for \(n\geq 4\)
**Lemma 4.5**.: _For \(H^{\mu\nu}=g^{\mu\nu}-m^{\mu\nu}\) satisfying_
\[|H|\leq\frac{1}{n}\;, \tag{4.9}\]
_and for \(\mathcal{L}_{Z^{J}}A\) and \(\mathcal{L}_{Z^{J}}h^{1}\) decaying sufficiently fast at spatial infinity as in the bootstrap argument, and with the condition that for \(\gamma>0\) and for all \(|K|\leq|I|\),_
\[\int_{\mathbb{S}^{n-1}}\lim_{r\to\infty}\Big{(}\frac{r^{n-1}}{(1+t +r)^{2-\lambda}\cdot(1+|q|)}\cdot w(q)\cdot\Big{(}|\mathcal{L}_{Z^{K}}A|^{2}+| \mathcal{L}_{Z^{K}}h|^{2}\Big{)}d\sigma^{n-1}(t) = 0\;, \tag{4.10}\]
_then for \(\delta=0\), and for \(0<\lambda\leq\frac{1}{2}\),_
\[(\mathcal{E}^{ext}_{|I|})^{2}(t_{2})\] \[\lesssim (\mathcal{E}^{ext}_{|I|})^{2}(t_{1})+C(q_{0})\cdot c(\gamma) \cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot C (|I|)\cdot\int_{t_{1}}^{t_{2}}\frac{\epsilon}{(1+t)^{1+\lambda}}\cdot( \mathcal{E}^{ext}_{|I|})^{2}(\tau)\cdot d\tau\;, \tag{4.11}\]
_where_
\[\mathcal{E}^{ext}_{|I|}(\tau):=\sum_{|J|\leq|I|}\big{(}\|w^{1/2}\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}h^{1}(\tau,\cdot))\|_{L^{2}(\Sigma^{ext}_{\tau})}+\|w^{1/2}\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}A(\tau,\cdot))\|_{L^{2}(\Sigma^{ext}_{\tau})}\big{)}\,,\]
_with \(w\) defined as in Definition 2.12, with \(\gamma>0\)._
Proof.: Based on Lemmas 4.3 and 4.4, and injecting them into Lemma 3.6, we have under the stated assumptions,
\[\left(\mathcal{E}_{|I|}^{ext}\right)^{2}(t_{2})\] \[\lesssim \left(\mathcal{E}_{|I|}^{ext}\right)^{2}(t_{1})+C(|I|)\cdot C(q_{ 0})\cdot c(\gamma)\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2} \rfloor+1)\cdot\int_{t_{1}}^{t_{2}}\frac{\epsilon}{(1+t)^{1+\lambda}}\cdot \left(\mathcal{E}_{|I|}^{ext}\right)^{2}(\tau)\cdot d\tau\] \[+C(|I|)\cdot\int_{0}^{t}\Big{(}\int_{\Sigma_{t}^{ext}}\frac{(1+ \tau)^{1+\lambda}}{\epsilon}\cdot\sum_{|K|\leq|I|}|\ \mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{( \mathbf{m})}{}_{\beta}A|^{2}\cdot w\cdot dx^{1}\dots dx^{n}\Big{)}\cdot d\tau\] \[+C(|I|)\cdot\int_{0}^{t}\Big{(}\int_{\Sigma_{t}^{ext}}\frac{(1+ \tau)^{1+\lambda}}{\epsilon}\cdot\sum_{|K|\leq|I|}|\ \mathcal{L}_{Z^{K}}g^{\alpha\beta}\nabla^{(\mathbf{m})}{}_{\alpha}\nabla^{( \mathbf{m})}{}_{\beta}h|^{2}\cdot w\cdot dx^{1}\dots dx^{n}\Big{)}\cdot d\tau\] \[\lesssim \mathcal{E}_{|I|}^{ext}(t_{1})+C(|I|)\cdot C(q_{0})\cdot c(\gamma )\cdot E^{ext}(\lfloor\frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot \int_{t_{1}}^{t_{2}}\frac{\epsilon}{(1+t)^{1+\lambda}}\cdot\left(\mathcal{E} _{|I|}^{ext}\right)^{2}(\tau)\cdot d\tau\] \[+C(|I|)\cdot c(\gamma)\cdot C(q_{0})\cdot E^{ext}(\lfloor\frac{|I |}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\] \[\times\int_{0}^{t}\frac{\epsilon}{(1+t)^{2-\lambda}}\cdot\Big{(} \int_{\Sigma_{t}^{ext}}\sum_{|K|\leq|I|}\Big{(}|\nabla^{(\mathbf{m})}( \mathcal{L}_{Z^{K}}A)|^{2}+|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{K}}h)|^{2} \Big{)}\cdot w\Big{)}\cdot dt\;.\]
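In the last line, the two time integrals are merged into the single Gronwall-type term of (4.11) thanks to the elementary comparison

\[2-\lambda\geq 1+\lambda\quad\Longleftrightarrow\quad\lambda\leq\frac{1}{2}\;,\qquad\text{so that}\qquad\frac{\epsilon}{(1+t)^{2-\lambda}}\leq\frac{\epsilon}{(1+t)^{1+\lambda}}\;,\]

while the spatial integrals are bounded by \(\big{(}\mathcal{E}_{|I|}^{ext}\big{)}^{2}(\tau)\).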
Hence, fixing \(\delta=0\), and \(0<\lambda\leq\frac{1}{2}\), we obtain the result.
### The proof of the theorem for \(n\geq 4\)
**Proposition 4.1**.: _Let \(n\geq 4\). Consider initial data \(\mathcal{L}_{Z^{J}}A\) and \(\mathcal{L}_{Z^{J}}h^{1}\) decaying sufficiently fast at spatial infinity at \(t=0\). For every \(N\geq 2\lfloor\frac{n}{2}\rfloor+2\), for every constant \(E^{ext}(N)\) (to bound \(\mathcal{E}_{N}^{ext}(t)\) in (4.12)), there exists a constant \(c_{0}\), that depends on \(E^{ext}(N)\), on \(N\) and on \(w\) (i.e. depends on \(\gamma\)), such that if_
\[\overline{\mathcal{E}}_{N}(0)\leq c_{0}\;,\]
_then for all time \(t\), we have_
\[\mathcal{E}_{N}^{ext}(t)\leq E^{ext}(N)\;,\]
_and consequently, in the Lorenz gauge, the Yang-Mills fields decay to zero and the metric decays to the Minkowski metric in wave coordinates, for the initial value Cauchy problem for the Einstein Yang-Mills equations that we defined in the set-up, which will consequently admit a global solution in time \(t\). More precisely, for all \(|J|\leq N-\lfloor\frac{n}{2}\rfloor-1\), we have in the exterior region \(\overline{C}\), which is contained in \(q\geq q_{0}\),_
\[|\nabla^{(\mathbf{m})}(\mathcal{L}_{Z^{J}}A)(t,x)|+|\nabla^{( \mathbf{m})}(\mathcal{L}_{Z^{J}}h)(t,x)| \leq C(q_{0})\cdot E^{ext}(N)\cdot\frac{\epsilon}{(1+t+|q|)^{\frac{(n -1)}{2}}(1+|q|)^{1+\gamma}}\;,\]
_and_
\[|\mathcal{L}_{Z^{J}}A(t,x)|+|\mathcal{L}_{Z^{J}}h(t,x)| \leq C(q_{0})\cdot E^{ext}(N)\cdot\frac{\epsilon}{(1+t+|q|)^{\frac{(n-1 )}{2}}(1+|q|)^{\gamma}}\;.\]
Proof.: We start with the bootstrap assumption on \(\mathcal{E}_{(\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)}\). We have then, thanks to (3.14), for \(n\geq 4\) and for \(\delta=0\;\), that
\[|H(t,x)| \lesssim \begin{cases}c(\gamma)\cdot\frac{\mathcal{E}_{(\lfloor\frac{n}{2 }\rfloor+1)}}{(1+t+|q|)^{2}(1+|q|)^{\gamma}},\quad\text{when}\quad\;\;q>0,\\ \frac{\mathcal{E}_{(\lfloor\frac{n}{2}\rfloor+1)}}{(1+t+|q|)^{2}}(1+|q|)^{ \frac{1}{2}},\quad\text{when}\quad\;\;q<0.\end{cases}\] \[\lesssim c(\gamma)\cdot\mathcal{E}_{(\lfloor\frac{n}{2}\rfloor+1)}\] \[\lesssim c(\gamma)\cdot E(\lfloor\frac{n}{2}\rfloor+1)\] \[\text{(where we used that we chose $\delta=0$ and $\epsilon=1$}\;,\] \[\text{see (\ref{eq:c}) and (\ref{eq:c})).}\]
By choosing \(E(\lfloor\frac{n}{2}\rfloor+1)\) small enough, depending on \(\gamma\) and on \(n\,\), we have
\[c(\gamma)\cdot E(\lfloor\frac{n}{2}\rfloor+1)<\frac{1}{n}\;. \tag{4.12}\]
We take initial data decaying sufficiently fast at spatial infinity; since the fields satisfy a wave equation, this spatial decay propagates in time under the bootstrap assumption, and thus the decay conditions are satisfied for all time \(t\), so that we can apply Lemma 4.5, where we fix an arbitrary \(0<\lambda\leq\frac{1}{2}\). Consequently, we get
\[(\mathcal{E}_{N}^{ext})^{2}(t)\] \[\leq C\cdot(\mathcal{E}_{N}^{ext})^{2}(0)+c(\gamma)\cdot E(\lfloor \frac{|I|}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot C(N)\cdot\int_{0}^{t} \frac{\epsilon}{(1+\tau)^{1+\lambda}}\cdot\mathcal{E}_{N}^{2}(\tau)\cdot d\tau\;.\]
Now, using the Gronwall lemma, we get
\[(\mathcal{E}_{N}^{ext})^{2}(t)\leq C\cdot\mathcal{E}_{N}^{2}(0) \cdot\exp\Big{(}\int_{0}^{t}c(\gamma)\cdot E(\lfloor\frac{N}{2}\rfloor+ \lfloor\frac{n}{2}\rfloor+1)\cdot C(N)\cdot\epsilon\cdot\frac{1}{(1+\tau)^{1 +\lambda}}\cdot d\tau\Big{)} \tag{4.13}\] \[\leq C\cdot\mathcal{E}_{N}^{2}(0)\cdot\exp\Big{(}c(\gamma)\cdot E( \lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot C(N)\cdot\epsilon \cdot\frac{1}{\lambda}\Big{)}\;,\]
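The second line uses the explicit computation, valid for every \(\lambda>0\),

\[\int_{0}^{t}\frac{d\tau}{(1+\tau)^{1+\lambda}}=\frac{1}{\lambda}\Big{(}1-\frac{1}{(1+t)^{\lambda}}\Big{)}\leq\frac{1}{\lambda}\;,\]

which is where the factor \(\frac{1}{\lambda}\) in the exponential comes from.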
Using that we chose \(\epsilon\leq 1\) and that \(E(k)\leq 1\), this also leads to
\[\mathcal{E}_{N}^{ext}(t) \leq C\cdot\mathcal{E}_{N}(0)\cdot\exp\Big{(}c(\gamma)\cdot E(\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\cdot C(N)\cdot\epsilon\cdot\frac{1}{\lambda}\Big{)}\] \[\leq C\cdot\mathcal{E}_{N}(0)\cdot\exp\Big{(}c(\gamma)\cdot C(N)\cdot\frac{1}{\lambda}\Big{)}\;.\]
Thus, choosing initial data such that
\[\overline{\mathcal{E}}_{N}(0)\leq\frac{1}{2\cdot C\cdot\exp\Big{(}c(\gamma) \cdot C(N)\cdot\frac{1}{\lambda}\Big{)}}\cdot E(\lfloor\frac{N}{2}\rfloor+ \lfloor\frac{n}{2}\rfloor+1)\;, \tag{4.14}\]
implies that
\[\mathcal{E}_{N}(0)\leq\frac{1}{2\cdot C\cdot\exp\left(c(\gamma)\cdot C(N)\cdot\frac{1}{\lambda}\right)}\cdot E(\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\;. \tag{4.15}\]
This leads to
\[\mathcal{E}_{N}^{ext}(t)\leq\frac{1}{2}\cdot E(\lfloor\frac{N}{2}\rfloor+ \lfloor\frac{n}{2}\rfloor+1)\;.\]
Moreover, for \(N\geq\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1\), that is for \(\frac{N}{2}\geq\lfloor\frac{n}{2}\rfloor+1\), we have
\[\mathcal{E}_{\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1}^{ext}(t)\leq\mathcal{E}_{N}^{ext}(t)\;.\]
Thus,
\[\mathcal{E}_{\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1}^{ext}(t) \leq\frac{1}{2}\cdot E(\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\;. \tag{4.16}\]
This shows that the estimate \(\mathcal{E}_{\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1}(t)\leq E (\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1)\,\) is in fact a true estimate and therefore, we can close the bootstrap argument for \(\mathcal{E}_{\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1}(t)\,\), with \(\epsilon=1\) and \(\delta=0\). For this, we have used the condition that
\[N\geq\lfloor\frac{N}{2}\rfloor+\lfloor\frac{n}{2}\rfloor+1\;,\]
which imposes that \(N\geq 2\lfloor\frac{n}{2}\rfloor+2\), and we also got that
\[\mathcal{E}_{N}^{ext}(t)\leq\frac{1}{2}\cdot E(\lfloor\frac{N}{2}\rfloor+ \lfloor\frac{n}{2}\rfloor+1)\;. \tag{4.17}\]
This in turn gives, using Lemmas 3.2 and 3.1, the stated decay estimates on the fields. |
2301.05474 | Locating topological structures in digital images via local homology | Topological data analysis (TDA) is a rising branch in modern applied
mathematics. It extracts topological structures as features of a given space
and uses these features to analyze digital data. Persistent homology, one of
the central tools in TDA, defines persistence barcodes to measure the changes
in local topologies among deformations of topological spaces. Although local
spatial changes characterize barcodes, it is hard to detect the locations of
corresponding structures of barcodes due to computational limitations. The
paper provides an efficient and concise way to divide the underlying space and
applies the local homology of the divided system to approximate the locations
of local holes in the based space. We also demonstrate this local homology
framework on digital images. | Chaun-Shen Hu | 2023-01-13T10:56:04Z | http://arxiv.org/abs/2301.05474v2 | # Locating topological structures in digital images via local homology
###### Abstract
Topological data analysis (TDA) is a rising branch in modern applied mathematics. It extracts topological structures as features of a given space and uses these features to analyze digital data. Persistent homology, one of the central tools in TDA, defines persistence barcodes to measure the changes in local topologies among deformations of topological spaces. Although local spatial changes characterize barcodes, it is hard to detect the locations of corresponding structures of barcodes due to computational limitations. The paper provides an efficient and concise way to divide the underlying space and applies the local homology of the divided system to approximate the locations of local holes in the based space. We also demonstrate this local homology framework on digital images.
_Keywords:_ Topological data analysis · Persistent homology · Local hole structures · Persistence barcodes · Local systems and patches · Short filtrations · Cellular sheaves · Global sections · Merging and outer-merging numbers
## 1 Introduction
Homology is an algebraic description of topological spaces and has become one of the foundations of modern geometry and topology. It uses algebra to detect genera in topological spaces, such as loops and high-dimensional voids, and to classify the topological types and shapes of manifolds. In addition to its importance in pure mathematics, over the past two decades or so, data scientists have noticed the benefits and potential of homology in numerical data and raised a new field called topological data analysis (TDA) [59, 7, 22, 8, 21].
Persistent homology plays a central role in TDA, which transforms a sequence of topological spaces linked by continuous functions into a homology chain. By checking the birth and death of elements in the chain, one can understand which homological generator can have a longer lifespan and shows its importance in the continuous process [59]. Persistent homology and related techniques have been applied in many data science tasks, such as bioinformatics [44, 42, 31], molecular analysis [56, 24, 3, 53], image processing [11, 10, 16, 13, 43], and material science [10].
Persistence barcodes (Section 2.2) record the lifespans of connected components, loops, and voids. Many applications use persistence barcodes and related statistical features as machine learning features [6, 1, 12]. Although persistent homology and persistence barcodes have shown their potential in many real applications, they still have some limitations. One is that they capture only the global information of how connected components and holes behave during geometric deformation, while the local merging relations are usually omitted. This information is theoretically present in the definition of persistent homology and persistence barcodes. However, for computational efficiency, hole representations (e.g., cycle representatives of \(q\)-holes) or positions are often buried in the Gaussian elimination of the matrices in the computation of persistence barcodes.
Recently, some scholars noticed the importance of local information on persistent homology and proposed some interesting works on the local behavior of persistent homology [50, 46]. For example, Vandaele _et al._[50] investigated the local Vietoris-Rips complexes of the point cloud and applied the local Betti pairs to form a global descriptor of the point cloud. This descriptor can be viewed as a heatmap of the whole space. Regions with higher heat values usually mean they have more significant topological/geometric information, such as higher local branch numbers or loops. Also, Stolz described in her doctoral dissertation [46] how to apply the Mayer-Vietoris sequence to compute the local Vietoris-Rips complex linked from a data point.
On the other hand, some of the research also aims to detect the locations of loop or hole structures in the topological space. For example, Akai _et al._[2] generate persistence barcodes of the Vietoris-Rips complex as inputs of a neural network model and apply them for the ego-vehicle localization application. Similarly, Keros _et al._[38] train on a Hodge Laplacian-based graph neural network to detect the nearest optimal homology as a location representation of homologies. Furthermore, Xu _et al._[57] apply the distance measurement (DTM) function [9] to enhance the robustness of Vietoris-Rips complex construction, and apply persistence and distance information to detect holes and voids in point cloud data.
However, most current methods are designed for point cloud data, even though image structures are more regular than point clouds and thus make it easier to compare local and global attributes. Theoretically guaranteed methods for localized hole detection are still limited. This paper provides a theoretically guaranteed framework for hole position detection in arbitrary topological spaces and demonstrates it on digital images.
This paper is an extension of our previous work presented as a workshop paper at CVPR 2021 (2021 Conference on Computer Vision and Pattern Recognition) [30]. That work introduces the concept of cellular sheaves and connects them to persistent homology; in it, we defined the _local merging number_ and considered its geometric meaning for \(0\)-dimensional objects. This paper extends that framework and focuses on \(1\)-dimensional merging relations. In addition to the theoretical extension, we give a preliminary demonstration on images. It shows that the \(1\)-dimensional merging relations can estimate the positions of holes in the space, which provides a way to analyze local topological characteristics.
### Organization
The organization of the paper is as follows. Section 2 quickly recaps homology, Betti numbers, persistent homology, and barcodes. We present the main results in Section 3 and separate the section into three parts. Section 3.1 introduces how we divide the ambient space by a local region and apply the divided system to compute its persistent homology; we also interpret the geometric meaning of the computed barcodes and explain how they detect cycle locations. Section 3.2 adapts this construction to binary images, and we compare the proposed framework with previous methods in Section 3.3. Section 4 shows how to apply the theory developed in Section 3 to digital images and demonstrates the proposed locating method. Finally, we discuss future directions and summarize the paper in Section 5.
## 2 Persistent Homology and Barcodes
We briefly introduce the standard notions and terminologies of singular homology, including its functoriality, Betti numbers, and geometric meanings in Section 2.1. Section 2.2 focuses on persistent homology and barcodes. We will also show in this section typical ways for building filtrations, especially the construction relying on the thresholding technique, which is the foundation of the paper.
### Homology
This section briefly recalls the singular homology and related properties of topological spaces. One can find these materials in several classic textbooks on algebraic topology [27, 51, 41, 25]. We start the section with the following definitions.
**Definition 1**.: _For any non-negative integer \(q\), we define the **geometric q-simplex**, denoted by \(\Delta_{q}\), as the convex hull of the standard basis \(\{\mathbf{e}_{0},\mathbf{e}_{1},...,\mathbf{e}_{q}\}\) for the \((q+1)\)-dimensional Euclidean space \(\mathbb{R}^{q+1}\). That is,_
\[\Delta_{q}=\mathrm{conv}(\mathbf{e}_{0},\mathbf{e}_{1},...,\mathbf{e}_{q})= \left\{t_{0}\mathbf{e}_{0}+t_{1}\mathbf{e}_{1}+\cdots+t_{q}\mathbf{e}_{q}:t_{ i}\in[0,1]\text{ and }\sum_{i=0}^{q}t_{i}=1\right\}.\]
For any \((q+1)\) points \(\mathbf{x}_{0},...,\mathbf{x}_{q}\) in \(\mathbb{R}^{n}\) we can define the _affine map_\([\mathbf{x}_{0},...,\mathbf{x}_{q}]:\Delta_{q}\to\mathbb{R}^{n}\) by
\[(t_{0},t_{1},...,t_{q})\longmapsto t_{0}\mathbf{x}_{0}+\cdots+t_{q}\mathbf{x}_ {q}. \tag{1}\]
Then \([\mathbf{x}_{0},...,\mathbf{x}_{q}]\) is a continuous map. A continuous function from \(\Delta_{q}\) to a topological space \(X\) is called a _singular \(q\)-simplex_ in \(X\). In particular, any affine map \([\mathbf{x}_{0},...,\mathbf{x}_{q}]:\Delta_{q}\to\mathbb{R}^{n}\) is a singular \(q\)-simplex in \(\mathbb{R}^{n}\). For \(q\in\mathbb{N}\) and \(i\in\{0,1,...,q\}\) we define \(f_{q}^{i}=[\mathbf{e}_{0},...,\widehat{\mathbf{e}_{i}},...,\mathbf{e}_{q}]\). In other words, \(f_{q}^{i}\) is a singular \((q-1)\)-simplex in \(\mathbb{R}^{q+1}\). One can see that the image of \(f_{q}^{i}\) is the convex hull of the set \(\{\mathbf{e}_{0},...,\widehat{\mathbf{e}_{i}},...,\mathbf{e}_{q}\}\), which is the \(i\)-th \((q-1)\)-face of the geometric simplex \(\Delta_{q}\) [25].
**Definition 2**.: _Let \(X\) be a topological space, \(R\) a commutative ring with identity, and \(q\) a non-negative integer. We define \(S_{q}(X;R)\) as the free \(R\)-module generated by all continuous maps \(\sigma:\Delta_{q}\to X\). For convenience, we usually define \(S_{q}(X;R)=0\) for \(q<0\)._
The singular simplexes give us a way to express geometric simplexes in arbitrary topological spaces. In Euclidean spaces, one can explore the faces as boundaries of geometric simplexes by using convex analysis, while it is not applicable in general spaces. In algebraic topology, we use the following boundary maps to read the boundary data of singular simplexes.
**Definition 3**.: _Let \(X\) be a topological space, \(R\) a commutative ring with identity, and \(q\in\mathbb{N}\) a positive integer. The **q-boundary map** is the function \(\partial_{q}:S_{q}(X;R)\to S_{q-1}(X;R)\) that linearly extends the mapping_
\[\sigma\longmapsto\sum_{i=0}^{q}(-1)^{i}\cdot\sigma\circ f_{q}^{i}\]
_for all continuous \(\sigma:\Delta_{q}\to X\). Note that \(\partial_{q}\) is well-defined since each \(\sigma\circ f_{q}^{i}\) is a singular \((q-1)\)-simplex in \(X\)._
Because \(S_{q}(X;R)\) is defined as the zero space for \(q<0\), we also define \(\partial_{q}\) as the zero maps for \(q\leq 0\). The following proposition is the foundation of homology theory.
**Proposition 1** ([25], (9.2)).: _Let \(X,R,q\) and \(\partial_{q}\) be defined as above. Then \(\partial_{q-1}\circ\partial_{q}=0\)._
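For example, for a singular \(2\)-simplex \(\sigma=[\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2}]\) in \(\mathbb{R}^{n}\), Definition 3 unfolds to

\[\partial_{2}\sigma=[\mathbf{x}_{1},\mathbf{x}_{2}]-[\mathbf{x}_{0},\mathbf{x}_{2}]+[\mathbf{x}_{0},\mathbf{x}_{1}]\;,\]

the signed sum of the three edges of the triangle, and one checks directly that

\[\partial_{1}(\partial_{2}\sigma)=([\mathbf{x}_{2}]-[\mathbf{x}_{1}])-([\mathbf{x}_{2}]-[\mathbf{x}_{0}])+([\mathbf{x}_{1}]-[\mathbf{x}_{0}])=0\;,\]

a special case of Proposition 1.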
The equation \(\partial_{q-1}\circ\partial_{q}=0\) shows that \(\operatorname{im}(\partial_{q})\subseteq\ker(\partial_{q-1})\) for every \(q\in\mathbb{Z}\), and hence we can define the \(q\)-th singular homology of \(X\) as the \(R\)-module
\[H_{q}(X;R)=\frac{\ker(\partial_{q})}{\operatorname{im}(\partial_{q+1})}.\]
**Notation** ([25, 51, 41, 18]).: _To simply the notations, for a topological space \(X\) and \(q\geq 0\), we use \(Z_{q}(X;R)\) and \(B_{q}(X;R)\) to denote the modules \(\ker(\partial_{q})\) and \(\operatorname{im}(\partial_{q+1})\). That is,_
\[Z_{q}(X;R)=\ker(\partial_{q}),B_{q}(X;R)=\operatorname{im}(\partial_{q+1}),\text { and }H_{q}(X;R)=\frac{Z_{q}(X;R)}{B_{q}(X;R)}. \tag{2}\]
_Chains in \(Z_{q}(X;R)\) and \(B_{q}(X;R)\) are called the **q-cycles** and **q-boundaries** of \(X\)._
Besides sending each topological space \(X\) to an \(R\)-module \(S_{q}(X;R)\), for every continuous map \(f:X\to Y\) and \(q\in\mathbb{Z}_{\geq 0}\) we can define an \(R\)-module homomorphism \(S_{q}(f;R):S_{q}(X;R)\to S_{q}(Y;R)\) that extends the mapping
\[\sigma\longmapsto f\circ\sigma\]
for all singular \(q\)-simplexes \(\sigma:\Delta_{q}\to X\). Note that the mapping is well-defined since \(f\circ\sigma\) is also a continuous map from \(\Delta_{q}\) to \(Y\). This observation leads to the following proposition.
**Proposition 2** ([25]).: _Let \(\mathfrak{Top}\) and \(\mathfrak{Mod}_{R}\) be the categories of topological spaces and \(R\)-modules. For each \(q\in\mathbb{Z}_{\geq 0}\), the assignments \(X\in\operatorname{Ob}(\mathfrak{Top})\mapsto S_{q}(X;R)\) and \(f\in\operatorname{Hom}_{\mathfrak{Top}}(X,Y)\mapsto S_{q}(f;R)\) form a functor from \(\mathfrak{Top}\) to \(\mathfrak{Mod}_{R}\)._
In fact, for a continuous map \(f:X\to Y\), one can prove that the rectangles in the ladder
\[\begin{array}{ccc}S_{q}(X;R)&\xrightarrow{\ \partial_{q}\ }&S_{q-1}(X;R)\\ \big\downarrow{\scriptstyle S_{q}(f;R)}&&\big\downarrow{\scriptstyle S_{q-1}(f;R)}\\ S_{q}(Y;R)&\xrightarrow{\ \partial_{q}\ }&S_{q-1}(Y;R)\end{array}\]
of \(R\)-modules and \(R\)-module homomorphisms are commutative. Therefore, for every \(q\), this ladder induces an \(R\)-module homomorphism
\[H_{q}(f;R):H_{q}(X;R)\longrightarrow H_{q}(Y;R)\]
that sends each equivalence class \([c]\) in \(H_{q}(X;R)\) to the class \([S_{q}(f;R)(c)]\) in \(H_{q}(Y;R)\). Furthermore, we can see that the assignment \(H_{q}(\cdot;R)\) of topological spaces and continuous maps also forms a functor from \(\mathfrak{Top}\) to \(\mathfrak{Mod}_{R}\):
**Proposition 3** ([25]).: _Let \(\mathfrak{Top}\) and \(\mathfrak{Mod}_{R}\) be the categories of topological spaces and \(R\)-modules. For each \(q\in\mathbb{Z}_{\geq 0}\), the assignments \(X\in\operatorname{Ob}(\mathfrak{Top})\mapsto H_{q}(X;R)\) and \(f\in\operatorname{Hom}_{\mathfrak{Top}}(X,Y)\mapsto H_{q}(f;R)\) form a functor from \(\mathfrak{Top}\) to \(\mathfrak{Mod}_{R}\)._
An important purpose of developing singular homology is to detect holes in a topological space in any dimension. This property of singular homology is sometimes called the _Poincare lemma_ of singular homology. We state this lemma as follows.
**Proposition 4** (Corollary (15.5), [25]).: _Let \(n\geq 1\) be a positive integer, and let_
\[S^{n}=\{(x_{1},x_{2},...,x_{n+1})\in\mathbb{R}^{n+1}:x_{1}^{2}+\cdots+x_{n+1}^{ 2}=1\}\]
_be the \(n\)-sphere in \(\mathbb{R}^{n+1}\). Then, for every commutative ring \(R\) with identity and a non-negative integer \(q\geq 0\), we have_
\[H_{q}(S^{n};R)\simeq\begin{cases}R&\text{if $q=n$ or $q=0$},\\ 0&\text{otherwise}.\end{cases} \tag{3}\]
_In particular, for every topological space \(X\), we have \(H_{0}(X;R)\simeq R^{m}\), where \(m\) is the number of path-connected components of \(X\), and each path-connected component of \(X\) can be represented by a constant function from \([0,1]\) to \(X\)._
The Poincare lemma provides us with a reliable measurement to detect the number of \(q\)-dimensional holes in a topological space. This number is called the \(q\)-th _Betti number_.
**Definition 4** ([25]).: _Let \(R\) be a PID. For any topological space \(X\) and integer \(q\geq 0\), we define the **q-th Betti number** \(\beta_{q}=\beta_{q}(X)\) of \(X\) to be the rank of the \(R\)-module \(H_{q}(X;R)\). In particular, when \(R=F\) is a field, we have \(\beta_{q}=\dim_{F}\ H_{q}(X;F)\)._
In applications, we often set \(R\) as the binary field \(\mathbb{Z}_{2}=\mathbb{Z}/2\mathbb{Z}\) and simplify the notation \(H_{q}(X;\mathbb{Z}_{2})\) to \(H_{q}(X)\). In the paper, we will focus on homology over \(\mathbb{Z}_{2}\) and the singular homology of (binary) images (see Section 3).
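As a quick sanity check, the homology of small complexes can be computed with standard TDA software. The following minimal sketch uses the Gudhi package (which we use for the demonstrations in Section 4); the complex below, the boundary of a triangle, is a triangulation of the circle \(S^{1}\):

```python
import gudhi

# The boundary of a triangle (three vertices, three edges) triangulates
# the circle S^1, so we expect Betti numbers beta_0 = beta_1 = 1.
st = gudhi.SimplexTree()
for edge in [[0, 1], [1, 2], [0, 2]]:
    st.insert(edge)  # inserting an edge also inserts its vertices

# Gudhi exposes Betti numbers once persistence has been computed;
# homology_coeff_field=2 matches the Z_2 coefficients used in this paper.
st.persistence(homology_coeff_field=2, persistence_dim_max=True)
print(st.betti_numbers())  # [1, 1]
```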
### Persistent Homology
Homology detects the hole structure in a given topological space, while it may omit some geometry of the based space. For example, two geometric objects with a single 1-dimensional hole in different sizes share the same first homology group (Figure 1). As a generalization of homology, _persistent homology_ (PH) concerns sequences of topological spaces and their homologies. It was motivated by the works related to the Morse theory of Patrizio Frosini [19] and Vanessa Robins [45] in the 1990s. In Morse theory, a height function \(f:M\to\mathbb{R}\) on a smooth manifold \(M\) can form a sublevel
set filtration of subspaces of \(M\)[18]. The topological changes of such sublevel sets (e.g., the changes of Betti numbers) track the shape of \(M\) along the direction of the height function and hence a descriptor (or fingerprint) of \(M\). Persistent homology of height functions is now a well-known and fundamental tool in Morse theory and has many applications in theory [40, 5, 49] and data science [10, 15, 26, 37].
More generally, besides the smooth structures, suppose we have a sequence \(X_{1}\xrightarrow{f_{1}}X_{2}\xrightarrow{f_{2}}\cdots\xrightarrow{f_{n-1}}X_ {n}\) of topological spaces and continuous maps, then the functoriality of singular homology shown in Proposition 3 induces a sequence of homologies as follows:
\[H_{q}(X_{1};R)\xrightarrow{H_{q}(f_{1})}H_{q}(X_{2};R)\xrightarrow{H_{q}(f_{2 })}\cdots\xrightarrow{H_{q}(f_{n-1})}H_{q}(X_{n};R),\]
where \(q\) is an arbitrary non-negative integer, and the \(H_{q}(X_{i};R)\), \(H_{q}(f_{i})\) are \(R\)-modules and \(R\)-module homomorphisms (vector spaces and linear maps when \(R\) is a field such as \(\mathbb{Z}_{2}\)). Because continuous maps can deform the geometry of spaces (e.g., sizes, lengths, and connectivity), the changes in homological cycles and Betti numbers depict how the hole structures in the spaces change along the continuous deformation.
Computing homologies connected by continuous maps is challenging in real applications, so one usually considers a chain of filtered topological spaces with subspace relations. A tower of such topological spaces is called a _filtration_. We list the formal definition of filtration as follows.
**Definition 5** ([18]).: _A **filtration** of topological spaces is a sequence \(\emptyset=X_{0},X_{1},X_{2},...,X_{n}\) of topological spaces such that \(X_{i}\) is a subspace of \(X_{i+1}\) for each \(i\in\{0,1,...,n-1\}\). We usually use the chain_
\[\mathcal{F}:\emptyset=X_{0}\subseteq X_{1}\subseteq X_{2}\subseteq\cdots \subseteq X_{n}\]
_of topological spaces to denote a filtration of topological spaces._
Because \(H_{q}(\cdot;R):\mathfrak{Top}\rightarrow\mathfrak{Mod}_{R}\) is a functor, a filtration of topological spaces \(\emptyset=X_{0}\subseteq X_{1}\subseteq\cdots\subseteq X_{n}\) and a non-negative integer \(q\geq 0\) induce a sequence of \(R\)-modules and \(R\)-module homomorphisms:
\[\mathrm{PH}_{q}:0=H_{q}(\emptyset;R)\xrightarrow{\rho_{0,1}}H_{q}(X_{1};R) \xrightarrow{\rho_{1,2}}H_{q}(X_{2};R)\rightarrow\cdots\to H_{q}(X_{n};R) \tag{4}\]
where the \(R\)-module homomorphism \(\rho_{i,j}:H_{q}(X_{i};R)\to H_{q}(X_{j};R)\) for \(i\leq j\) is induced by the inclusion \(X_{i}\hookrightarrow X_{j}\). Based on the functoriality of singular homology on the sequence (4), we define \(\rho_{i,j}=\rho_{j-1,j}\circ\rho_{j-2,j-1}\circ\cdots\circ\rho_{i,i+1}\) for every \(i\leq j\) in \(\{0,1,...,n\}\), then the \(\rho_{i,j}\) is also the \(R\)-module homomorphism induced by the inclusion map \(X_{i}\hookrightarrow X_{j}\).
**Definition 6** ([18]).: _Suppose \(\mathcal{F}:\emptyset=X_{0}\subseteq X_{1}\subseteq\cdots\subseteq X_{n}\) is a filtration of topological spaces. Then, for every ring \(R\) and \(q\in\mathbb{Z}_{\geq 0}\), we call the sequence defined in (4) the **q-th persistent homology** of the filtration \(\mathcal{F}\)._
One of the primary purposes of persistent homology is to track the lifespans of local holes, i.e., the births/deaths of connected components, loops, and higher dimensional voids. To tackle this problem, H. Edelsbrunner and J. Harer proposed the _persistence barcode_ of persistent homology to detect such topological changes [17, 18]. We refer to the definition of persistence barcodes as follows.
**Definition 7** ([17, 18]).: _Suppose \(\emptyset=X_{0}\subseteq X_{1}\subseteq\cdots\subseteq X_{n}\) is a filtration of topological spaces and \(0\to H_{q}(X_{1};F)\rightarrow\cdots\to H_{q}(X_{n};F)\) is the induced \(q^{\mathrm{th}}\) persistent homology over a field \(F\). Let \(s_{i}\) be an element in \(H_{q}(X_{i};F)\) (\(i\geq 1\)). Then we have the following definitions:_
* \(s_{i}\) _is said to **be born** at \(i\) if \(s_{i}\notin\mathrm{im}(\rho_{i-1,i})\); \(i\) is called the **birth** of \(s_{i}\);_
* \(s_{i}\) _is said to **die** at \(j\) if \(\rho_{i,j-1}(s_{i})\notin\mathrm{im}(\rho_{i-1,j-1})\) and \(\rho_{i,j}(s_{i})\in\mathrm{im}(\rho_{i-1,j})\); \(j\) is called the **death** of \(s_{i}\)._
_If \(s_{i}\) is still alive at \(n\), we define the death of \(s_{i}\) to be \(+\infty\) (up to this filtration). The birth/death tuple \((i,j)\) of \(s_{i}\) is called the **persistence barcode** of the element \(s_{i}\in H_{q}(X_{i};F)\). The multiset of all persistence barcodes of non-repeated representative generators in all \(H_{q}(X_{i};F)\) is called the **persistence diagram** of the filtration._
For example, by considering the geometry of 2D black objects, rows in Figure 1 define two filtrations of subspaces in \(\mathbb{R}^{2}\), and the induced first persistent homologies (over \(\mathbb{Z}_{2}\)) are
\[\mathbb{Z}_{2}\xrightarrow{\mathrm{id}_{\mathbb{Z}_{2}}}\mathbb{Z}_{2}\xrightarrow{0}0\xrightarrow{0}0\xrightarrow{0}0\qquad\text{and}\qquad\mathbb{Z}_{2}\xrightarrow{\mathrm{id}_{\mathbb{Z}_{2}}}\mathbb{Z}_{2}\xrightarrow{\mathrm{id}_{\mathbb{Z}_{2}}}\mathbb{Z}_{2}\xrightarrow{\mathrm{id}_{\mathbb{Z}_{2}}}\mathbb{Z}_{2}\xrightarrow{0}0. \tag{5}\]
By definition, the \(1\)-dimensional hole in Figure 1(a)-(e) has the barcode \((0,2)\). On the other hand, the hole in Figure 1(f)-(j) has barcode \((0,4)\).
There are many different ways to construct filtrations and compute their persistent homology. A typical one is the Vietoris-Rips complexes for the point-cloud data. For a (finite) set \(\mathcal{X}\) in the \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\) and
a fixed positive real number \(\epsilon>0\), one explores the intersections of \(n\)-dimensional balls centered at the points \(x\) in \(\mathcal{X}\) with radius \(\epsilon\). Regarding the points in \(\mathcal{X}\) as the vertices of a simplicial complex, larger overlaps among the balls lead to higher-dimensional simplexes in \(\mathbb{R}^{n}\). The strategy of the Vietoris-Rips complex is to enlarge the radius to construct a filtration of simplicial complexes [39, 23, 14].
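As an illustration of this construction, the following minimal sketch builds a Vietoris-Rips filtration with Gudhi (the four points are made up for the example):

```python
import gudhi

# Four corners of a unit square: the Rips filtration first connects the
# sides (creating a loop) and later fills in the diagonals (killing it).
pts = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
rips = gudhi.RipsComplex(points=pts, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)
print(st.persistence())  # list of (dimension, (birth, death)) pairs
```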
As shown in Figure 1 and equation (5), besides point-cloud data, one can also construct filtrations of digital images and compute their persistent homology. We refer to an \(m\)-dimensional digital image as a function \(f:P\rightarrow\mathbb{R}_{\geq 0}\) from a non-empty subset \(P\) of \(\mathbb{Z}^{m}\) to the set of all non-negative real numbers (cf. [11]). An image \(f\) is called _binary_ if its range is contained in the binary set \(\{0,1\}\) and _grayscale_ otherwise. For a binary image, the _preimage_ of zero, \(f^{-1}(0)\), is the set of all _black pixels_ of \(f\), and \(f^{-1}(1)\) is the set of all _white pixels_ of \(f\). Viewing each black pixel as a closed cube in \(\mathbb{R}^{m}\), we regard \(f^{-1}(0)\) as a subspace of \(\mathbb{R}^{m}\) and consider its topological properties. The first row in Figure 2 provides examples of \(2\)-dimensional grayscale and binary digital images.
As in Figure 1, one can construct filtrations of images by applying image processing techniques to a given binary one. Another typical method of building filtrations is to operate on the sub-level sets of a grayscale image. For an image \(f:P\rightarrow\mathbb{R}_{\geq 0}\) and a threshold \(t\in\mathbb{R}\), we define a binary image \(f_{t}:P\rightarrow\{0,1\}\) by setting \(f_{t}(x)=0\) if \(f(x)\leq t\) and \(f_{t}(x)=1\) otherwise. Then \(f_{t_{1}}^{-1}(0)\subseteq f_{t_{2}}^{-1}(0)\subseteq\cdots\subseteq f_{t_{n}}^{-1}(0)\) for \(t_{1}\leq t_{2}\leq\cdots\leq t_{n}\). The second and third rows in Figure 2 illustrate how sub-level sets of a grayscale image form a filtration of black pixels. In particular, the \(0\)-th and \(1\)-st persistence diagrams of the filtration are \(\{(0,+\infty)\}\) and \(\{(0,3),(2,3)\}\). For readers who are interested in persistent homology on digital images, see [36] for more information.
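In software, this thresholding construction is exactly the sublevel-set filtration of a cubical complex. A minimal sketch with NumPy and Gudhi (the \(4\times 4\) grayscale values are made up for the example):

```python
import numpy as np
import gudhi

# A small grayscale image; the zero-valued border forms a loop at
# threshold 0 that is only filled when the maximal pixel value 3 enters,
# so we expect a 1-dimensional barcode born at 0 and dying at 3.
g = np.array([[0, 0, 0, 0],
              [0, 2, 1, 0],
              [0, 1, 3, 0],
              [0, 0, 0, 0]], dtype=float)

cc = gudhi.CubicalComplex(top_dimensional_cells=g)
print(cc.persistence())  # barcodes (dim, (birth, death)) of the filtration
```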
In this paper, we focus on \(2\)-dimensional binary images and their local homology. We combine image segmentation techniques, local homology, and persistence barcodes to illustrate how to estimate and detect the positions of holes in 2D binary images. The combination of this detection method with more image processing techniques (such as mathematical morphology and sub-level set filtration) will be our future work.
## 3 Our Approaches
The section is separated into three parts. First, we quote the definitions of local systems and short persistent homology in our previous work [30]. _Local systems_ and _short persistent homology_ induce a cellular sheaf structure of topological spaces and can depict the spatial merging relations via their global/local sections [30]. We discuss the relationship between hole positions, global/local sections, and persistence barcodes on local systems (Section 3.1). Second, we introduce how we adapt the theory to digital images and implement the method (Section 3.2). Finally, we discuss some properties of the proposed framework, such as the relationship between local systems, the location of holes, and image noises (Section 3.3).
### Persistent Homology of Local Systems
For a topological space \(X\) and a concerned local region \(A\) of \(X\), the _relative homology_ \(H_{q}(X,A)\) considers the equivalence classes of cycles in \(X\) relative to the subspace \(A\). One can formulate the relative homology of \(X\) and \(A\) by \(H_{q}(X,A)=Z_{q}(X,A)/B_{q}(X,A)\), where \(Z_{q}(X,A)=\{c\in S_{q}(X):\partial_{q}(c)\in S_{q-1}(A)\}=\partial_{q}^{-1}(S_{q-1}(A))\) is the set of all chains in \(S_{q}(X)\) with boundaries in \(S_{q-1}(A)\), and \(B_{q}(X,A)=B_{q}(X)+S_{q}(A)\) is the submodule generated by all \(q\)-boundaries of \(X\) and \(q\)-chains in \(A\). Elements in \(Z_{q}(X,A)\) and \(B_{q}(X,A)\) are called relative \(q\)-cycles and relative \(q\)-boundaries of \(X\), respectively [25].
Roughly speaking, relative homology detects holes in \(X\) except for those that are totally contained in \(A\). More precisely, one can apply the snake lemma to the short exact sequence \(0\to S_{\bullet}(A)\xrightarrow{\iota_{\bullet}}S_{\bullet}(X)\xrightarrow{\pi_{\bullet}}S_{\bullet}(X)/S_{\bullet}(A)\to 0\), with the canonical inclusion and projection, to obtain the long exact sequence

\[\cdots\xrightarrow{\delta_{q+1}}H_{q}(A)\xrightarrow{\iota_{q}}H_{q}(X)\xrightarrow{\pi_{q}}H_{q}(X,A)\xrightarrow{\delta_{q}}H_{q-1}(A)\xrightarrow{\iota_{q-1}}H_{q-1}(X)\xrightarrow{\pi_{q-1}}H_{q-1}(X,A)\xrightarrow{\delta_{q-1}}\cdots. \tag{6}\]
One can use the barcode representation to detect hole structures in the spaces \(H_{\bullet}(A)\), \(H_{\bullet}(X)\), and \(H_{\bullet}(X,A)\). For example, a non-zero element in \(H_{q}(X)\setminus\operatorname{im}(\iota_{q})\) represents a hole in \(X\) that does not come from the region \(A\). On the other hand, by exactness, \(c\in H_{q}(X)\) dies at \(H_{q}(X,A)\) precisely when the class \(c\) comes from a hole in \(A\).
**Remark**.: _Because the sequence in (6) is exact, every barcode \((b,d)\) arising from it has lifespan \(d-b=1\)._
Relative homology can capture holes contributed by \(A\), \(X\setminus A\), or both. However, it is difficult and expensive to implement and compute due to the complicated data representation. This paper proposes a relatively efficient method to detect hole relations and positions via persistent homology. To achieve this goal, we introduce here two main ideas proposed in our previous work, called _local system_ and _short filtration_[30].
**Definition 8** ([30]).: _Let \(X\) be a topological space and \(X_{1},X_{2}\) be subspaces of \(X\). The triad \((X,X_{1},X_{2})\) is called a **local system** (or an **admissible triad**) if \(\mathrm{cl}_{X}(X_{1})\cap\mathrm{cl}_{X}(X_{2})=\emptyset\)._
For any topological space \(X\) and its subspaces \(X_{1}\) and \(X_{2}\), we have the following definition.
**Definition 9** ([30]).: _Let \((X,X_{1},X_{2})\) be a triad of topological spaces with \(X_{1}\subseteq X\) and \(X_{2}\subseteq X\). This triad leads to two filtrations \(\emptyset\subseteq X_{1}\subseteq X_{1}\cup X_{2}\subseteq X\) and \(\emptyset\subseteq X_{2}\subseteq X_{1}\cup X_{2}\subseteq X\). We call them **short filtrations** of the triad \((X,X_{1},X_{2})\)._
Focusing on the first one in Definition 9, the birth information at \(H_{\bullet}(X_{1}\cup X_{2};F)\) depicts whether \(X_{2}\) contains a homological generator that cannot be represented via generators in \(H_{\bullet}(X_{1};F)\). When \(\mathrm{cl}_{X}(X_{1})\cap\mathrm{cl}_{X}(X_{2})=\emptyset\), the homology \(H_{\bullet}(X_{1}\cup X_{2};F)\) is canonically isomorphic to the space \(H_{\bullet}(X_{1};F)\oplus H_{\bullet}(X_{2};F)\) since \(X_{1}\) and \(X_{2}\) are two path-connected components of \(X_{1}\cup X_{2}\). In this case, every generator \(s_{2}\) in \(H_{\bullet}(X_{2};F)\) is born at \(H_{\bullet}(X_{1}\cup X_{2};F)\) of the persistent homology \(0\to H_{\bullet}(X_{1};F)\to H_{\bullet}(X_{1}\cup X_{2};F)\to H_{\bullet}(X;F)\) and dies at \(H_{\bullet}(X;F)\) if there is an \(s_{1}\in H_{\bullet}(X_{1};F)\) such that \(s_{1}\) and \(s_{2}\) represent the same homological generator in \(H_{\bullet}(X;F)\). This property will benefit computing the homological changes of holes in \(X_{1}\), \(X_{2}\), and \(X\). Furthermore, we will show in Section 3.2 that the condition \(\mathrm{cl}_{X}(X_{1})\cap\mathrm{cl}_{X}(X_{2})=\emptyset\) can be easily established in image data through elementary image processing techniques.
Figure 2: First row: a \(6\times 6\) image domain \(P\) in \(\mathbb{Z}^{2}\), a grayscale image \(g:P\to\{0,1,2,3\}\), and a binary image \(f:P\to\{0,1\}\). Figures (c) and (d) are two different representations for the image \(f\). In a binary image \(f\) as in (d), pixels with a value of \(0\) represent the black pixels of the image. Second row: a filtration of binary images made by image \(g\) and thresholds \(0,1,2,\) and \(3\). Third row: the white-black pixel representations of images in the second row.
In [30], we applied the two filtrations of a local system \((X,X_{1},X_{2})\) to construct the following _cellular sheaf_ structure:
\[H_{q}(X_{1};F)\xrightarrow{\ \rho_{1}\ }H_{q}(X;F)\xleftarrow{\ \rho_{2}\ }H_{q}(X_{2};F)\]
where \(q\) is any non-negative integer, \(F\) is a fixed field, and \(\rho_{1},\rho_{2}\) are the \(F\)-linear transformations induced by the inclusions \(X_{1}\hookrightarrow X\) and \(X_{2}\hookrightarrow X\). We often call the maps \(\rho_{1},\rho_{2}\)_restriction maps_. A pair \((s_{1},s_{2})\in H_{q}(X_{1};F)\oplus H_{q}(X_{2};F)\) is called a _global section_ of the sheaf if \(\rho_{1}(s_{1})=\rho_{2}(s_{2})\). We use \(\Gamma\) to denote the subspace of all global sections in \(H_{q}(X_{1};F)\oplus H_{q}(X_{2};F)\), and it can be sculptured by the following theorem.
**Theorem 1**.: _For the following sheaf structure of \(F\)-vector spaces and \(F\)-linear maps:_
\[V\xrightarrow{\ f\ }P\xleftarrow{\ g\ }W\]
_we define \(\phi:V\oplus W\to P\) by \((v,w)\longmapsto f(v)-g(w)\). Then \(\phi\) is also an \(F\)-linear map and \((V\oplus W)/\Gamma\simeq\mathrm{im}(\phi)\), where \(\Gamma=\{(v,w):f(v)=g(w)\}\) is the space of global sections._
_In particular, \(\mathrm{dim}(\Gamma)=\mathrm{dim}(V)+\mathrm{dim}(W)-\mathrm{dim}(\mathrm{im} (\phi))\) if the spaces \(V,W,P\) are finite-dimensional. In addition, \(\mathrm{dim}(\Gamma)=\mathrm{dim}(V)+\mathrm{dim}(W)-\mathrm{dim}(P)\) if \(\phi\) is onto._
Proof.: It is evident that \(\phi\) is \(F\)-linear, and \((v,w)\in\ker(\phi)\) if and only if \(f(v)=g(w)\); hence \(\Gamma=\ker(\phi)\). By the first isomorphism theorem of modules, the theorem follows.
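In finite dimensions, \(\dim(\Gamma)\) can be computed directly from matrix representations of \(f\) and \(g\). A minimal sketch (the matrices are made up; we work over \(\mathbb{R}\) for illustration, whereas computations over \(\mathbb{Z}_{2}\) would use Gaussian elimination over GF(2) instead of numerical rank):

```python
import numpy as np

# f : V -> P and g : W -> P as matrices (columns index bases of V and W).
f = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])  # dim V = 2, dim P = 3
g = np.array([[1.0],
              [1.0],
              [0.0]])       # dim W = 1

# phi(v, w) = f(v) - g(w) is represented by the block matrix [f | -g].
phi = np.hstack([f, -g])
rank_phi = np.linalg.matrix_rank(phi)

# Theorem 1: dim(Gamma) = dim V + dim W - dim(im(phi)).
dim_gamma = f.shape[1] + g.shape[1] - rank_phi
print(dim_gamma)  # 2 + 1 - 2 = 1: the sections (v, w) with v = (w, w)
```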
Let \((X,X_{1},X_{2})\) be a local system of topological spaces and \(\Gamma\) the global section space of the sheaf structure \(H_{q}(X_{1};F)\to H_{q}(X;F)\gets H_{q}(X_{2};F)\). The examples shown in Figure 3 depict that the vector spaces \(H_{q}(X_{1};F)\), \(H_{q}(X_{2};F)\), \(H_{q}(X_{1}\cup X_{2};F)\), \(H_{q}(X;F)\), and \(\Gamma\) can be totally different. In other words, the global section space provides information beyond the homology of \(X_{1},X_{2},X_{1}\cup X_{2}\), and \(X\). Actually, suppose we have a sequence \((X_{i},X_{i1},X_{i2})\) of local systems that satisfies \(X_{i1}\subseteq X_{(i+1)1}\), \(X_{i2}\subseteq X_{(i+1)2}\), and \(X_{i}\subseteq X_{i+1}\); then we have the following commutative diagram:
\[\begin{array}{ccccc}H_{q}(X_{i1};F)&\longrightarrow&H_{q}(X_{i};F)&\longleftarrow&H_{q}(X_{i2};F)\\ {\scriptstyle\phi_{i1}}\big\downarrow&&\big\downarrow{\scriptstyle\phi_{i}}&&\big\downarrow{\scriptstyle\phi_{i2}}\\ H_{q}(X_{(i+1)1};F)&\longrightarrow&H_{q}(X_{i+1};F)&\longleftarrow&H_{q}(X_{(i+1)2};F)\end{array}\]

where \(\phi_{ij}\) and \(\phi_{i}\) are the \(F\)-linear maps induced by the inclusions. Writing \(\Gamma_{i}\) for the global section space of the \(i\)-th local system, one can check that the following sequence is also valid:

\[\Gamma_{1}\longrightarrow\Gamma_{2}\longrightarrow\cdots\longrightarrow\Gamma_{n}\;.\]
In other words, except for computing single global section spaces, one can also consider the persistent homology of global section spaces induced by any filtered local systems of topological spaces.
Theorem 1 presents a way to compute global section spaces. However, on many occasions, computing the image of \(\phi\) in the theorem may be infeasible. To tackle this, we previously proposed an approximation method using persistent homology [30]. We quote the method as the following theorem.
**Theorem 2** (Theorem 2.3.1 [30]).: _Let \(R\) be a commutative ring with identity. Let \((X,X_{1},X_{2})\) be a local system of topological spaces and \(q\) a non-negative integer. Let \(\mathcal{G}_{1}\) be the short filtration \(\emptyset\subseteq X_{1}\subseteq X_{1}\cup X_{2}\subseteq X\) and \(s_{2}\in H_{q}(X_{2};R)\) a non-zero element. Then the following are equivalent:_
* _There is an_ \(s_{1}\in H_{q}(X_{1};R)\) _such that_ \((s_{1},s_{2})\in H_{q}(X_{1};R)\oplus H_{q}(X_{2};R)\) _is a global section;_
* \(\widetilde{s_{2}}:=\omega_{2}(s_{2})\) _has barcode_ \((2,3)\) _in the PH_ \(\mathcal{P}_{q}(\mathcal{G}_{1}):0\to H_{q}(X_{1};R)\to H_{q}(X_{1}\cup X_{2};R )\to H_{q}(X;R)\)_._
For a local system \((X,X_{1},X_{2})\), the number of barcodes \((2,3)\) in \(\mathcal{P}_{q}(\mathcal{G}_{1})\) records how many non-zero homological generators in \(H_{q}(X_{2};R)\) merge to a generator in \(H_{q}(X_{1};R)\). In [30], we defined it as the \(q\)_-th local merging number_.
**Definition 10** ([30]).: _Let \((X,X_{1},X_{2})\) be a local system of topological spaces and \(q\geq 0\). We define the **q-th local merging number** of \(X_{1}\) and \(X_{2}\) as the number of barcodes \((2,3)\) in \(\mathcal{P}_{q}(\mathcal{G}_{1})\) and denote it by \(m_{q}(X_{1};X_{2})\)._
We use the two examples in Figure 3 to explain the local merging numbers. For the first row, we have \(m_{0}(X_{1};X_{2})=5\) since there are \(5\) connected components of \(X_{2}\) that merge to \(X_{1}\); likewise, \(m_{0}(X_{2};X_{1})=5\). Similarly, the values \(m_{0}(X_{1};X_{2})\) and \(m_{0}(X_{2};X_{1})\) of the second row are \(2\) and \(4\), respectively. In particular, these two examples show that the local merging numbers \(m_{0}(X_{1};X_{2})\) and \(m_{0}(X_{2};X_{1})\) are not equal in general. Actually, one can prove that \(\max\{m_{0}(X_{1};X_{2}),m_{0}(X_{2};X_{1})\}\leq\dim(\Gamma)\leq m_{0}(X_{1};X_{2})+m_{0}(X_{2};X_{1})\) [29].
When \(q=0\), the local merging number \(m_{0}(X_{1};X_{2})\) records how many connected components in \(X_{2}\) connect to components in \(X_{1}\) simultaneously. In our previous work, we showed that local regions with high \(0\)-dimensional local merging numbers are likely to be more joint parts of the ambient space and have the potential to analyze handwritten text and texture data [30, 29]. Those works focused on local merging numbers in dimension \(0\) and \((2,3)\) barcodes in short filtrations, while the geometric meanings of higher-dimensional merging numbers and \((3,+\infty)\) barcodes remained unexplored.
In the following theorem, we show that the number of barcodes \((3,+\infty)\) in a short filtration can verify whether \(X_{1}\) and \(X_{2}\) contribute a hole (with dimension \(\geq 1\)) in \(X\).
**Theorem 3**.: _Let \(F\) be a field. Let \((X,X_{1},X_{2})\) be a local system of topological spaces and \(q\) a non-negative integer. Let \(\Gamma\) be the global section space of the sheaf structure \(H_{q}(X_{1};F)\to H_{q}(X;F)\gets H_{q}(X_{2};F)\). Then the number of barcodes \((3,+\infty)\) in the PH \(\mathcal{P}_{q}(\mathcal{G}_{1}):0\to H_{q}(X_{1};F)\to H_{q}(X_{1}\cup X_{2};F)\to H_{q}(X;F)\) equals_
\[\dim_{F}(H_{q}(X;F))-\dim_{F}(H_{q}(X_{1};F))-\dim_{F}(H_{q}(X_{2};F))+\dim_{F} (\Gamma).\]
Proof.: Let \(\rho_{1}:H_{q}(X_{1};F)\xrightarrow{}H_{q}(X;F)\) and \(\rho_{2}:H_{q}(X_{2};F)\xrightarrow{}H_{q}(X;F)\) be the canonical linear transformations that are induced by the inclusions. Define \(\phi=\rho_{1}-\rho_{2}:H_{q}(X_{1})\oplus H_{q}(X_{2})\xrightarrow{}H_{q}(X)\), then
\[\dim_{F}(\Gamma)=\dim_{F}(H_{q}(X_{1};F))+\dim_{F}(H_{q}(X_{2};F))-\dim_{F}( \operatorname{im}(\phi)) \tag{7}\]
by Theorem 1. Because \(H_{q}(X_{1}\cup X_{2};F)\) is canonically isomorphic to \(H_{q}(X_{1};F)\oplus H_{q}(X_{2};F)\), the images of \(\rho_{1}-\rho_{2}\) and the map \(H_{q}(X_{1}\cup X_{2};F)\xrightarrow{}H_{q}(X;F)\) in \(\mathcal{P}_{q}(\mathcal{G}_{1})\) are equal. Then the number of barcodes \((3,+\infty)\) in the persistent homology \(\mathcal{P}_{q}(\mathcal{G}_{1})\) counts the dimension of the space \(H_{q}(X;F)/\mathrm{im}(\phi)\). Therefore,
\[\#\{\text{barcode }(3,+\infty)\text{ in }\mathcal{P}_{q}(\mathcal{G}_{1})\}= \dim_{F}(H_{q}(X;F))-\dim_{F}(\mathrm{im}(\phi)). \tag{8}\]
By plugging equation (7) into equation (8), the theorem follows.
If \(c\in Z_{q}(X_{1};F)\subseteq Z_{q}(X;F)\) is a \(q\)-cycle that represents a \(q\)-dimensional hole in \(X_{1}\), then \(c\) must have a barcode \((1,\star)\) in the persistent homology \(\mathcal{P}_{q}(\mathcal{G}_{1})\). On the other hand, if \(c\in Z_{q}(X_{2};F)\subseteq Z_{q}(X;F)\) represents a hole in \(X_{2}\), then it has a barcode \((2,\star)\). In other words, the number of barcodes \((3,+\infty)\) in the persistent homology \(\mathcal{P}_{q}(\mathcal{G}_{1})\) records how many \(q\)-holes in \(X\) are "supported" by both \(X_{1}\) and \(X_{2}\). In particular, removing either \(X_{1}\) or \(X_{2}\) will make those holes disappear. Intuitively, those holes are constructed by gluing parts of \(X_{1}\) and \(X_{2}\), and hence we have the following definition.
**Definition 11**.: _Let \((X,X_{1},X_{2})\) be a local system of topological spaces and \(q\geq 0\). We define the **q-th local outer-merging number** of \(X_{1}\) and \(X_{2}\) as the number of barcodes \((3,+\infty)\) in \(\mathcal{P}_{q}(\mathcal{G}_{1})\) and denote it by \(o_{q}(X_{1};X_{2})\)._
From the above discussion, it can be seen that the local outer-merging number records the contribution of a specific local area of the topological space to the hole structure. We present local outer-merging numbers for digital images in the next section (Section 3.2). In addition, we will analyze the locations of holes in an image by segmenting the image and computing the local outer-merging numbers of the corresponding regions.
### Local Systems in Binary Images
Section 3.1 introduces the local system and its persistent homology. Theorem 2 and Theorem 3 tell us that counting the numbers of barcodes \((2,3)\) and \((3,+\infty)\) of \((X,X_{1},X_{2})\) can detect the gluing relationship of local objects in \(X\). Among them, constructing the admissible triad \((X,X_{1},X_{2})\) is the most crucial part of the calculation. For an object \(X\) in \(\mathbb{R}^{n}\) and a bounded \(A\subseteq X\), one can choose \(r_{1},r_{2}>0\) with \(r_{1}<r_{2}\) such that \(A\subseteq\mathbf{B}(\mathbf{0},r_{1})\) and define \(X_{1}=X\cap\mathbf{B}(\mathbf{0},r_{1})\) and \(X_{2}=X\cap\{\mathbf{x}\in\mathbb{R}^{n}:|\mathbf{x}|\geq r_{2}\}\). Then \(\mathrm{cl}_{X}(X_{1})\cap\mathrm{cl}_{X}(X_{2})=\emptyset\). Based on the same idea, this section presents a more efficient way to build local systems in binary images.
Figure 4: An illustration of the construction of a local system in a 2D binary image. In this example, we have \(m_{0}(X_{1};X_{2})=3\), \(o_{0}(X_{1};X_{2})=0\), \(m_{1}(X_{1};X_{2})=0\), and \(o_{1}(X_{1};X_{2})=1\).
As we introduced in Section 2.2, a \(2\)-dimensional image is identified as a non-negative real-valued function \(f:P\rightarrow\mathbb{R}_{\geq 0}\) on a discrete 2D rectangle \(P=([a,b]\times[c,d])\cap\mathbb{Z}^{2}\), where \(a,b,c,d\) are integers with \(a\leq b\) and \(c\leq d\). In the paper, we focus on the geometric realization of black pixels of a binary image and compute its homology (see Figure 2(d) and Figure 4). For a binary image \(f:P\rightarrow\{0,1\}\), we consider the black pixel set \(f^{-1}(0)\) and denote it by \(X\subseteq P\). We use a rectangle in \(P\) to cover a concerned region of \(X\), say \(R=([a_{1},b_{1}]\times[c_{1},d_{1}])\cap\mathbb{Z}^{2}\) with \(a\leq a_{1}\leq b_{1}\leq b\) and \(c\leq c_{1}\leq d_{1}\leq d\) (see Figure 4(b)). Consider
\[B=\mathbb{Z}^{2}\cap\left((\{a_{1}\}\times[c_{1},d_{1}])\cup(\{b_{1}\}\times[c _{1},d_{1}])\cup([a_{1},b_{1}]\times\{c_{1}\})\cup([a_{1},b_{1}]\times\{d_{1} \})\right)\]
as the boundary of \(R\) (see Figure 4(c)), we define \(\widehat{R}=R\setminus B\) (see Figure 4(d)). Defining \(X_{1}=X\cap\widehat{R}\) and \(X_{2}=X\setminus R\) (see Figure 4(e)-(h)), we obtain a triad \((X,X_{1},X_{2})\) with the property \(X_{1}\cap X_{2}=\emptyset\). Because \(X\), \(X_{1}\), and \(X_{2}\) are subspaces in \(\mathbb{R}^{2}\) that are formed by finitely many closed squares in \(\mathbb{R}^{2}\), we must have \(\mathrm{cl}_{X}(X_{1})\cap\mathrm{cl}_{X}(X_{2})=\emptyset\).
For example, the local system \((X,X_{1},X_{2})\) in Figure 4 has merging and outer-merging numbers \(m_{0}(X_{1};X_{2})=3\), \(o_{0}(X_{1};X_{2})=0\), \(m_{1}(X_{1};X_{2})=0\), and \(o_{1}(X_{1};X_{2})=1\). In [30], we separated a 2D image into disjoint blocks \(X_{1}^{(i)}\) (called _local patches_) and calculated the local merging numbers \(m_{0}(X_{1}^{(i)};X_{2}^{(i)})\) to form a heatmap of the image. In this paper, we mainly focus on the number \(o_{1}(X_{1};X_{2})\) to approximate the hole positions in a binary image. We show in Section 4 how to use the local system described in this section to construct local patches in the image and use them to estimate the 1D holes in the image.
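The short filtration \(\emptyset\subseteq X_{1}\subseteq X_{1}\cup X_{2}\subseteq X\) of such an image-based local system can be encoded as a cubical sublevel-set filtration: pixels of \(X_{1}\) enter at value \(1\), pixels of \(X_{2}\) at value \(2\), the remaining black pixels at value \(3\), and white pixels last. A minimal sketch with NumPy and Gudhi (the helper names and the sentinel value are our own; Gudhi's cubical complex stands in for the implementation used in Section 4):

```python
import numpy as np
import gudhi

WHITE = 4.0  # sentinel: white pixels enter the filtration after X

def short_filtration_pairs(black, r0, r1, c0, c1):
    """Barcodes of the short filtration for the window R = [r0,r1] x [c0,c1];
    `black` is a 2D boolean array with True marking black pixels."""
    vals = np.full(black.shape, WHITE)
    vals[black] = 3.0                       # black pixels, including strip B
    outside = np.ones(black.shape, dtype=bool)
    outside[r0:r1 + 1, c0:c1 + 1] = False   # complement of R
    vals[black & outside] = 2.0             # X2 = X \ R
    interior = np.zeros(black.shape, dtype=bool)
    interior[r0 + 1:r1, c0 + 1:c1] = True   # R-hat = R minus its boundary B
    vals[black & interior] = 1.0            # X1 = X ∩ R-hat
    cc = gudhi.CubicalComplex(top_dimensional_cells=vals)
    return cc.persistence()                 # list of (dim, (birth, death))

def merging_numbers(pairs, q):
    """m_q: barcodes (2,3); o_q: barcodes (3,+inf); i_q: barcodes (1,+inf).
    Deaths at the WHITE level play the role of +inf for the space X."""
    dead = lambda e: e >= WHITE             # includes e == +inf
    m = sum(1 for d, (b, e) in pairs if d == q and b == 2.0 and e == 3.0)
    o = sum(1 for d, (b, e) in pairs if d == q and b == 3.0 and dead(e))
    i = sum(1 for d, (b, e) in pairs if d == q and b == 1.0 and dead(e))
    return m, o, i
```

On the local system of Figure 4, these counts should reproduce \(m_{0}(X_{1};X_{2})=3\) and \(o_{1}(X_{1};X_{2})=1\).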
### Discussion
The organization of the section is as follows. First, we compare the proposed method with alternative methods in Section 3.3.1. Second, Section 3.3.2 discusses how to estimate the size and shape of the holes in the topological space through the local system. Finally, Section 3.3.3 discusses local systems composed of \(n\) subspaces and their global sections, which is an important future research direction for extending the theory of this paper.
#### 3.3.1 Comparison with other methods
To detect the local regions that contain pores in an \(m\times n\) binary image, some naive methods come to mind. For example, one can search every subfigure of the given image and check whether it contains hole structures. However, it is generally infeasible to examine all the
\[\binom{m}{2}\cdot\binom{n}{2}=\frac{(m^{2}-m)(n^{2}-n)}{4},\]
subfigures for large \(m\) and \(n\); for instance, for a \(500\times 700\) image as in Figure 6, this count already exceeds \(3\times 10^{10}\). Besides the computational complexity, covering an irregular hole costs a large bounding rectangle and makes the estimation less precise and compact (see Figure 5).
Figure 5: An illustration of bounding regions of the hole structure. (a) A binary image that contains a single 1-dimensional loop structure. (b) A rectangular bounding box formed by yellow and brown pixels. (c) A more compact bounding region formed by red and purple pixels.
Following our previous approach [30], we have two strategies to reduce the computational complexity through the image's local systems and sheaf information. The first is splitting the image into many pairs \((X_{1}^{(i)},X_{2}^{(i)})\) with disjoint \(X_{1}^{(i)}\)'s and computing each \(\mathcal{P}_{q}(\mathcal{G}_{i,1})\) on the filtration \(\mathcal{G}_{i,1}:\emptyset\subseteq X_{1}^{(i)}\subseteq X_{1}^{(i)}\cup X_{2}^{(i)}\subseteq X\). The second is to use a sliding-window technique to cover the entire image and compute the short persistent homology of each local window. The second strategy computes persistent homology \(O(mn)\) times, and the pore locations it generates are usually more refined than those of the first one. In the paper, we mainly follow the second strategy and show in Section 4 that the proposed barcode and local system framework can detect local holes effectively and with more concise bounding regions than bounding boxes (Figure 5 (b), (c)).
#### 3.3.2 Size Issues
As we mentioned in Section 2.2, homological generators generally lack specific geometric properties, such as the size and stability of pores (Figure 1), which cannot be detected by traditional homology or persistent homology of sub-level set filtration. In recent years, the shape and size of pore structures have become more and more important topics in bioinformatics and material science [34, 35, 52, 55, 54, 4]. Recently, some research has shown the potential and advantage of persistent homology in pore size analysis [33, 58, 32, 11].
As far as the field of image processing is concerned, the combination of persistent homology and mathematical morphology has opened up a new research direction for this field [20, 32, 11, 48]. In particular, our approach in [11] applies morphological opening and closing to measure the spatial information of black and white regions in binary images. Mathematical morphological operations can estimate the sizes of image pores, and most of the current work focuses on the global description of such spatial information, such as the number of pores with a specified morphological size and the average image pore size. However, the location information of pores in images is still limited in present methods. Through the discussion in this section, we will see that localized systems can capture both the location and size of pores, providing a richer pore analysis technique.
**Theorem 4**.: _Let \(F\) be a field. Let \((X,X_{1},X_{2})\) be a local system of topological spaces and \(q\) a non-negative integer. Then, non-zero elements in \(H_{q}(X_{1})\) and \(H_{q}(X_{2})\) in the persistent homology \(\mathcal{P}_{q}(\mathcal{G}_{1}):0\to H_{q}(X_{1};F)\to H_{q}(X_{1}\cup X_{2};F)\to H_{q}(X;F)\) have birth number \(<3\)._
Proof.: Suppose \(s_{1}\) is a non-zero element in \(H_{q}(X_{1})\); then the birth number of \(s_{1}\) is \(1\). On the other hand, the assumption \(\mathrm{cl}_{X}(X_{1})\cap\mathrm{cl}_{X}(X_{2})=\emptyset\) forces \(H_{q}(X_{1}\cup X_{2};F)\simeq H_{q}(X_{1};F)\oplus H_{q}(X_{2};F)\) canonically. Then every non-zero element in \(H_{q}(X_{2})\) must have birth number \(2\).
**Corollary 1**.: _Let \(X\) be a subspace of the \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\). Let \(F\) be a field and \(q\) a non-negative integer. For every \(c\in Z_{q}(X;F)\) with \([c]\neq 0\) in \(H_{q}(X;F)\), there is a bounded set \(X_{1}\subseteq X\) and an \(X_{2}\subseteq X\) such that \(\mathrm{cl}_{X}(X_{1})\cap\mathrm{cl}_{X}(X_{2})=\emptyset\) and \(c\in Z_{q}(X_{1};F)\). In particular, \([c]\) has a barcode \((1,\star)\) in \(\mathcal{P}_{q}(\mathcal{G}_{1})\)._
Proof.: For a chain \(c\) in \(S_{q}(X;F)\), we can write \(c=\sum_{i=1}^{n}\lambda_{i}\sigma_{i}\), where \(\lambda_{i}\in F\setminus\{0\}\) and \(\sigma_{i}:\Delta_{q}\to X\) is continuous for each \(i\). Recall that the support of \(c\), denoted by \(|c|\), is defined as the union of the images of the \(\sigma_{i}\). Because \(\Delta_{q}\) is compact and each \(\sigma_{i}\) is continuous, the support of \(c\) is a compact subset of \(X\); in particular, it is closed and bounded. Therefore, we may choose a positive number \(r_{1}\) such that \(|c|\subseteq\mathbf{B}(\mathbf{0},r_{1})\). Choose \(r_{2}>r_{1}\) and set \(X_{1}=X\cap\mathbf{B}(\mathbf{0},r_{1})\) and \(X_{2}=X\cap\{\mathbf{x}\in\mathbb{R}^{n}:|\mathbf{x}|\geq r_{2}\}\); then \(\mathrm{cl}_{X}(X_{1})\cap\mathrm{cl}_{X}(X_{2})=\emptyset\) and \(c\in Z_{q}(X_{1};F)\). By the proof of Theorem 4, \([c]\) has a barcode \((1,\star)\) in \(\mathcal{P}_{q}(\mathcal{G}_{1})\).
**Definition 12**.: _For convenience, we use \(i_{q}(X_{1};X_{2})\) to denote the number of barcodes \((1,+\infty)\) in \(\mathcal{P}_{q}(\mathcal{G}_{1})\), i.e., the number of \(q\)-holes of \(X\) that are bounded by \(X_{1}\)._
Corollary 1 gives us a way to measure the size of \(q\)-holes (\(q>0\)) for any subspace of \(\mathbb{R}^{n}\) by choosing the local system appropriately. More precisely, we can choose a bounded subspace \(X_{1}\) of \(X\) that contains the hole; the size of \(X_{1}\) is then an approximation of the size of the hole. Also, since the location of \(X_{1}\) is known, it also keeps track of the hole location. We also demonstrate in Section 4 an application of Corollary 1 to detect the "largest" holes in images.
#### 3.3.3 More general systems
In the paper, we focus on a local system consisting of topological spaces \(X,X_{1},X_{2}\) that satisfy \(X_{1}\subseteq X\), \(X_{2}\subseteq X\), and \(\mathrm{cl}_{X}(X_{1})\cap\mathrm{cl}_{X}(X_{2})=\emptyset\). This system induces a sheaf structure as in Theorem 1, and its global section space can be computed by the persistent homology of the short filtration. Actually, one can consider a more general case consisting of \(n+1\) spaces \(X,X_{1},...,X_{n}\) with \(\mathrm{cl}_{X}(X_{i})\cap\mathrm{cl}_{X}(X_{j})=\emptyset\) for \(i\neq j\). We synthesize the above settings into the following definition and theorem.
**Definition 13**.: _Let \(X\) be a topological space and \(X_{1},...,X_{n}\) be subspaces of \(X\) that satisfy \(\mathrm{cl}_{X}(X_{i})\cap\mathrm{cl}_{X}(X_{j})=\emptyset\) for \(i\neq j\). The \((n+1)\)-tuple \((X,X_{1},...,X_{n})\) is called a **local n-system** (or an **admissible (n+1)-tuple**) of topological spaces._
Furthermore, for a local \(n\)-system \((X,X_{1},...,X_{n})\), we can consider the diagram of homologies and induced homomorphisms [28]

\[H_{q}(X_{i};R)\xrightarrow{\ \rho_{i}\ }H_{q}(X;R),\qquad i=1,2,...,n\;.\]

As above, we define its global section space by
\[\Gamma=\left\{(s_{1},s_{2},...,s_{n})\in\prod_{i=1}^{n}H_{q}(X_{i}):\rho_{i}(s _{i})=\rho_{j}(s_{j})\text{ for }i,j\in\{1,2,...,n\}\right\}.\]
**Theorem 5**.: _Let \((X,X_{1},...,X_{n})\) be a local \(n\)-system of topological spaces. Let \(R\) be a commutative ring with identity and \(q\geq 0\) a non-negative integer. Consider the sheaf structure given by the restriction maps \(\rho_{i}:H_{q}(X_{i};R)\to H_{q}(X;R)\), \(i=1,2,...,n\), of \(R\)-modules and module homomorphisms, and the homomorphism_
\[\bigoplus_{i=1}^{n}H_{q}(X_{i};R)\stackrel{{\phi}}{{ \longrightarrow}}\bigoplus_{i=2}^{n}H_{q}(X;R),\ \ (s_{i})_{i=1}^{n}\longmapsto(\rho_{1}(s_{1})-\rho_{i}(s_{i}))_{i=2}^{n}. \tag{9}\]
_Let \(\Gamma\) be the global section space of the sheaf. Then \(\Gamma\) is the kernel of \(\phi\)._
Proof.: An \(n\)-tuple \((s_{i})_{i=1}^{n}\) is a global section if and only if \(\rho_{i}(s_{i})-\rho_{j}(s_{j})=0\) for every \(i,j\in\{1,2,...,n\}\). If this holds, then \(\rho_{1}(s_{1})-\rho_{j}(s_{j})=0\) for every \(j\in\{2,...,n\}\). Conversely, suppose \(\rho_{1}(s_{1})-\rho_{j}(s_{j})=0\) for every \(j\in\{2,...,n\}\); then \(\rho_{i}(s_{i})-\rho_{j}(s_{j})=\rho_{i}(s_{i})-\rho_{1}(s_{1})+\rho_{1}(s_{1})-\rho_{j}(s_{j})=0\) for every \(i,j\in\{1,2,...,n\}\), as desired.
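When the restriction maps are available as matrices over a field, \(\Gamma=\ker(\phi)\) can be computed by stacking the block rows \([\rho_{1}\mid-\rho_{i}]\) and taking a null space. A small sketch over \(\mathbb{R}\) (the function name is ours; over \(\mathbb{Z}_{2}\) one would substitute GF(2) elimination for the SVD):

```python
import numpy as np

def global_sections(rhos, tol=1e-10):
    """Basis of Gamma = ker(phi) for restriction maps rho_i : H_q(X_i) -> H_q(X),
    given as real matrices with a common number of rows; assumes len(rhos) >= 2."""
    dims = [r.shape[1] for r in rhos]
    rows = []
    for i in range(1, len(rhos)):
        # Block row representing (s_1, ..., s_n) -> rho_1(s_1) - rho_i(s_i).
        blocks = [rhos[0] if j == 0 else np.zeros((rhos[0].shape[0], dims[j]))
                  for j in range(len(rhos))]
        blocks[i] = -rhos[i]
        rows.append(np.hstack(blocks))
    phi = np.vstack(rows)
    _, s, vt = np.linalg.svd(phi)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # columns form a basis of the global section space
```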
Although Theorem 5 provides a way to sculpt the global section space, it still has a limitation in computation. When \(n=2\) and \(R=F\) is a field, the homomorphism in (9) and the map \(H_{q}(X_{1}\cup X_{2};F)\to H_{q}(X;F)\) have the same image, and in this case the approximation developed in Theorem 2 and Theorem 3 is available. However, for \(n>2\), the map in (9) and the canonical one \(H_{q}(X_{1}\cup\cdots\cup X_{n};F)\to H_{q}(X;F)\) do not coincide, so the global sections cannot be calculated by counting the barcodes of the short filtration. Computing global sections through persistence barcodes in this general setting is one of our future research directions.
## 4 Demonstration on Digital Images
Hole structures in images can be subtle and complicated. As shown in Figure 6(a), although many white areas appear in the image as porosity structures or closed voids, many of these areas connect to the white background and thus are not actual holes. In order to detect the image's hole positions, we propose Algorithm 1 to approximate the holes' geometric locations using the above theory and a sliding-window technique. Figure 6 is a demonstration of Algorithm 1 on a binary image. We can see that all the holes in the image are detected by the output heatmap in Figure 6(c).
We note that Line 6 in Algorithm 1 considers both \(i_{1}(X_{1};X_{2})\) and \(o_{1}(X_{1};X_{2})\). If \(i_{1}(X_{1};X_{2})\neq 0\), then \(X_{1}\) contains some holes of \(X\) and is already a bounding box of certain holes (Corollary 1). On the other hand, \(o_{1}(X_{1};X_{2})\) records whether (part of) the black pixels in \(X_{1}\) contribute to hole structures, and the number of such structures. Therefore, the sum of \(i_{1}(X_{1};X_{2})\) and \(o_{1}(X_{1};X_{2})\) estimates whether \(X_{1}\) is near a hole structure in \(X\).
```
0: Binary image \(f:P\rightarrow\{0,1\}\) on a rectangle \(P\), \(X=f^{-1}(0)\), an \(n\times n\) square window \(R\), and a sliding step \(k\).
0: A function \(H:P\rightarrow\mathbb{R}\) as a heatmap of \(f\). The heatmap estimates the hole locations in image \(f\). A point in \(P\) with a high heat value is more likely to be part of a hole.
1: Denote \(P=([0,a]\times[0,b])\cap\mathbb{Z}^{2}\) and \(R=([0,n]\times[0,n])\cap\mathbb{Z}^{2}\). Define \(B\) and \(\widehat{R}\) as in Section 3.2.
2: Set \(H:P\rightarrow\mathbb{R}\) as the zero function.
3:for\(i\in\{0,1,...,a\}\) and \(j\in\{0,1,...,b\}\)do
4:if\((i\cdot k,j\cdot k)+R\subseteq P\)then
5: Set \(X_{1}=((i\cdot k,j\cdot k)+\widehat{R})\cap X\) and \(X_{2}=X\setminus((i\cdot k,j\cdot k)+R)\)
6: Compute \(\mathcal{M}=i_{1}(X_{1};X_{2})+o_{1}(X_{1};X_{2})\)
7: Define \(H^{\prime}:P\rightarrow\mathbb{R}\) as follows: \[H^{\prime}(\mathbf{x})=\begin{cases}H(\mathbf{x})+\mathcal{M}&\text{ if }\mathbf{x}\in(i\cdot k,j\cdot k)+\widehat{R},\\ H(\mathbf{x})&\text{ otherwise.}\end{cases}\]
8:\(H\gets H^{\prime}\)
9:else
10:continue
11:endif
12:endfor
13:return\(H\cdot(1-f)\)
```
**Algorithm 1** The hole structure detection algorithm.
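A compact NumPy realization of Algorithm 1, reusing the hypothetical `short_filtration_pairs` and `merging_numbers` helpers sketched in Section 3.2 (the `score` hook is our own generalization of line 6, added so that Algorithm 2 below becomes a one-line variant):

```python
import numpy as np

def hole_heatmap(black, n, k, score=lambda m, o, i: i + o):
    """Slide an n x n window with step k over a boolean image (True = black)
    and accumulate score(m1, o1, i1) on the window interior (lines 3-12)."""
    a, b = black.shape
    H = np.zeros((a, b))
    for r in range(0, a - n + 1, k):
        for c in range(0, b - n + 1, k):
            pairs = short_filtration_pairs(black, r, r + n - 1, c, c + n - 1)
            m1, o1, i1 = merging_numbers(pairs, q=1)
            H[r + 1:r + n - 1, c + 1:c + n - 1] += score(m1, o1, i1)
    return H * black  # line 13: H * (1 - f) restricts the heat to black pixels
```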
Apart from the task of detecting the location of holes in an image, recognizing the size or shape of holes is also an interesting one. As introduced in Section 3.3.2, local windows that contain holes of the image produce \((1,+\infty)\) barcodes in the corresponding short persistent homology. Based on this observation, we can modify Algorithm 1 by considering
Figure 6: A demonstration of Algorithm 1 on a binary image. (a) An illustration of a \(500\times 700\) fingerprint image, where the Betti pair of the image is \((\beta_{0},\beta_{1})=(92,14)\). (b) The marked white regions as the \(14\) holes of the fingerprint image. (c) The output heatmap by Algorithm 1. (d) The non-zero parts of the output heatmap, form an estimation for the hole positions. Here we choose the window \(R\) as a \(30\times 30\) square with step \(k=15\).
the information on the size of local windows and changing the \(\mathcal{M}\) value to tackle this task. Accordingly, we propose Algorithm 2 below as a modification of Algorithm 1. We note here that we implement these two algorithms in Python with the Gudhi package [47].
```
0: Binary image \(f:P\rightarrow\{0,1\}\) on a rectangle \(P\), \(X=f^{-1}(0)\), an \(n\times n\) square window \(R\), and a sliding step \(k\).
0: A function \(H:P\rightarrow\mathbb{R}\) as a heatmap of \(f\). The heatmap estimates the hole locations in image \(f\). A point in \(P\) with a high heat value is more likely to be part of a "large" hole.
1: Denote \(P=([0,a]\times[0,b])\cap\mathbb{Z}^{2}\) and \(R=([0,n]\times[0,n])\cap\mathbb{Z}^{2}\). Define \(B\) and \(\widehat{R}\) as in Section 3.2.
2: Set \(H:P\rightarrow\mathbb{R}\) as the zero function.
3:for\(i\in\{0,1,...,a\}\) and \(j\in\{0,1,...,b\}\)do
4:if\((i\cdot k,j\cdot k)+R\subseteq P\)then
5: Set \(X_{1}=((i\cdot k,j\cdot k)+\widehat{R})\cap X\) and \(X_{2}=X\setminus((i\cdot k,j\cdot k)+R)\)
6: Compute \(\mathcal{M}=\operatorname{vol}(R)\cdot(o_{1}(X_{1};X_{2})-i_{1}(X_{1};X_{2}))\)\(\triangleright\) The only difference to Algorithm 1
7: Define \(H^{\prime}:P\rightarrow\mathbb{R}\) as follows: \[H^{\prime}(\mathbf{x})=\begin{cases}H(\mathbf{x})+\mathcal{M}&\text{ if } \mathbf{x}\in(i\cdot k,j\cdot k)+\widehat{R},\\ H(\mathbf{x})&\text{ otherwise.}\end{cases}\]
8:\(H\gets H^{\prime}\)
9:else
10:continue
11:endif
12:endfor
13:return\(H\cdot(1-f)\)
```
**Algorithm 2** The hole size estimation algorithm.
We note that the only difference between Algorithm 1 and Algorithm 2 is the value of \(\mathcal{M}\) in line 6 of both algorithms. As discussed above, \(i_{1}(X_{1};X_{2})>0\) means that some holes are bounded by the area \(X_{1}\). For this reason, we apply \(-\operatorname{vol}(R)\cdot i_{1}(X_{1};X_{2})\) as a penalty term for the region's heat value. In addition, since the penalty produces negative values, if a certain area has many fine holes, the heat function will take large negative values in this area, which also estimates the location of these holes.
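As noted above, the algorithms are implemented in Python with the Gudhi package [47]. As a small illustration of that toolchain (a sketch added here, not the author's released code), the following computes the Betti pair \((\beta_{0},\beta_{1})\) of the black-pixel set of a binary image, as reported in the captions of Figures 6 and 7, from the sublevel filtration of a cubical complex; exact counts may depend on Gudhi's pixel-connectivity convention.

```python
import numpy as np
import gudhi

def betti_pair(f):
    """(beta_0, beta_1) of X = f^{-1}(0) for a binary image f: black pixels
    (value 0) enter the sublevel filtration first, so the homology classes
    alive at filtration value 0 are those of X."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=f.astype(float))
    betti = [0, 0]
    for dim, (birth, death) in cc.persistence():
        if dim <= 1 and birth <= 0.0 < death:
            betti[dim] += 1
    return tuple(betti)

# Example: a black ring around one white pixel has beta_0 = 1, beta_1 = 1.
img = np.ones((5, 5))
img[1:4, 1:4] = 0
img[2, 2] = 1
print(betti_pair(img))  # expected: (1, 1)
```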
Figure 7 demonstrates Algorithm 2 on a binary image with hole structures of different sizes. With different local windows (\(50\times 50\), \(100\times 100\), and \(150\times 150\) squares), Algorithm 2 gives different attention to these holes. We notice that small holes may get more attention from Algorithm 2 (Figure 7(b)) since there are many holes next to each other, and hence the shared edges get a higher heat value. As we keep enlarging the local window, we observe that smaller holes in the image receive a larger penalty in their heat values. The sum of the three heatmaps summarizes
Figure 7: A demonstration of Algorithm 2 on a binary image. (a) An illustration of a \(200\times 300\) binary image with Betti pair \((\beta_{0},\beta_{1})=(2,28)\). (b), (c), and (d) are the output heatmaps of Algorithm 2 with local windows of sizes \(50\times 50\), \(100\times 100\), and \(150\times 150\), respectively. (e) The sum heatmap of (b), (c), and (d). (f) The non-zero parts of the output heatmap, forming an estimate of the hole positions. Here we choose the step \(k=25\).
the "importance" of black pixels in the image. Finally, we see that Algorithm 2 successfully approximates the position of the "largest hole" via the thresholding method.
However, real pore structure data may be more complex than the images presented in this paper. Methods for using barcode information in pore structure analysis, such as the choice of the penalty function, still require research and development, and this is our main line of future work.
## 5 Conclusion
To summarize, we propose a local system and local persistent homology framework to study the local merging relations in an arbitrary topological space. Using the merging and out-merging numbers of local regions, we propose an algorithm to detect the sizes and positions of pores in an image. Although the demonstration focuses on digital images, the framework can be adapted to any topological space with local systems. We also look forward to applying this framework to point-cloud data, especially in crystalline data analysis.
## Acknowledgement
Most of the work in this article was completed by the author during his doctoral study at National Taiwan Normal University (2016-2022). The author would like to thank Dr. Chun-Chi Lin (NTNU) and Dr. Yu-Min Chung (Eli Lilly and Company), the author's doctoral supervisors, for their comments and suggestions on the work. In particular, Dr. Yu-Min Chung provided many suggestions for studying the geometric meaning of persistent barcodes and the global sections of the local system, making the discussion more fruitful and rigorous. The author would also like to thank Dr. Kelin Xia, the author's postdoctoral supervisor at Nanyang Technological University. The author drew a lot of inspiration from discussions with Dr. Xia, which gave this paper more research directions, such as a more detailed study of the geometry of cellular sheaves and more possible applications.
|
2302.13746 | Topologized standard construction and locally quasinormal subgroups | This paper is the extended version of some results in [13, 14]. Let H be a
subgroup of the fundamental group. The first part of the paper is devoted to
studying weaker conditions under which homotopically Hausdorff relative to H
becomes homotopically path Hausdorff relative to H. By using these
conditions, we explore the connection between the whisker and quotient topologies
on the fundamental group. After that, we address the coincidence of two determined
topologies on the standard construction XeH when H is a locally quasinormal
subgroup. Finally, Example 3.14 illustrates that these kinds of subgroups are
more extensive than normal subgroups and justifies the generalizations of these
results. | Zeynal Pashaei, Necat Gorentas, Roghayeh Abdi | 2023-02-27T13:20:49Z | http://arxiv.org/abs/2302.13746v1 | # Topologized standard construction and locally quasinormal subgroups
###### Abstract
This paper is the extended version of some results in [13; 14]. Let \(H\leq\pi_{1}(X,x_{0})\). The first part of the paper is devoted to studying weaker conditions under which homotopically Hausdorff relative to H becomes homotopically path Hausdorff relative to H. By using these conditions, we explore the connection between the whisker and quotient topologies on the fundamental group. After that, we address the coincidence of two determined topologies on the standard construction \(\widetilde{X}_{H}\) when H is a locally quasinormal subgroup. Finally, Example 3.14 illustrates that these kinds of subgroups are more extensive than normal subgroups and justifies the generalization of these results.
keywords: Homotopically Hausdorff, Homotopically path Hausdorff, Strong small loop transfer spaces, Quasitopological fundamental group, Whisker topology, Lasso topology, Covering map. Msc: 57M10, 57M12, 57M05, 55Q05. +
Footnote †: journal: Elsevier
## 1 Introduction
In classical covering theory, semilocal simple connectivity is a crucial condition. Indeed, when \(X\) is a Peano, semilocally simply connected space, connected covering spaces of \(X\) correspond to the conjugacy classes of all subgroups of \(\pi_{1}(X,x)\). Accordingly, it is possible that there are many subgroups of \(\pi_{1}(X,x)\) which do not correspond to a covering map but which do correspond to other generalizations of covering maps [4; 5]. There is a natural topology on the fundamental group, \(\pi_{1}^{qtop}(X,x)\),
which plays an important role in the existence of covering spaces: \(X\) admits a universal covering space if and only if \(\pi_{1}^{qtop}(X,x)\) is discrete. Consequently, every subgroup of \(\pi_{1}(X,x)\) is a covering subgroup if \(\pi_{1}^{qtop}(X,x)\) is discrete.
If \(X\) is a non-semilocally simply connected space, such as the 1-dimensional Menger universal curve, the Hawaiian Earring, or other spaces with complicated local structure, we do not have a simply connected covering. Accordingly, one is led to generalize the concept of universal covering space to include such spaces. A common approach is to retain only those properties of covering spaces which are essential. One of these generalizations is semicoverings. Semicoverings are connected with topological group structures on fundamental groups [3, 4]. The next approach, named the generalized covering map, is expressed only in terms of unique lifting properties, and such a map need not be a semicovering map [11]. In addition, the topological properties of covering, semicovering, and generalized covering subgroups of \(\pi_{1}^{qtop}(X,x)\) have been studied in [4, 6, 16]. The existence of a universal connected covering space of \(X\) makes these three concepts coincide. In [8], Brodskiy et al. studied the whisker topology on the fundamental group, \(\pi_{1}^{wh}(X,x)\), for the first time, and they showed that \(\pi_{1}^{wh}(X,x)\) does not depend on the choice of the point \(x\in X\) in case \(X\) is a small loop transfer (SLT for short) space. A few results of the paper [8] concern strong SLT spaces, which are a stronger version of SLT spaces. The authors in [13, 14] introduced spaces that are more extensive than SLT and strong SLT spaces. One advantage of these new approaches lies in their vastness and their generalizations. Example 2.16 of [14] shows that (strong) SLT spaces are wider than semilocally simply connected and small loop spaces. Moreover, (strong) SLT spaces and their generalizations have a number of other advantages over the semilocally simply connected and small loop approaches. Let us recall some of the basic results which have been recently obtained by researchers.
* A Peano topological space \(X\) is SLT iff \(\widetilde{X}_{e}^{wh}=\widetilde{X}_{e}^{top}\). The extended version of this is Theorem 3.2 of [13]; moreover, one can verify that \(\pi_{1}^{qtop}(X,x_{0})\) is a topological group whenever \(X\) is SLT at \(x_{0}\).
* A path connected space \(X\) is strong SLT iff \(\widetilde{X}_{e}^{wh}=\widetilde{X}_{e}^{l}\). The extended version of this is Theorem 4.2 of [14].
* An equivalent condition for the discreteness of \(\pi_{1}^{wh}(X,x_{0})\) is that \(X\) be semilocally simply connected at \(x_{0}\).
* Letting \(X\) be an SLT space, the concepts of h.H and h.p.H are equivalent; the relative version also holds.
* Each generalized covering subgroup of \(\pi_{1}(X,x_{0})\), say H, is a semicovering subgroup when \(X\) is an H-SLT space at \(x_{0}\).
* Each semicovering subgroup of \(\pi_{1}(X,x_{0})\), say H, is a covering subgroup when \(X\) is a strong H-SLT space at \(x_{0}\).
Note that the property of locally quasinormal subgroups, defined in [9] (see Definition 3.1), led us to improve some results of the articles [13; 14]. At the beginning of Section 3, our purpose is to unify the two important concepts mentioned above. By using this coincidence, we obtain information about the connection between the whisker and quotient topologies on the fundamental group. Moreover, we use new conditions to expand Theorem 4.2 of [14]. Finally, Example 3.14 illustrates that locally quasinormal subgroups are more extensive than normal subgroups, which are one of the requirements of some theorems of the articles [13; 14].
## 2 Definitions and terminologies
Throughout the paper, \((X,x_{0})\) will denote a pointed path-connected space and H will denote a subgroup of the fundamental group \(\pi_{1}(X,x_{0})\). Moreover, we call \(X\) Peano when it is connected and locally path connected. For a given path \(\alpha:[0,1]\to X\), \(\bar{\alpha}(t)=\alpha(1-t)\) is the reverse path. Let \(\gamma\) and \(\delta\) be paths in \(X\). The concatenation of \(\gamma\) and \(\delta\) is denoted by \(\gamma*\delta\), where \(\gamma(1)=\delta(0)\). Moreover, we denote the constant path sending the unit interval \([0,1]\) to \(x\) by \(c_{x}\). For a given \(x\in X\), \(P(X,x)\) denotes the subspace of paths whose starting point is \(x\) and \(\Omega(X,x)\) denotes the subspace of paths whose initial and final points are equal to \(x\). Letting \(\alpha\in P(X,x_{0})\), we denote the path-conjugate subgroup \([\bar{\alpha}H\alpha]=\{[\bar{\alpha}*\delta*\alpha]\mid[\delta]\in H\}\) of \(\pi_{1}(X,x)\) by \(H_{\alpha}\). The map \(f_{\#}:\pi_{1}(X,x_{0})\rightarrow\pi_{1}(Y,y_{0})\) denotes the homomorphism induced by a continuous function \(f:(X,x_{0})\rightarrow(Y,y_{0})\). The subgroup \(\pi(\mathcal{U},x_{0})\) of \(\pi_{1}(X,x_{0})\), named the Spanier subgroup, is generated by elements of the form \(\alpha*\beta*\bar{\alpha}\), where \(\beta\in\Omega(X,\alpha(1))\) and \(\mathrm{Im}(\beta)\) is contained in some element of \(\mathcal{U}\). In the following theorem, Spanier showed that Spanier subgroups help us determine when a map is a covering. Note that H is called a covering subgroup if \(X\) has a covering map \(p:\widetilde{X}\to X\) with \(p(\tilde{x}_{0})=x_{0}\) such that \(p_{\#}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H\).
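For instance (an illustration not in the paper), if \(X=S^{1}\) and \(\mathcal{U}\) is a cover of \(S^{1}\) by open arcs, then every loop \(\beta\) inside an arc is nullhomotopic, so every generator \(\alpha*\beta*\bar{\alpha}\) is trivial and \(\pi(\mathcal{U},x_{0})=1\); by Theorem 2.1 below, this reflects the fact that the trivial subgroup is a covering subgroup, realized by the universal cover \(\mathbb{R}\to S^{1}\).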
**Theorem 2.1**.: _[_15_, Theorem 2.5.13]_ _Given a Peano space \(X\), \(H\leq\pi_{1}(X,x_{0})\) is a covering subgroup iff there exists an open cover \(\mathcal{U}\) of \(X\) such that \(\pi(\mathcal{U},x_{0})\leq H\)._
The standard construction was introduced by Spanier in [15] in order to classify the covering spaces of a Peano space \(X\) having at least one universal covering space. Take \(\alpha,\beta\in P(X,x_{0})\). We say that \(\alpha\) and \(\beta\) have the same equivalence
class, denoted by \(\alpha\sim\beta\), if and only if \(\beta(1)=\alpha(1)\) and \([\alpha]\in H[\beta]\). Define \(\widetilde{X}_{H}=P(X,x_{0})/\sim\). Let \([\alpha]_{H}\) denote the equivalence class of \(\alpha\). Put \(\tilde{x}_{H}=[c_{x_{0}}]_{H}\). We write \(\widetilde{X}\) instead of \(\widetilde{X}_{H}\) whenever H is trivial; it is called the standard construction. Let us recall that three types of topology have been studied on \(\widetilde{X}_{H}\) so far. The quotient map \(q:P(X,x_{0})\rightarrow\widetilde{X}_{H}\) induces the quotient topology on \(\widetilde{X}_{H}\), denoted by \(\widetilde{X}_{H}^{top}\), in which \(P(X,x_{0})\) is equipped with the compact-open topology. In the attempt to construct covering spaces, the topology Spanier defined on \(\widetilde{X}_{H}\) is as follows; it is named the whisker topology by some authors [7, 19] and denoted by \(\widetilde{X}_{H}^{wh}\).
**Definition 2.2**.: _The whisker topology on the standard construction has the basis \(N_{H}([\alpha]_{H},U)=\{[\gamma]_{H}\ |\ \gamma\simeq\alpha*\delta\ for\ some\ path\ \delta\ inside\ an\ open\ subset\ U\ containing\ \alpha(1)=\delta(0)\}\)_
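As a quick orientation (an example not in the paper): when \(X\) is semilocally simply connected, e.g. \(X=S^{1}\), and H is trivial, the basic neighborhoods \(N_{H}([\alpha]_{H},U)\) recover the classical topology on the universal covering space, so \(\widetilde{X}^{wh}\) is the usual universal cover \(\mathbb{R}\) of \(S^{1}\).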
**Definition 2.3**.: _The lasso topology on the standard construction has the basis_
\[\begin{array}{l}N_{H}([\alpha]_{H},\mathcal{U},U)=\{[\beta]_{H}\ |\ \beta\simeq \alpha*\gamma*\delta\ for\ some\ [\gamma]\in\\ \pi(\mathcal{U},\alpha(1))\ and\ for\ some\ \delta\ inside\ U\ containing\ \alpha(1)=\delta(0)\}\end{array}\]
_where \(U\in\mathcal{U}\). It is denoted as \(\widetilde{X}_{H}^{l}\)._
It can be easily observed that \(\pi_{1}(X,x_{0})\) is a subset of \(\widetilde{X}\). The induced topologies on \(\pi_{1}(X,x_{0})\) by \(\widetilde{X}^{wh}\), \(\widetilde{X}^{top}\), and \(\widetilde{X}^{l}\) are denoted as \(\pi_{1}^{wh}(X,x_{0})\), \(\pi_{1}^{qtop}(X,x_{0})\), and \(\pi_{1}^{l}(X,x_{0})\), respectively (see [3, 6, 8, 13, 14, 19] for more details).
Semicovering maps are defined based on the local homeomorphism property (see [2, 4]). As in the definition of a covering subgroup, H is a semicovering subgroup if it can be expressed in terms of a semicovering map \(p:\widetilde{X}\to X\). It was shown that semicoverings correspond to open subgroups of the quasitopological fundamental group \(\pi_{1}^{qtop}(X)\)[2]. The authors in [16] tried to recognize which subgroups are open. This attempt led them to define special subgroups rather similar to Spanier subgroups. At first, they introduced the path open cover \(\mathcal{V}=\{V_{\alpha}\ |\ \alpha\in P(X,x_{0})\ and\ \alpha(1)\in V_{\alpha}\}\), which is an open cover of \(X\). The subgroup \(\widetilde{\pi}(\mathcal{V},x_{0})\leq\pi_{1}(X,x_{0})\), called the path Spanier subgroup, is generated by elements of the form \(\alpha*\beta*\bar{\alpha}\), where \(\beta\) is a loop at \(\alpha(1)\) whose image is contained in \(V_{\alpha}\in\mathcal{V}\).
**Theorem 2.4**.: _[_16_, Corollary 3.3]_ _For a given Peano space \(X\), H is a semicovering subgroup if and only if it contains a path Spanier subgroup._
Unlike universal covering maps, generalized universal covering maps, initially defined by Fischer and Zastrow in [10], play an important role in finding the fundamental groups of spaces with complicated local structure, e.g. the Hawaiian Earring (see [10, 12] for more details). Later, this definition was extended to general subgroups of \(\pi_{1}(X,x_{0})\) by Brazas in [5] as follows.
**Definition 2.5**.: _A map \(p:(\widetilde{X},\tilde{x})\rightarrow(X,x)\) is called a generalized covering map if it has the following properties:_
1. \(\widetilde{X}\) _is a Peano space,_
2. _for every map_ \(g:(Z,z)\rightarrow(X,x)\)_, there exists a unique map_ \(\tilde{g}:(Z,z)\rightarrow(\widetilde{X},\tilde{x})\) _such that_ \(p\circ\tilde{g}=g\) _provided that_ \(g_{\#}\pi_{1}(Z,z)\subseteq p_{\#}\pi_{1}(\widetilde{X},\tilde{x})\)__
In this extension, only the unique lifting property of covering maps has been used. Note that these two notions are not necessarily equal (see [10, Proposition 3.6], [15, Corollary 2.5.14]). In [5, Lemma 5.10], it is verified that each generalized covering map \(p:(\widetilde{X},\tilde{x}_{0})\rightarrow(X,x_{0})\) associated to a subgroup H, meaning that \(p_{\#}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H\), is equivalent to a specific map, named the endpoint projection map, \(p_{H}:(\widetilde{X}_{H}^{wh},\tilde{x}_{H})\rightarrow(X,x_{0})\) with \(p_{H}([\alpha]_{H})=\alpha(1)\). In other words, the topology of any generalized covering space coincides with the topology of the standard construction \(\widetilde{X}_{H}^{wh}\).
**Definition 2.6**.: _[_10_, p. 190]_ _Let \(\alpha\in P(X,x_{0})\) with \(\alpha(1)=x\) and \(g\notin H\). If there is an open subset \(U\subseteq X\) containing \(x\) such that \(i_{\#}\pi_{1}(U,x)_{\bar{\alpha}}\cap Hg=\emptyset\), we say that \(X\) is homotopically Hausdorff (h.H for short) relative to H._
**Definition 2.7**.: _[_6_]_ _Let \(\alpha,\beta\in P(X,x_{0})\) with \(\alpha(1)=\beta(1)=x\) and \([\alpha]\notin H[\beta]\). Suppose that there are a partition \(\{[t_{i-1},t_{i}]\ |\ 1\leq i\leq n,\ t_{0}=0,\ t_{n}=1\}\) of the unit interval \(I\) and open subsets \(U_{1},U_{2},...,U_{n}\) with \(\alpha([t_{i-1},t_{i}])\subseteq U_{i}\) for \(i=1,2,...,n\) such that for every path \(\lambda\in P(X,x_{0})\) with \(\lambda(1)=x\), \(\lambda([t_{i-1},t_{i}])\subseteq U_{i}\), and \(\lambda(t_{i})=\alpha(t_{i})\) for \(i=0,1,...,n\), we have \([\lambda]\notin H[\beta]\). Then we call \(X\) homotopically path Hausdorff (h.p.H for short) relative to H._
Note that one consequence of H being closed in \(\pi_{1}^{qtop}(X,x_{0})\) is that \(p_{H}\) has the unique path lifting property.
**Theorem 2.8**.: _[_6_]_ _Let \(X\) be a Peano space. If H is closed in \(\pi_{1}^{qtop}(X,x_{0})\), then \(p_{H}\) has the unique path lifting property._
**Theorem 2.9**.: _If \(H\) is closed in \(\pi_{1}^{qtop}(X,x_{0})\), then \(X\) is h.p.H relative to H. If \(X\) is Peano and h.p.H relative to H, then H is closed in \(\pi_{1}^{qtop}(X,x_{0})\)._
The proposition below gives necessary and sufficient conditions for \(p_{H}\) to be a generalized covering map.
**Proposition 2.10**.: _[_5_]_ _Let \(H\leq\pi_{1}(X,x_{0})\). Then_
* _If_ \(p_{H}\) _is a generalized covering map, then_ \(X\) _is h.H relative to H._
* \(p_{H}\) _is a generalized covering map if_ \(X\) _is h.p.H relative to H._
In [8], Brodskiy et al. initially introduced the concept of (strong) small loop transfer spaces. Their extended versions are defined based on an arbitrary subgroup of \(\pi_{1}(X,x_{0})\).
**Definition 2.11**.: _[_13_, Definition 2.11]_ _Let \(\alpha\in P(X,x_{0})\) with \(\alpha(1)=x\). If for each open subset \(U\) at \(x_{0}\) there exists an open subset \(V\) at \(x\) such that \(i_{\#}\pi_{1}(V,\alpha(1))_{\bar{\alpha}}\subseteq{Hi_{\#}\pi_{1}(U,x_{0})}\), we call \(X\) an H-small loop transfer (H-SLT for short) space at \(x_{0}\). Moreover, we say that \(X\) is an H-SLT space if for each \(\delta\in P(X,x_{0})\) with \(\delta(1)=x\), \(X\) is \(H_{\delta}\)-SLT at \(x\). We write SLT instead of H-SLT whenever H is trivial._
**Definition 2.12**.: _[_14_, Definition 1.3]_ _Suppose that for each \(x\in X\) and for every open subset \(U\) at \(x_{0}\) there exists an open subset \(V\) at \(x\) such that for every \(\alpha\in P(X,x_{0})\) with \(\alpha(1)=x\) we have \(i_{\#}\pi_{1}(V,\alpha(1))_{\bar{\alpha}}\subseteq{Hi_{\#}\pi_{1}(U,x_{0})}\). Then we call \(X\) a strong H-SLT space at \(x_{0}\). Moreover, we say that \(X\) is a strong H-SLT space if for each \(\delta\in P(X,x_{0})\) with \(\delta(1)=x\), \(X\) is strong \(H_{\delta}\)-SLT at \(x\). As in the above definition, we write strong SLT instead of strong H-SLT whenever H is trivial._
## 3 Main results
Definitions 2.6 and 2.7 and their relations have been investigated by several authors in [5; 9; 10; 13]. One of the significant features of these kinds of spaces can be seen in Proposition 2.10. Though every space that is h.p.H relative to \(H\) is h.H relative to \(H\), the converse need not be true (see [12; 18]). It is therefore important to determine when these two notions coincide. Recall that the property of being semilocally simply connected causes the coincidence of these concepts. Moreover, the authors in [13, Theorem 2.5] showed that this statement holds for small loop transfer spaces relative to H provided that H is normal. One of the main purposes of this article is to expand this theorem. In fact, we consider another kind of subgroup instead of normal subgroups; it is called a locally quasinormal subgroup.
**Definition 3.1**.: _[_9_]_ _Let \(\alpha\in P(X,x_{0})\) with \(\alpha(1)=x\). If for each open subset \(U\) containing \(x\) there exists an open subset \(V\subseteq U\) of \(x\) such that \(H\pi(\alpha,V)=\pi(\alpha,V)H\), we say that \(H\) is locally quasinormal._
**Lemma 3.2**.: _Let \(\alpha\in P(X,x_{0})\). If H is a locally quasinormal subgroup, then so is \(H_{\alpha}\)._
Proof.: Let \(\delta\) be an arbitrary path from \(x\) to \(y\). Because H is locally quasinormal, we have \(H\pi(\alpha*\delta,V)=\pi(\alpha*\delta,V)H\). Now, we show that \(H_{\alpha}\pi(\delta,V)=\pi(\delta,V)H_{\alpha}\).
At first, take an element \([\bar{\alpha}*h*\alpha*\delta*\beta*\bar{\delta}]\in H_{\alpha}\pi(\delta,V)\), where \([h]\in H\) and \([\beta]\in\pi_{1}(V,y)\). Note that \([\bar{\alpha}*h*\alpha*\delta*\beta*\bar{\delta}]=[\bar{\alpha}*h*\alpha*\delta*\beta*\bar{\delta}*\bar{\alpha}*\alpha]\). Clearly, \([h*\alpha*\delta*\beta*\bar{\delta}*\bar{\alpha}]\in H\pi(\alpha*\delta,V)\). According to the above relation, there exist \([h^{\prime}]\in H\) and \([\beta^{\prime}]\in\pi_{1}(V,y)\) such that \([h*\alpha*\delta*\beta*\bar{\delta}*\bar{\alpha}]=[\alpha*\delta*\beta^{\prime}*\bar{\delta}*\bar{\alpha}*h^{\prime}]\in\pi(\alpha*\delta,V)H\). Therefore, \([\bar{\alpha}*h*\alpha*\delta*\beta*\bar{\delta}]=[\bar{\alpha}*h*\alpha*\delta*\beta*\bar{\delta}*\bar{\alpha}*\alpha]=[\bar{\alpha}*\alpha*\delta*\beta^{\prime}*\bar{\delta}*\bar{\alpha}*h^{\prime}*\alpha]=[\delta*\beta^{\prime}*\bar{\delta}*\bar{\alpha}*h^{\prime}*\alpha]\in\pi(\delta,V)H_{\alpha}\), which implies that \(H_{\alpha}\pi(\delta,V)\subseteq\pi(\delta,V)H_{\alpha}\). In a similar way, one can show \(\pi(\delta,V)H_{\alpha}\subseteq H_{\alpha}\pi(\delta,V)\).
**Remark 3.3**.: _Assume that H is a locally quasinormal subgroup. By the proof of Lemma 3.2, if we put the constant path \(c_{x}\) instead of \(\delta\), then \(H_{\alpha}\pi(c_{x},V)=\pi(c_{x},V)H_{\alpha}\). Note that \(\pi(c_{x},V)=i_{\#}\pi_{1}(V,x)\), where \(i:V\to X\) is the inclusion map._
**Theorem 3.4**.: _Let \(X\) be an \(H\)-SLT space at \(x_{0}\) and \(H\) be locally quasinormal. If \(X\) is h.H relative to \(H\), then \(X\) is h.p.H relative to \(H\)._
Proof.: Suppose that \([\beta*\bar{\alpha}]\notin H\), where \(\alpha,\beta\in P(X,x_{0})\) and \(\alpha(1)=\beta(1)=x\). Since \(X\) is h.H relative to H, we have an open subset \(U\subseteq X\) of \(x_{0}\) such that \(i_{\#}\pi_{1}(U,x_{0})\cap H[\beta*\bar{\alpha}]=\emptyset\). On the other hand, because \(H\) is a locally quasinormal subgroup, there is an open subset \(V\subseteq U\) of \(x_{0}\) such that \(Hi_{\#}\pi_{1}(V,x_{0})=i_{\#}\pi_{1}(V,x_{0})H\) and \(i_{\#}\pi_{1}(V,x_{0})\cap H[\beta*\bar{\alpha}]=\emptyset\). For every \(t\in[0,1]\), consider the path \(\alpha_{t}=\alpha\mid_{[0,t]}\) from \(x_{0}\) to \(\alpha_{t}(1)=\alpha(t)=x_{t}\), and put \(\alpha_{t_{0}}=c_{x_{0}}\). By assumption, we have an open subset \(V_{t}\subseteq X\) at \(x_{t}\) such that for any closed path \(\beta_{t}\) at \(x_{t}\) in \(V_{t}\) there is a closed path \(\delta_{t}\) at \(x_{0}\) in \(V\) such that \([\alpha_{t}*\beta_{t}*\bar{\alpha}_{t}]_{H}=[\delta_{t}]_{H}\) or, equivalently, \([\alpha_{t}*\beta_{t}*\bar{\alpha}_{t}]\in H[\delta_{t}]\). By the compactness of the closed interval \(I=[0,1]\) and the continuity of \(\alpha\), we have a partition \(\{[t_{i-1},t_{i}]\ |\ i=1,2,...,n,\ t_{0}=0,\ t_{n}=1\}\) of \([0,1]\) and open subsets \(U_{1},...,U_{n}\) such that \(\alpha[t_{i-1},t_{i}]\subseteq U_{i}\). Put \(\alpha_{i}:=\alpha|_{[t_{i-1},t_{i}]}\) for \(i=1,2,...,n\). Choose another path \(\gamma\) from \(x_{0}\) to \(x\) such that the image of \(\gamma_{i}:=\gamma|_{[t_{i-1},t_{i}]}\) is contained in \(U_{i}\) for \(i=1,2,...,n\) and \(\gamma(t_{i})=\alpha(t_{i})\) for \(i=0,1,...,n\). Put \(\theta_{i}=\alpha_{t_{i-1}}*\gamma_{i}*\bar{\alpha}_{i}*\bar{\alpha}_{t_{i-1}}\) for \(1\leq i\leq n\); it is not difficult to see that each \(\gamma_{i}*\bar{\alpha}_{i}\) is a loop at \(\alpha(t_{i-1})\) in \(U_{i}\). As stated, for \(\theta_{1},...,\theta_{n}\) we have, respectively, \([\delta_{1}],...,[\delta_{n}]\) belonging to \(i_{\#}\pi_{1}(V,x_{0})\) such that \([\theta_{i}]\in H[\delta_{i}]\) for \(i=1,2,...,n\). Since \(Hi_{\#}\pi_{1}(V,x_{0})=i_{\#}\pi_{1}(V,x_{0})H\), we have \(H[\delta_{1}]H[\delta_{2}]\subseteq Hi_{\#}\pi_{1}(V,x_{0})[\delta_{2}]=Hi_{\#}\pi_{1}(V,x_{0})\) and, in a similar way, \(H[\delta_{1}]H[\delta_{2}]H[\delta_{3}]\subseteq Hi_{\#}\pi_{1}(V,x_{0})H[\delta_{3}]=Hi_{\#}\pi_{1}(V,x_{0})[\delta_{3}]=Hi_{\#}\pi_{1}(V,x_{0})\). Continuing this process, \(H[\delta_{1}]H[\delta_{2}]...H[\delta_{n}]\subseteq Hi_{\#}\pi_{1}(V,x_{0})\). Note that \(\gamma*\bar{\alpha}=\theta_{1}*...*\theta_{n}\). Therefore, \([\gamma*\bar{\alpha}]=[\theta_{1}*...*\theta_{n}]\in H[\delta_{1}]H[\delta_{2}]...H[\delta_{n}]\subseteq Hi_{\#}\pi_{1}(V,x_{0})\). In other words, there exists a closed path \(\delta\) at \(x_{0}\) in \(V\) such that \([\gamma*\bar{\alpha}]\in H[\delta]\), i.e., \([\gamma*\bar{\alpha}*\bar{\delta}]\in H\). We have \([\gamma*\bar{\beta}]=[\gamma*\bar{\alpha}*\bar{\delta}][\delta*\alpha*\bar{\beta}]\). Since \(i_{\#}\pi_{1}(V,x_{0})\cap H[\beta*\bar{\alpha}]=\emptyset\), we get \([\delta*\alpha*\bar{\beta}]\notin H\). Therefore, \([\gamma*\bar{\beta}]\notin H\) because \([\gamma*\bar{\alpha}*\bar{\delta}]\in H\). This means that \(X\) is h.p.H relative to \(H\).
**Corollary 3.5**.: _Assume that \(H\) is locally quasinormal and \(X\) is H-SLT. If \(X\) is h.H relative to \(H_{\alpha}\), then \(X\) is h.p.H relative to \(H_{\alpha}\) for every \(\alpha\in P(X,x_{0})\)._
Proof.: By Lemma 3.2, \(H_{\alpha}\) is locally quasinormal. Moreover, by assumption, \(X\) is \(H_{\alpha}\)-SLT at \(x\). Accordingly, Theorem 3.4 yields that \(X\) is h.p.H relative to \(H_{\alpha}\).
**Remark 3.6**.: _In view of Corollary 3.5, the requirements "H-SLT" and "local quasinormality of H" ensure that for every \(\alpha\in P(X,x_{0})\) the concepts of h.p.H relative to \(H_{\alpha}\) and h.H relative to \(H_{\alpha}\) coincide. In the case H=1, we also have the coincidence of h.H and h.p.H when \(X\) is an SLT space._
**Theorem 3.7**.: _Let \(X\) be an H-SLT space. Then \(X\) is h.H relative to \(H_{\alpha}\) iff \(H_{\alpha}\) is closed in \(\pi_{1}^{wh}(X,\alpha(1))\) for every \(\alpha\in P(X,x_{0})\)._
Proof.: "Only If": Take \(\alpha\in P(X,x_{0})\) with \(\alpha(1)=x\). It is shown in [1, Proposition 3.9] that the property of being h.H relative to \(H_{\alpha}\) implies that \(H_{\alpha}\) is closed in \(\pi_{1}^{wh}(X,x)\).
"If": By assumption, \(X\) is \(H_{\alpha}\)-SLT at \(x\). From Theorem 2.6 of [13], if \(H_{\alpha}\) is closed in \(\pi_{1}^{wh}(X,x)\), then \(X\) is h.H relative to \(H_{\alpha}\).
The normality of H is used in some results of [13], e.g. Corollaries 2.8 and 2.9. These results can be improved as follows.
**Corollary 3.8**.: _Suppose \(X\) is a locally path connected H-SLT space and H is locally quasinormal. Then \(H_{\alpha}\) is closed in \(\pi_{1}^{qtop}(X,x)\) iff \(H_{\alpha}\) is closed in \(\pi_{1}^{wh}(X,x)\)._
Proof.: Only the sufficiency requires proof. From Theorem 3.7, \(X\) is h.H relative to \(H_{\alpha}\) and Corollary 3.5 implies that \(X\) is h.p.H relative to \(H_{\alpha}\). Therefore, from Theorem 2.9, \(H_{\alpha}\) is closed in \(\pi_{1}^{qtop}(X,x)\).
The usefulness of the endpoint projection maps can be seen in Lemma 5.10 of [5]. Recall also that the unique lifting property of \(p_{H}\) results from its unique path lifting property [5, Lemma 5.9]. Indeed, a map \(p:\widetilde{X}\to X\) with \(p(\tilde{x}_{0})=x_{0}\) and \(H=p_{\#}\pi_{1}(\widetilde{X},\tilde{x}_{0})\) is a generalized covering map if \(p_{H}\) has the unique path lifting property. In [6], it was discovered that certain subgroups of the fundamental group make the endpoint projection map have the unique path lifting property, e.g. closed subgroups of \(\pi_{1}^{qtop}(X,x_{0})\). The following corollary demonstrates that Corollary 2.9 of [13] holds for locally quasinormal subgroups.
**Corollary 3.9**.: _Suppose that \(\alpha\in P(X,x_{0})\), \(X\) is a locally path connected H-SLT space, and H is a locally quasinormal subgroup. Then \(H_{\alpha}\) is closed in \(\pi_{1}^{qtop}(X,\alpha(1))\) iff \(p_{H_{\alpha}}\) has the unique path lifting property._
Proof.: "Only if": It is immediate from Theorem 2.8.
"If": By Proposition 2.10, \(X\) is h.H relative to \(H_{\alpha}\). Since \(X\) is H-SLT, Theorem 3.7 concludes that \(X\) is h.p.H relative to \(H_{\alpha}\). Therefore, Theorem 2.9 implies that \(H_{\alpha}\) is closed in \(\pi_{1}^{qtop}(X,x)\).
The connection between the whisker and lasso topologies on homotopy classes of paths, \(\widetilde{X}\), was first studied by Virk and Zastrow [19]. Later, in a similar fashion, the authors in [14] not only clarified the connection between \(\widetilde{X}_{H}^{wh}\) and \(\widetilde{X}_{H}^{l}\) but also characterized conditions under which they become coincident [14, Theorem 4.2]; they are not necessarily identical [19]. One of these conditions is the normality of H. We show that this coincidence holds for locally quasinormal subgroups.
**Theorem 3.10**.: _Let H be locally quasinormal. Then \(X\) is a strong H-SLT space iff \(\widetilde{X}_{H_{\alpha}}^{l}=\widetilde{X}_{H_{\alpha}}^{wh}\) for each path \(\alpha\in P(X,x_{0})\)._
Proof.: "Only if": From the definitions of the whisker and lasso topologies, it is evident that \(\widetilde{X}_{N}^{l}\) is coarser than \(\widetilde{X}_{N}^{wh}\) for each subgroup \(N\) of the fundamental group. At first, it will be shown that \(\widetilde{X}_{H}^{wh}\) is coarser than \(\widetilde{X}_{H}^{l}\). To do this, we take an open subset \(([\alpha]_{H},U)\) of \(\widetilde{X}_{H}^{wh}\). The hypothesis of local quasinormality of H assures the existence of an open subset \(V\subseteq U\) of \(\alpha(1)=x\) such that \(H\pi(\alpha,V)=\pi(\alpha,V)H\). Clearly, we have \(([\alpha]_{H},V)\subseteq([\alpha]_{H},U)\). So, since \(X\) is strong H-SLT, for every point \(y\in X\) there is an open subset \(W\) at \(y\) such that for every path \(\delta\) from \(x\) to \(y\) and for every closed path \(\beta:I\to W\) at \(y\) there is a closed path \(\lambda:I\to V\) at \(x\) such that \([\delta*\beta*\bar{\delta}]_{H_{\alpha}}=[\lambda]_{H_{\alpha}}\). Assume that \(\mathcal{W}\) is an open cover of \(X\) consisting of such \(W\)'s. Define the open basis neighborhood \(([\alpha]_{H},\mathcal{W},V)\) in \(\widetilde{X}_{H}^{l}\). Consider \([\alpha*l*\lambda]_{H}\in([\alpha]_{H},\mathcal{W},V)\), where \([l]\in\pi(\mathcal{W},\alpha(1))\) and \(\lambda:I\to V\) is a path with \(\lambda(0)=\alpha(1)=x\). We know that \(l=\Pi_{i=1}^{n}\alpha_{i}*\beta_{i}*\bar{\alpha}_{i}\), where the \(\alpha_{i}\)'s are paths with \(\alpha_{i}(0)=x\) and the \(\beta_{i}\)'s are loops at \(\alpha_{i}(1)\) in some \(W\in\mathcal{W}\). Put \(\theta_{i}=\alpha_{i}*\beta_{i}*\bar{\alpha}_{i}\) for \(i=1,2,...,n\). Since \(X\) is strong H-SLT, \([\theta_{i}]\in H_{\alpha}i_{\#}\pi_{1}(V,x)\) for \(i=1,2,...,n\). So, we have \([l]=[\theta_{1}*\theta_{2}*...*\theta_{n}]\in(H_{\alpha}i_{\#}\pi_{1}(V,x))(H_{\alpha}i_{\#}\pi_{1}(V,x))...(H_{\alpha}i_{\#}\pi_{1}(V,x))\). By Remark 3.3, \((H_{\alpha}i_{\#}\pi_{1}(V,x))(H_{\alpha}i_{\#}\pi_{1}(V,x))...(H_{\alpha}i_{\#}\pi_{1}(V,x))=H_{\alpha}i_{\#}\pi_{1}(V,x)\), and therefore \([l]\in H_{\alpha}i_{\#}\pi_{1}(V,x)\). In other words, there exists a loop \(\theta\) in \(V\) at \(x\) so that \([l]\in H_{\alpha}[\theta]\), i.e., \([l*\bar{\theta}]\in H_{\alpha}\). Write \([l*\bar{\theta}]=[l*\lambda*\bar{\lambda}*\bar{\theta}]\). So we have \([l*\lambda*\bar{\lambda}*\bar{\theta}]\in[\bar{\alpha}H\alpha]\) or, equivalently, \([\alpha*l*\lambda*\bar{\lambda}*\bar{\theta}*\bar{\alpha}]\in H\). This means that \([\alpha*l*\lambda]_{H}=[\alpha*\theta*\lambda]_{H}\in([\alpha]_{H},V)\subseteq([\alpha]_{H},U)\). Therefore, \(([\alpha]_{H},\mathcal{W},V)\subseteq([\alpha]_{H},U)\), which shows that \(\widetilde{X}_{H}^{l}\) is finer than \(\widetilde{X}_{H}^{wh}\). So, \(\widetilde{X}_{H}^{l}=\widetilde{X}_{H}^{wh}\). Note that one can easily derive that for every path \(\alpha\in P(X,x_{0})\) the space \(X\) is a strong \(H_{\alpha}\)-SLT space. Accordingly, Lemma 3.2 and the above statements imply that \(\widetilde{X}_{H_{\alpha}}^{wh}=\widetilde{X}_{H_{\alpha}}^{l}\) for every path \(\alpha\in P(X,x_{0})\).
"If": The proof is analogous to the proof of [14, Theorem 4.2].
The corollary below is an extended version of Corollary 4.3 of [14].
**Corollary 3.11**.: _Suppose H is locally quasinormal and \(\alpha\in P(X,x_{0})\) with \(\alpha(1)=x\). If \(X\) is strong H-SLT, then \((p_{H_{\alpha}}^{-1}(x))^{wh}=(p_{H_{\alpha}}^{-1}(x))^{l}\)._
Proof.: It follows immediately from Theorem 3.10.
The intersection of all Spanier subgroups of \(\pi_{1}(X,x_{0})\), denoted by \(\pi_{1}^{sp}(X,x_{0})\), is called the Spanier group [12, Definition 2.3]. Moreover, the set of all homotopy classes of small loops of \(\pi_{1}(X,x_{0})\) forms a subgroup, which is denoted by \(\pi_{1}^{s}(X,x_{0})\)[17, Definition 1]; note that a loop \(\alpha\in\Omega(X,x_{0})\) is called small iff it has a homotopy representative in each open subset \(U\) of \(x_{0}\). The usefulness of these and other important subgroups can be observed in [12, 17]. In the following, we investigate the equality of these two subgroups. Of course, recall that we have the relation \(\pi_{1}^{s}(X,x_{0})\leq\pi_{1}^{sp}(X,x_{0})\).
**Proposition 3.12**.: _Let \(\pi_{1}^{s}(X,x_{0})\) contain a locally quasinormal subgroup H. Then \(\pi_{1}^{s}(X,x_{0})=\pi_{1}^{sp}(X,x_{0})\) if \(X\) is strong H-SLT at \(x_{0}\)._
Proof.: Assume that \([\theta]\in\pi_{1}^{sp}(X,x_{0})\) and \(U\) is an open subset in \(X\) containing \(x_{0}\). By Remark 3.3, we have an open subset \(x_{0}\in V\subseteq U\) such that \(i_{\#}\pi_{1}(V,x_{0})H=Hi_{\#}\pi_{1}(V,x_{0})\). Since \(X\) is strong H-SLT at \(x_{0}\), we can define an open cover \(\mathcal{U}\) of \(X\) such that \(\pi_{1}^{sp}(X,x_{0})\subseteq\pi(\mathcal{U},x_{0})\) and \([\theta]\in\pi(\mathcal{U},x_{0})\). We know that \(\theta=\Pi_{i=1}^{n}\gamma_{i}\), where \(\gamma_{i}=\alpha_{i}*\beta_{i}*\bar{\alpha}_{i}\) for \(i=1,2,...,n\), the \(\alpha_{i}\)'s are paths from \(x_{0}\) to \(\alpha_{i}(1)\), and the \(\beta_{i}\)'s are closed paths in some \(U_{i}\in\mathcal{U}\) at \(\alpha_{i}(1)\). Hence, there is a closed path \(\lambda_{i}\) in \(V\) at \(x_{0}\) such that \([\gamma_{i}*\bar{\lambda_{i}}]\in H\), that is, \([\gamma_{i}]\in H[\lambda_{i}]\). Thus, \([\theta]=[\gamma_{1}*\gamma_{2}*...*\gamma_{n}]\in(H[\lambda_{1}])(H[\lambda_{2}])...(H[\lambda_{n}])\). By the relation \(i_{\#}\pi_{1}(V,x_{0})H=Hi_{\#}\pi_{1}(V,x_{0})\), there is a closed path \(\gamma:I\to V\) at \(x_{0}\) such that \([\theta]\in H[\gamma]\), i.e., \([\theta]=[h*\gamma]=[h][\gamma]\). Since \(H\subseteq\pi_{1}^{s}(X,x_{0})\), we have \([h]\in i_{\#}\pi_{1}(V,x_{0})\). Therefore, \([\theta]\in i_{\#}\pi_{1}(V,x_{0})\), which shows that \(\theta\) is a small loop; accordingly, \(\pi_{1}^{s}(X,x_{0})=\pi_{1}^{sp}(X,x_{0})\).
**Corollary 3.13**.: _If \(X\) is strong SLT at \(x_{0}\), \(\pi_{1}^{s}(X,x_{0})=\pi_{1}^{sp}(X,x_{0})\)._
Proof.: It follows immediately from Proposition 3.12.
In the following example, we give a locally quasinormal subgroup which is not normal.
**Example 3.14**.: _As we know, the Spanier subgroup \(\pi(\mathcal{U},x_{0})\) and the path Spanier subgroup \(\widetilde{\pi}(\mathcal{V},x_{0})\) are equal if \(\widetilde{\pi}(\mathcal{V},x_{0})\) is normal [16]. Therefore, a path Spanier subgroup is not necessarily normal. On the other hand, the form of the elements of \(\widetilde{\pi}(\mathcal{V},x_{0})\) and \(\pi(\alpha,V_{\alpha})\) for each path \(\alpha\in P(X,x_{0})\) and for every open subset \(V_{\alpha}\in\mathcal{V}\) of \(\alpha(1)\) implies that \(\pi(\alpha,V_{\alpha})\widetilde{\pi}(\mathcal{V},x_{0})=\widetilde{\pi}(\mathcal{V},x_{0})\pi(\alpha,V_{\alpha})\). Accordingly, \(\widetilde{\pi}(\mathcal{V},x_{0})\) is locally quasinormal._ |
2308.06476 | Improved Bohr radius for $k$-fold symmetric univalent logharmonic
mappings | We study the $k$-fold symmetric starlike univalent logharmonic mappings of
the form $f(z)=zh(z)\overline{g(z)}$ in the unit disk $\mathbb{D}:= \lbrace z
\in \mathbb{C}: |z|<1 \rbrace$ with several examples, where $h(z)=\exp
\left(\sum_{n=1}^{\infty}a_{nk}z^{nk}\right)$ and $g(z)=\exp
\left(\sum_{n=1}^{\infty}b_{nk}z^{nk}\right)$ are analytic in $\mathbb{D}.$ The
distortion bounds of these functions are obtained, which give area bounds.
Improved Bohr radii for this family are calculated. We also introduce the
pre-Schwarzian and Schwarzian derivatives of logharmonic mappings that vanish
at the origin. | Akash Meher, Priyabrat Gochhayat | 2023-08-12T06:03:04Z | http://arxiv.org/abs/2308.06476v1 | # Improved Bohr radius for \(k\)-fold symmetric univalent logharmonic mappings
###### Abstract
We study the \(k\)-fold symmetric starlike univalent logharmonic mappings of the form \(f(z)=zh(z)\overline{g(z)}\) in the unit disk \(\mathbb{D}:=\{z\in\mathbb{C}:|z|<1\}\) with several examples, where \(h(z)=\exp\left(\sum_{n=1}^{\infty}a_{nk}z^{nk}\right)\) and \(g(z)=\exp\left(\sum_{n=1}^{\infty}b_{nk}z^{nk}\right)\) are analytic in \(\mathbb{D}\). The distortion bounds of these functions are obtained, which give area bounds. Improved Bohr radii for this family are calculated. We also introduce the pre-Schwarzian and Schwarzian derivatives of logharmonic mappings that vanish at the origin.
_2020 Mathematics Subject Classification:_ 30A10, 30C35, 30C45.
_Keywords:_ Logharmonic mappings, \(k\)-fold symmetric mapping, distortion bound, improved Bohr radius, Schwarzian derivative
## 1 Introduction
Let \(\mathcal{A}(\mathbb{D})\) be the class of analytic functions defined in the open unit disk \(\mathbb{D}:=\{z\in\mathbb{C}:|z|<1\}\). A complex-valued function \(f\) is said to be harmonic if both \(Re\{f\}\) and \(Im\{f\}\) are real harmonic. In other words, harmonic functions \(f\) are the solutions of \(\Delta f=0\), where \(\Delta\) is the Laplacian operator, defined as
\[\Delta=4\frac{\partial^{2}}{\partial z\partial\overline{z}}=\frac{\partial^{2} }{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}.\]
Every harmonic mapping \(f\) has the crucial property that it admits the canonical decomposition \(f=h+\overline{g}\), where \(h,g\in\mathcal{A}(\mathbb{D})\) are known as the analytic and co-analytic parts of \(f\), respectively. Denote by \(\mathcal{H}(\mathbb{D})\) the class of all complex-valued harmonic mappings defined in \(\mathbb{D}\).
A mapping \(f\) defined in \(\mathbb{D}\) is logharmonic if \(\log(f)\in\mathcal{H}(\mathbb{D})\). Alternatively, the logharmonic mappings are the solutions of the non-linear elliptic partial differential equation
\[\frac{\overline{f_{\overline{z}}}}{\overline{f}}=\omega\frac{f_{z}}{f}, \tag{1}\]
where \(\omega\in\mathcal{A}(\mathbb{D})\), \(|\omega|<1\), is known as the second dilatation of \(f\). Note that if \(f_{1}\) and \(f_{2}\) are two logharmonic functions with respect to \(\omega\), then \(f_{1}f_{2}\) is logharmonic with respect to the same \(\omega\), and \(f_{1}/f_{2}\) is logharmonic (provided \(f_{2}\neq 0\)). Logharmonicity is preserved under pre-composition with a conformal mapping, whereas this is not always true for post-composition. Furthermore, logharmonicity is not invariant under translation and inversion. Every non-constant logharmonic mapping is quasiregular, and therefore continuous and open. A logharmonic mapping \(f\) is also sense preserving, as its Jacobian
\[J_{f}(z)=|f_{z}(z)|^{2}-|f_{\overline{z}}(z)|^{2}=|f_{z}(z)|^{2}\left(1-|\omega(z)|^{2}\right),\qquad z\in\mathbb{D}. \tag{2}\]
is positive. The salient properties like the modified Liouville's theorem, the maximum principle, the identity principle, and the argument principle hold true for logharmonic mappings (cf. [4]).
A non-constant logharmonic mapping \(f\) defined in \(\mathbb{D}\) bears the representation [5] (also see [4])
\[f(z)=z^{m}|z|^{2\beta m}h(z)\overline{g(z)},\qquad z\in\mathbb{D},\]
where \(m\) is a non-negative integer, \(Re(\beta)>1/2\) and \(h,g\in\mathcal{A}(\mathbb{D})\) satisfying \(h(0)\neq 0\) and \(g(0)=1\). Here, \(\omega(0)\) is the only factor that determines the exponent \(\beta\) in the following way
\[\beta=\overline{\omega(0)}\frac{1+\omega(0)}{1-|\omega(0)|^{2}}.\]
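As a quick numerical check of this formula (an added illustration): if \(\omega(0)=1/2\), then \(\beta=\frac{1}{2}\cdot\frac{1+1/2}{1-1/4}=\frac{3/4}{3/4}=1\), which is indeed consistent with the requirement \(Re(\beta)>1/2\).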
For \(m=0\), \(f\) is non-vanishing at the origin and vice versa, and this type of mapping admits the representation
\[f(z)=h(z)\overline{g(z)},\qquad z\in\mathbb{D},\]
where \(h,g\in\mathcal{A}(\mathbb{D})\). Extensive studies of non-vanishing logharmonic mappings are found in [1, 2, 43, 46]. If \(f\) is a non-constant univalent logharmonic mapping in \(\mathbb{D}\) such that \(f(0)=0\) and \(f(z)\neq 0\) elsewhere, then \(f\) has the form
\[f(z)=z|z|^{2\beta}h(z)\overline{g(z)},\qquad z\in\mathbb{D},\]
where \(Re(\beta)>1/2\), and \(h,g\in\mathcal{A}(\mathbb{D})\) with \(0\notin(hg)(\mathbb{D})\) and \(g(0)=1\). This type of mapping is widely studied in [1, 3, 5, 6, 14, 16, 42]. For more information on univalent logharmonic mappings, we refer to the review article [4].
Denote \(S_{LH}\), the class of univalent logharmonic mappings \(f\) in \(\mathbb{D}\) of the form
\[f(z)=zh(z)\overline{g(z)},\qquad z\in\mathbb{D}, \tag{3}\]
where \(h,g\in\mathcal{A}(\mathbb{D})\) with
\[h(z)=\exp\left(\sum_{n=1}^{\infty}a_{n}z^{n}\right)\ \ \text{and}\ \ g(z)=\exp\left(\sum_{n=1}^{\infty}b_{n}z^{n}\right).\]
A function \(f\in S_{LH}\) is said to be starlike of order \(\alpha\) if
\[\frac{\partial}{\partial\theta}\left(\arg f(re^{i\theta})\right)=Re\left( \frac{Df(z)}{f(z)}\right)>\alpha,\]
for all \(z=re^{i\theta}\in\mathbb{D}\), where the operator \(D=z\frac{\partial}{\partial z}-\overline{z}\frac{\partial}{\partial\overline{z}}\) and \(0\leq\alpha<1\). This type of function form the class \(S_{LH}^{*}(\alpha)\) (cf. [3, 42]). For \(\alpha=0\), the class becomes \(S_{LH}^{*}\), the class of starlike univalent logharmonic mappings (cf. [6, 14]). This paper is mainly dealing with the functions which are \(k\)-fold symmetric and in the class \(S_{LH}^{*}(\alpha)\). The detailed explanations of the class are in Section 2.
### Bohr radius
In [21], Bohr described the size of the moduli of the terms of the power series of a bounded analytic function, a behavior which is now called the Bohr phenomenon. For \(f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\in\mathcal{A}(\mathbb{D})\) with \(|f(z)|<1\) in \(\mathbb{D}\), Bohr obtained the inequality
\[\sum_{n=0}^{\infty}|a_{n}||z^{n}|<1,\qquad z\in\mathbb{D} \tag{4}\]
for \(|z|<1/6\). Later, Wiener, Schur and Riesz independently found the sharp value \(|z|<1/3\) for which the Bohr inequality (4) holds (cf. [47, 49, 51]). The value \(1/3\) is known as the classical Bohr radius for the class of analytic functions.
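A small numerical illustration of this sharpness (not from the paper): for the Möbius self-map \(f_{a}(z)=(a-z)/(1-az)\) with \(0<a<1\), one has \(|a_{0}|=a\) and \(|a_{n}|=(1-a^{2})a^{n-1}\) for \(n\geq 1\), so the majorant series equals \(a+(1-a^{2})r/(1-ar)\), which reaches the value \(1\) exactly at \(r=1/(1+2a)\); letting \(a\to 1\) shows that the radius \(1/3\) cannot be improved.

```python
# Majorant series of f_a(z) = (a - z)/(1 - a z):
#   M(r) = a + (1 - a^2) r / (1 - a r),
# with M(r) = 1 exactly at r = 1/(1 + 2a), which tends to 1/3 as a -> 1.
def majorant(a, r):
    return a + (1 - a * a) * r / (1 - a * r)

for a in (0.9, 0.99, 0.999):
    r = 1 / (1 + 2 * a)
    print(f"a = {a}: critical r = {r:.4f}, M(r) = {majorant(a, r):.6f}")
```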
\[d\left(\sum_{n=0}^{\infty}|a_{n}z^{n}|,|a_{0}|\right)=\sum_{n=1}^{\infty}|a_{n}z^{n}|\leq 1-|f(0)|=d(f(0),\partial\mathbb{D})\]
for \(|z|<1/3\), where \(d\) is the Euclidean distance and \(\partial\mathbb{D}\) is the boundary of the disk \(\mathbb{D}\). The theory has also been formulated in terms of the hyperbolic metric (see [8]).
Following the work due to Bohr [21], researchers turned the Bohr phenomenon into an active area of research by investigating it in one and several complex variables, and also in different contexts. After Dixon [30] exhibited a connection between the Bohr inequality and characterizations of Banach algebras satisfying von Neumann's inequality, operator algebraists became more interested in the Bohr inequality. The extension of Bohr's theory to the context of Banach algebras is found in [20, 28, 48]. Ali et al. [15] calculated the Bohr radius for odd and even analytic functions and alternating series. For the case of \(k\)-fold symmetric analytic mappings, the Bohr radius was obtained by Kayumov and Ponnusamy [38, 39] by making use of the Cauchy-Schwarz inequality and the subordination principle. The theory is generalised to concave wedge domains in [9, 15]. The paper [9] also deals with the linkage between the Bohr phenomenon and differential subordination. Balasubramanian et al. [17]
introduced the Bohr inequality for Dirichlet series. The multidimensional Bohr radius was introduced by Boas and Khavinson [18], with the conclusion that the radius decreases to zero as the dimension increases. We refer to [25, 26, 27, 28] for more information about the Bohr phenomenon in the multidimensional case and in Banach space theory. Abu-Muhanna and Gunatillake [10] obtained the Bohr radius for the weighted Hardy-Hilbert space and concluded that the Bohr radius of the classical Hilbert space cannot be obtained. The Bohr phenomenon on the multidimensional weighted Hardy-Hilbert space is derived in [47]. In [19], the authors establish the theory for the case of Hardy spaces. Abu-Muhanna [7] was the first to generalise the theory of the Bohr phenomenon to harmonic mappings, but the theory was not correct for all cases. Kayumov et al. [41] gave the proper harmonic extension of the classical Bohr inequality and obtained the Bohr radius for the class of locally univalent harmonic mappings, quasiconformal harmonic mappings, and analytic and harmonic Bloch spaces. Kayumov and Ponnusamy [40] improved the classical Bohr inequality with four new formulations. The results are further investigated and improved in the recent review article due to Ismagilov et al. [37]. The logharmonic analogue of the classical Bohr inequality was shown by Ali et al. [14], who calculated the Bohr radius for the functions in the class \(S^{*}_{LH}\). Improved versions of the Bohr estimate for the function class \(S^{*}_{LH}\) are obtained in [11]. In this paper, we compute the improved Bohr radius and Bohr type inequalities for the functions in the class \(S^{k*}_{LH}(\alpha)\), the class of univalent \(k\)-fold symmetric mappings which are logharmonic starlike of order \(\alpha\).
The paper is organised as follows: In Section 2, we describe the class \(S^{k*}_{LH}(\alpha)\) by considering various examples. Using the distortion bounds of the functions in \(S^{k*}_{LH}(\alpha)\), we calculate the area bounds in Section 3. We present the improved Bohr radius and Bohr type inequalities, with numerical illustrations, in Section 4. The introduction of the pre-Schwarzian and Schwarzian derivatives for logharmonic mappings of the form \(f(z)=zh(z)\overline{g(z)}\), where \(h(z),g(z)\in\mathcal{A}(\mathbb{D})\), is given in Section 5.
## 2 \(k\)-fold symmetric starlike logharmonic mappings
For a positive integer \(k\), an analytic function \(f\) defined in \(\mathbb{D}\) is \(k\)-fold symmetric if \(f(e^{\frac{2\pi i}{k}}z)=e^{\frac{2\pi i}{k}}f(z)\) and has the Taylor-Maclaurin series representation
\[f(z)=\sum_{n=0}^{\infty}A_{nk+1}z^{nk+1},\qquad z\in\mathbb{D}. \tag{5}\]
Conversely, any function \(f\) with the power series representation (5) is \(k\)-fold symmetric inside the domain of convergence of the series (cf. [33, pp. 18], [50]). It is important to note that not all \(k\)-fold symmetric mappings are univalent. Denote \(\mathcal{S}^{k}\) the class of all \(k\)-fold symmetric univalent analytic functions. The class of univalent odd analytic functions is obtained for \(k=2\). The functions which are univalent analytic, \(k\)-fold symmetric and starlike of order \(\alpha\), constitute the class \(\mathcal{S}^{k*}(\alpha)\), where \(0\leq\alpha<1\).
A mapping \(f\) is said to be \(k\)-fold symmetric logharmonic if it is \(k\)-fold symmetric and a solution of (1) with respect to some \(\omega\). Let \(S^{k*}_{LH}(\alpha)\) be the class of univalent \(k\)-fold symmetric mappings which are logharmonic starlike of order \(\alpha\) defined in \(\mathbb{D}\) with the representation \(f(z)=zh(z)\overline{g(z)}\), where
\[h(z)=\exp\left(\sum_{n=1}^{\infty}a_{nk}z^{nk}\right)\ \ \text{and}\ \ g(z)=\exp\left(\sum_{n=1}^{\infty}b_{nk}z^{nk}\right),\qquad z\in \mathbb{D}.\]
When \(g\equiv 1\) in \(\mathbb{D}\), the function \(f\) is in \(\mathcal{S}^{k*}(\alpha)\). The following lemma, which is a consequence of Theorem 2.1 of [3], gives the bridge between the classes \(\mathcal{S}^{k*}(\alpha)\) and \(S^{k*}_{LH}(\alpha)\).
**Lemma 1**.: _A function \(f(z)=zh(z)\overline{g(z)}\in S^{k*}_{LH}(\alpha)\) if and only if \(\phi(z)=\frac{zh(z)}{g(z)}\in\mathcal{S}^{k*}(\alpha)\) in \(\mathbb{D}\)._
Our first observation on the class \(S^{k*}_{LH}(\alpha)\) concerns the logarithmic convexity property.
**Theorem 1**.: _The function class \(S^{k*}_{LH}(\alpha)\) is closed under logarithmic convex combination._
Proof.: Suppose \(f_{1},f_{2}\in S^{k*}_{LH}(\alpha)\) are solutions of (1) with respect to the same \(\omega\). For \(\gamma\in(0,1)\), define \(f(z)=(f_{1}(z))^{\gamma}(f_{2}(z))^{1-\gamma}\) in \(\mathbb{D}\). Then it is simple to see that \(f(e^{\frac{2\pi i}{k}}z)=e^{\frac{2\pi i}{k}}f(z)\) for some positive integer \(k\). Using the properties of logharmonic mappings, \(f\) is a solution of (1) with respect to the same \(\omega\) as \(f_{1}\) and \(f_{2}\), and hence \(f\) is in \(S^{k*}_{LH}(\alpha)\). This completes the proof.
We would like to point out that not all logharmonic mappings are \(k\)-fold symmetric and vice versa. For example, the function \(\tilde{f}_{1}(z)=z^{3}(\overline{z})^{2}\) is \(k\)-fold symmetric, but is not a solution of (1). On the other hand, \(\tilde{f}_{2}(z)=z^{3}\overline{z}\) is a solution of (1) with \(\omega(z)=1/3\), but is not \(k\)-fold symmetric.
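To see how such a dilatation is computed (a verification added for clarity): for \(\tilde{f}_{2}(z)=z^{3}\overline{z}\) we have \(f_{z}/f=3/z\) and \(\overline{f_{\overline{z}}}/\overline{f}=\overline{z}^{3}/(\overline{z}^{3}z)=1/z\), so \(\overline{f_{\overline{z}}}/\overline{f}=\frac{1}{3}\cdot f_{z}/f\), confirming \(\omega(z)=1/3\) in (1). Now we present some examples of \(k\)-fold symmetric logharmonic mappings as follows: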
**Example 1**.: Consider the function \(\tilde{f}_{3}(z)=z^{k+1}(\overline{z})^{k}\), which is not harmonic in \(\mathbb{D}^{*}=\mathbb{D}\setminus\{0\}\) as \(\tilde{f}_{3z\overline{z}}=k(k+1)|z|^{2(k-1)}z\neq 0,\forall z\in\mathbb{D}^{*}\). However, observe that \(\tilde{f}_{3}\) is \(k\)-fold symmetric and also a solution of (1) with \(\omega(z)=k/(k+1)\). Furthermore, \(\frac{D\tilde{f}_{3}(z)}{\tilde{f}_{3}(z)}=1\), showing that \(\tilde{f}_{3}\) is univalent, starlike and sense preserving. Therefore, \(\tilde{f}_{3}\) is a \(k\)-fold symmetric starlike sense-preserving univalent logharmonic mapping in \(\mathbb{D}^{*}\).
**Example 2**.: Some simple calculations show that the function \(\tilde{f}_{4}(z)=\frac{z(1-\overline{z}^{k})}{1-z^{k}},\) where \(k\) is a positive integer, is \(k\)-fold symmetric and a solution of (1) with
\[\omega(z)=\frac{-kz^{k}}{1+(k-1)z^{k}}.\]
Therefore, \(\tilde{f}_{4}\) is a \(k\)-fold symmetric sense-preserving logharmonic mapping in \(\mathbb{D}\). Further,
\[\frac{D\tilde{f}_{4}(z)}{\tilde{f}_{4}(z)}=\frac{1+(k-1)z^{k}}{1-z^{k}}+\frac {k\overline{z}^{k}}{1-\overline{z}^{k}}\]
implies that the radius of starlikeness of \(\tilde{f}_{4}\) is the unique root in \((0,1)\) of \(1+(1-2k)r^{2k}-2(k-1)r^{k}=0\).
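The root can be made explicit (a step added for clarity): writing \(x=r^{k}\), the equation \(1-2(k-1)x+(1-2k)x^{2}=0\) factors as \(((2k-1)x-1)(x+1)=0\), so the unique positive root is \(r=(2k-1)^{-1/k}\). The same polynomial, and hence the same radius, reappears in Example 3 below.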
**Example 3**.: Consider the function \(\tilde{f}_{5}(z)=\dfrac{z^{2}\overline{z}}{(1-z^{k})^{2(1-\alpha)}},\ 0\leq\alpha<1.\) Then \(\tilde{f}_{5}\) is \(k\)-fold symmetric and also a solution of (1) with
\[\omega(z)=\dfrac{1-z^{k}}{2(1-z^{k})+2k(1-\alpha)z^{k}}.\]
This shows that \(\tilde{f}_{5}\) is a member of \(k\)-fold symmetric logharmonic mappings in \(\mathbb{D}\). Note that
\[\dfrac{D\tilde{f}_{5}(z)}{\tilde{f}_{5}(z)}=1+\dfrac{2k(1-\alpha)z^{k}}{1-z^{ k}}.\]
This implies that the function \(\tilde{f}_{5}\) is starlike of order \(\alpha\) within the radius \(|z|<r^{*}\), where \(r^{*}\) is the unique root in \((0,1)\) of \(1+(1-2k)r^{2k}-2(k-1)r^{k}=0\).
Thus, in general we have the following:
**Theorem 2**.: _Every function \(f\in S^{k*}_{LH}(\alpha)\) is starlike of order \(\alpha\) in \(|z|<R,\) where \(R\) is the unique root in \((0,1)\) of \((1+2\alpha)r^{2k}-(6-2\alpha)r^{k}+1=0.\)_
Recall the function \(f_{\alpha}\) (cf. [16]) defined by
\[f_{\alpha}(z)=zh_{\alpha}(z)\overline{g_{\alpha}(z)}=\dfrac{z}{(1-z^{k})^{ \frac{1}{k}}}\dfrac{1}{(1-\overline{z}^{k})^{\frac{2\alpha-1}{k}}}\exp\left( \dfrac{(1-\alpha)}{k}Re\left(\dfrac{4z^{k}}{1-z^{k}}\right)\right),\qquad z \in\mathbb{D}, \tag{6}\]
where the analytic functions \(h_{\alpha}\) and \(g_{\alpha}\) are of the form
\[h_{\alpha}(z)=\dfrac{1}{(1-z^{k})^{\frac{1}{k}}}\exp\left(\dfrac{2(1-\alpha)z ^{k}}{k(1-z^{k})}\right),\qquad z\in\mathbb{D} \tag{7}\]
and
\[g_{\alpha}(z)=\dfrac{1}{(1-z^{k})^{\frac{2\alpha-1}{k}}}\exp\left(\dfrac{2(1- \alpha)z^{k}}{k(1-z^{k})}\right),\qquad z\in\mathbb{D}. \tag{8}\]
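Consistently with Lemma 1 (a verification added for clarity), \(h_{\alpha}(z)/g_{\alpha}(z)=(1-z^{k})^{(2\alpha-2)/k}\), so \(\phi_{\alpha}(z)=zh_{\alpha}(z)/g_{\alpha}(z)=z/(1-z^{k})^{2(1-\alpha)/k}\), which is the \(k\)-fold symmetric analytic Koebe-type function starlike of order \(\alpha\).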
It is important to note that the characteristics of \(f_{\alpha}\) in the family \(S^{k*}_{LH}(\alpha)\) are the same as those of the Koebe function in the family \(\mathcal{S}\) of analytic univalent functions. In other words, the function \(f_{\alpha}\) is the \(k\)-fold
Figure 1: Images of the unit disk under the mappings \(\tilde{f}_{4}\) and \(\tilde{f}_{5}\)
symmetric logharmonic Koebe function, which plays an extremal role in the calculation of coefficient bounds and the growth and covering theorems for the family \(S_{LH}^{k*}(\alpha)\) (cf. [16]). We will see that \(f_{\alpha}\) also provides sharpness for the distortion and area bounds.
## 3 Distortion and area bounds
Studying the geometric properties of \(k\)-fold symmetric functions is not easy, as calculating extremal functions for classical problems is quite difficult. However, in this paper an attempt has been made to derive sharp distortion and area bounds for the functions in the family \(S_{LH}^{k*}(\alpha)\). Also, by making use of these results along with growth and coefficient bounds, improved versions of the Bohr phenomenon are studied, and it is shown that the \(k\)-fold symmetric logharmonic Koebe function plays a crucial role in the aforementioned problems.
Let \(f_{1},f_{2}\in\mathcal{A}(\mathbb{D})\); then \(f_{1}\) is subordinate to \(f_{2}\), denoted by \(f_{1}\prec f_{2}\), if \(f_{1}(z)=f_{2}(\psi(z))\) for all \(z\in\mathbb{D}\), where \(\psi:\mathbb{D}\to\mathbb{D}\) is an analytic function such that \(\psi(0)=0\) and \(|\psi(z)|<1\). This type of function \(\psi\) is known as a Schwarz function. Denote by \(\mathcal{P}\) the class of Caratheodory functions. In other words, an analytic function \(p\) is in \(\mathcal{P}\) if \(Re(p)>0\) and \(p(0)=1\).
\[p(z)=\frac{1+\psi(z)}{1-\psi(z)},\qquad z\in\mathbb{D}. \tag{9}\]
For more information on subordination and Caratheodory functions, see the books [31, 34, 35]. The following preliminaries help us to establish our main results.
**Lemma 2** (cf. Corollary 3.6, [35]).: _An analytic function \(p(z)\) is in the class \(\mathcal{P}\) if and only if there is a probability measure \(\nu\) on \(\partial\mathbb{D}\) such that_
\[p(z)=\int_{\partial\mathbb{D}}\frac{1+\eta z}{1-\eta z}d\nu(\eta),\qquad z\in \mathbb{D}. \tag{10}\]
A simple calculation using the expressions (9) and (10) gives
\[\frac{\psi(z)}{1-\psi(z)}=\int_{\partial\mathbb{D}}\frac{\eta z}{1-\eta z}d \nu(\eta),\qquad z\in\mathbb{D}. \tag{11}\]
**Lemma 3** (Theorem 2.2, [16]).: _Any function \(f(z)=zh(z)\overline{g(z)}\in S_{LH}^{k*}(\alpha)\) with \(\omega(0)=0\) satisfies_
1. \(\frac{\exp\left((1-\alpha)\frac{-2|z|^{k}}{k(1+|z|^{k})}\right)}{(1+|z|^{k})^{ \frac{1}{k}}}\leq|h(z)|\leq\frac{\exp\left((1-\alpha)\frac{2|z|^{k}}{k(1-|z|^{ k})}\right)}{(1-|z|^{k})^{\frac{1}{k}}}\)_,_
2. \(\frac{\exp\left((1-\alpha)\frac{-2|z|^{k}}{k(1+|z|^{k})}\right)}{(1+|z|^{k})^{\frac{2\alpha-1}{k}}}\leq|g(z)|\leq\frac{\exp\left((1-\alpha)\frac{2|z|^{k}}{k(1-|z|^{k})}\right)}{(1-|z|^{k})^{\frac{2\alpha-1}{k}}}\)_,_
3. \(\dfrac{|z|\exp\left((1-\alpha)\dfrac{-4|z|^{k}}{k(1+|z|^{k})}\right)}{(1+|z|^{k})^{\frac{2\alpha}{k}}}\leq|f(z)|\leq\dfrac{|z|\exp\left((1-\alpha)\dfrac{4|z|^{k}}{k(1-|z|^{k})}\right)}{(1-|z|^{k})^{\frac{2\alpha}{k}}}.\)

_The inequalities are sharp for the functions of the form \(\overline{\lambda}f_{\alpha}(\lambda z),\)\(|\lambda|=1\), where \(f_{\alpha}\) is of the form (6)._

Figure 2: Images of the unit disk under the \(k\)-fold symmetric logharmonic Koebe function \(f_{\alpha}\).
**Lemma 4** (Theorem 2.3, [16]).: _Any function \(f(z)=zh(z)\overline{g(z)}\in S_{LH}^{k*}(\alpha)\) with \(\omega(0)=0\) and \(n,k\geq 1\) satisfies_
1. \(|a_{kn}|\leq\dfrac{2}{k}(1-\alpha)+\dfrac{1}{kn},\)__
2. \(|b_{kn}|\leq\dfrac{2}{k}(1-\alpha)+\dfrac{2\alpha-1}{kn}.\)__ _The inequalities are sharp for the functions of the form_ \(\overline{\lambda}f_{\alpha}(\lambda z),\)__\(|\lambda|=1,\) _where_ \(f_{\alpha}(z)\) _is given by (_6_)._
We now describe the relation between \(h\), \(g\) and \(\alpha\) in terms of subordination.
**Theorem 3**.: _Let \(0\leq\alpha<1.\) Then the function \(f(z)=zh(z)\overline{g(z)}\in S_{LH}^{k*}(\alpha)\) if and only if_
\[\left(z\dfrac{h^{\prime}(z)}{h(z)}-z\dfrac{g^{\prime}(z)}{g(z)}\right)\prec \dfrac{2(1-\alpha)z^{k}}{1-z^{k}}.\]
Proof.: Let \(f\in S_{LH}^{k*}(\alpha)\) be of the form \(f(z)=zh(z)\overline{g(z)}.\) Then
\[\alpha<Re\left(\dfrac{zf_{z}-\overline{z}f_{\overline{z}}}{f}\right)=Re\left( 1+z\dfrac{h^{\prime}(z)}{h(z)}-\overline{z}\overline{\left(\dfrac{g^{\prime}( z)}{g(z)}\right)}\right)=Re\left(1+z\dfrac{h^{\prime}(z)}{h(z)}-z\dfrac{g^{ \prime}(z)}{g(z)}\right)\]
if and only if
\[1+z\dfrac{h^{\prime}(z)}{h(z)}-z\dfrac{g^{\prime}(z)}{g(z)}=(1-\alpha)p(z^{k}) +\alpha=(1-\alpha)\dfrac{1+\psi(z^{k})}{1-\psi(z^{k})}+\alpha \tag{12}\]
for some \(p\in\mathcal{P}\) and Schwarz function \(\psi\). Therefore,
\[z\dfrac{h^{\prime}(z)}{h(z)}-z\dfrac{g^{\prime}(z)}{g(z)}=(1-\alpha)\dfrac{1+ \psi(z^{k})}{1-\psi(z^{k})}-(1-\alpha)\]
and hence
\[\left(z\dfrac{h^{\prime}(z)}{h(z)}-z\dfrac{g^{\prime}(z)}{g(z)}\right)\prec \dfrac{2(1-\alpha)z^{k}}{1-z^{k}}.\]
The distortion bounds for the functions in the class \(S_{LH}^{k*}(\alpha)\) are given by the next theorem.
**Theorem 4**.: _Let \(f=zh(z)\overline{g(z)}\in S_{LH}^{k*}(\alpha)\) with \(\omega(0)=0\). Then for \(|z|=r<1\), we have_
1. \(\frac{1-(1-2\alpha)r^{k}}{(1+r^{k})^{\frac{2(\alpha+k)}{k}}}\exp\left((1-\alpha) \frac{-4r^{k}}{k(1+r^{k})}\right)\leq|f_{z}(z)|\)__ \[\leq\frac{1+(1-2\alpha)r^{k}}{(1-r^{k})^{\frac{2(\alpha+k)}{k}}} \exp\left((1-\alpha)\frac{4r^{k}}{k(1-r^{k})}\right),\]
2. \(\frac{r^{k}\left(1-(1-2\alpha)r^{k}\right)}{(1+r^{k})^{\frac{2( \alpha+k)}{k}}}\exp\left((1-\alpha)\frac{-4r^{k}}{k(1+r^{k})}\right)\leq|f_{ \overline{z}}(z)|\)__ \[\leq\frac{r^{k}\left(1+(1-2\alpha)r^{k}\right)}{(1-r^{k})^{\frac{2( \alpha+k)}{k}}}\exp\left((1-\alpha)\frac{4r^{k}}{k(1-r^{k})}\right),\]
3. \(\frac{1-(1-2\alpha)r^{k}-(1+r^{k})^{2}}{r(1+r^{k})^{2}}\exp\left((1-\alpha) \frac{-2r^{k}}{k(1+r^{k})}\right)\leq|h^{\prime}(z)|\)__ \[\leq\frac{1+(1-2\alpha)r^{k}+(1-r^{k})^{2}}{r(1-r^{k})^{2}}\exp \left((1-\alpha)\frac{2r^{k}}{k(1-r^{k})}\right),\]
4. \(\frac{r^{k-1}(1-(1-2\alpha)r^{k})}{(1+r^{k})^{\frac{2(\alpha+k)-1}{k}}}\exp \left((1-\alpha)\frac{(-2r^{k})}{k(1+r^{k})}\right)\leq|g^{\prime}(z)|\)__ \[\leq\frac{r^{k-1}(1+(1-2\alpha)r^{k})}{(1-r^{k})^{\frac{2(\alpha+k)-1}{k}}} \exp\left((1-\alpha)\frac{2r^{k}}{k(1-r^{k})}\right).\]
_The inequalities are sharp for functions of the form \(\overline{\lambda}f_{\alpha}(\lambda z),\,|\lambda|=1,\) where \(f_{\alpha}(z)\) is given by (6)._
To prove the above theorem, we need the following lemma:
**Lemma 5**.: _Let \(f(z)=zh(z)\overline{g(z)}\in S_{LH}^{k*}(\alpha)\) be a solution of (1) with respect to \(\omega\), and let \(\phi(z)=\dfrac{zh(z)}{g(z)}\). Then_
1. \((1-\alpha)\dfrac{1-r^{k}}{1+r^{k}}+\alpha\leq\left|z\dfrac{\phi^{\prime}(z)}{ \phi(z)}\right|\leq(1-\alpha)\dfrac{1+r^{k}}{1-r^{k}}+\alpha,\)__
2. \(\frac{r}{(1+r^{k})^{\frac{2(1-\alpha)}{k}}}\leq|\phi(z)|\leq \frac{r}{(1-r^{k})^{\frac{2(1-\alpha)}{k}}},\)__
3. \(\frac{r^{k}}{1+r^{k}}\leq\left|\dfrac{\omega(z)}{1-\omega(z)} \right|\leq\frac{r^{k}}{1-r^{k}}.\)__
Proof.: By making use of Lemma 1, Lemma 2 and expression (12), we have
\[\frac{z\phi^{\prime}(z)}{\phi(z)}=(1-\alpha)\int_{|\eta|=1}\frac{1+\eta z^{k}} {1-\eta z^{k}}d\nu(\eta)+\alpha,\qquad z\in\mathbb{D},\]
for some probability measure \(\nu\) on the boundary of \(\mathbb{D}\). For \(|z|=r<1\), the above expression gives
\[\left|\frac{z\phi^{\prime}(z)}{\phi(z)}\right|\geq\min_{\nu}\left\{\min_{|z|=r }\left((1-\alpha)\int_{\partial\mathbb{D}}\frac{1+\eta z^{k}}{1-\eta z^{k}}d \nu(\eta)+\alpha\right)\right\}\geq(1-\alpha)\frac{1-r^{k}}{1+r^{k}}+\alpha\]
and
\[\left|\frac{z\phi^{\prime}(z)}{\phi(z)}\right|\leq(1-\alpha)\int_{ \partial\mathbb{D}}\left|\frac{1+\eta z^{k}}{1-\eta z^{k}}\right|d\nu(\eta)+ \alpha\leq(1-\alpha)\frac{1+r^{k}}{1-r^{k}}+\alpha.\]
This completes the proof of _(a)_. The proof of _(b)_ follows directly from _(a)_, and, using the same argument as in _(a)_, the expression (11) gives _(c)_.
Proof of Theorem 4.: Let \(f\in S^{k*}_{LH}(\alpha)\) be of the form (3) and let \(\phi(z)=\frac{zh(z)}{g(z)},z\in\mathbb{D}.\) Then
\[f(z)=\phi(z)|g(z)|^{2} \tag{13}\]
and
\[h(z)=\frac{\phi(z)g(z)}{z} \tag{14}\]
The function \(g(z)\) has the representation of the form (cf. [16])
\[g(z)=\exp\left(\int_{0}^{z}\frac{\omega(s)}{1-\omega(s)}\frac{ \phi^{\prime}(s)}{\phi(s)}ds\right),\qquad z\in\mathbb{D}, \tag{15}\]
where \(\omega\) is defined as in (1). Then, (15) gives
\[|g(z)|^{2}=\exp\left(2Re\int_{0}^{z}\frac{\omega(s)}{1-\omega(s)}\frac{\phi^{ \prime}(s)}{\phi(s)}ds\right)\]
and the expressions (13) and (14) respectively become
\[f(z)=\phi(z)\exp\left(2Re\int_{0}^{z}\frac{\omega(s)}{1-\omega(s)}\frac{\phi^ {\prime}(s)}{\phi(s)}ds\right) \tag{16}\]
and
\[h(z)=\frac{\phi(z)}{z}\exp\left(\int_{0}^{z}\frac{\omega(s)}{1-\omega(s)} \frac{\phi^{\prime}(s)}{\phi(s)}ds\right) \tag{17}\]
Taking the partial derivatives of (16) and (17) with respect to \(z\), using the Leibniz rule, gives
\[f_{z}(z) =\phi^{\prime}(z)\exp\left(2Re\int_{0}^{z}\frac{\omega(s)}{1- \omega(s)}\frac{\phi^{\prime}(s)}{\phi(s)}ds\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\phi(z)\frac{\omega(z )}{1-\omega(z)}\frac{\phi^{\prime}(z)}{\phi(z)}\exp\left(2Re\int_{0}^{z}\frac{ \omega(s)}{1-\omega(s)}\frac{\phi^{\prime}(s)}{\phi(s)}ds\right)\] \[=\frac{\phi^{\prime}(z)}{\phi(z)}f(z)+\frac{\phi^{\prime}(z)}{ \phi(z)}f(z)\frac{\omega(z)}{1-\omega(z)}=f(z)\frac{1}{1-\omega(z)}\frac{\phi ^{\prime}(z)}{\phi(z)} \tag{18}\]
and
\[h^{\prime}(z)=\frac{z\phi^{\prime}(z)-\phi(z)}{z^{2}}\exp\left( \int_{0}^{z}\frac{\omega(s)}{1-\omega(s)}\frac{\phi^{\prime}(s)}{\phi(s)}ds\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{\phi(z)}{z} \frac{\omega(z)}{1-\omega(z)}\frac{\phi^{\prime}(z)}{\phi(z)}\exp\left(\int_{0} ^{z}\frac{\omega(s)}{1-\omega(s)}\frac{\phi^{\prime}(s)}{\phi(s)}ds\right)\]
\[=\frac{z\phi^{\prime}(z)-\phi(z)}{z\phi(z)}h(z)+h(z)\frac{\omega(z)}{1-\omega(z)} \frac{\phi^{\prime}(z)}{\phi(z)}=h(z)\left(\frac{1}{1-\omega(z)}\frac{\phi^{ \prime}(z)}{\phi(z)}-\frac{1}{z}\right) \tag{19}\]
By making use of the right inequalities of Lemma 3 and Lemma 5 for \(|z|=r<1\), the expressions (18) and (19) become
\[|f_{z}(z)| \leq|f(z)|\left|\frac{1}{z(1-\omega(z))}\right|\left|z\frac{\phi^ {\prime}(z)}{\phi(z)}\right|\] \[\leq\frac{r\exp\left((1-\alpha)\frac{4r^{k}}{k(1-r^{k})}\right)} {(1-r^{k})^{\frac{2\alpha}{k}}}\frac{1}{r(1-r^{k})}\left((1-\alpha)\frac{1+r^ {k}}{1-r^{k}}+\alpha\right)\] \[=\frac{1}{(1-r^{k})^{\frac{2\alpha}{k}+1}}\left(\frac{(1-\alpha) (1+r^{k})}{1-r^{k}}+\alpha\right)\exp\left((1-\alpha)\frac{4r^{k}}{k(1-r^{k}) }\right)\] \[=\frac{1+(1-2\alpha)r^{k}}{(1-r^{k})^{\frac{2(\alpha+k)}{k}}} \exp\left((1-\alpha)\frac{4r^{k}}{k(1-r^{k})}\right)\]
and
\[|h^{\prime}(z)| \leq|h(z)|\left(\left|\frac{1}{1-\omega(z)}\right|\left|\frac{ \phi^{\prime}(z)}{\phi(z)}\right|+\left|\frac{1}{z}\right|\right)\] \[\leq\frac{\exp\left((1-\alpha)\frac{2r^{k}}{k(1-r^{k})}\right)}{( 1-r^{k})^{\frac{1}{k}}}\left(\frac{1}{r(1-r^{k})}\left(\frac{(1-\alpha)(1+r^ {k})}{1-r^{k}}+\alpha\right)+\frac{1}{r}\right)\] \[=\frac{1+(1-2\alpha)r^{k}+(1-r^{k})^{2}}{r(1-r^{k})^{2}}\exp \left((1-\alpha)\frac{2r^{k}}{k(1-r^{k})}\right).\]
Similarly, with the help of the left inequalities of Lemma 3 and Lemma 5 for \(|z|=r<1\), the expressions (18) and (19) give the left inequalities of _(i)_ and _(iii)_. This completes the proof of _(i)_ and _(iii)_. The inequalities _(ii)_ and _(iv)_ follow in the same way as _(i)_ and _(iii)_.
Since the functions \(f_{z},\ f_{\overline{z}},\ h^{\prime}\) and \(g^{\prime}\) respectively depend on \(f,\ f,\ h\) and \(g\), equality occurs in the inequalities of Theorem 4 precisely when equality occurs in the inequalities of Lemma 3.
**Remark 1**.: _Upon substituting \(k=1\), and \(\alpha=0\) together with \(k=1\), Theorem 4 respectively gives the distortion bounds for \(S^{*}_{LH}(\alpha)\) [42] and \(S^{*}_{LH}\) [14]._
With the assistance of the distortion bounds, we next calculate bounds on the area of the image of a disk under functions in the class \(S^{k*}_{LH}(\alpha)\).
**Theorem 5**.: _The area \(Ar\) of the image of the disk \(\mathbb{D}_{r}:=\{z\in\mathbb{C}:|z|<r<1\}\) under a mapping \(f\in S^{k*}_{LH}(\alpha)\) satisfies_
\[2\pi L_{1}\leq Ar\leq 2\pi L_{2}, \tag{20}\]
_where_
\[L_{1}=\int_{0}^{r}\left(\frac{1-(1-2\alpha)\rho^{k}}{(1+\rho^{k})^{\frac{2( \alpha+k)}{k}}}\right)^{2}\exp\left(\frac{-8(1-\alpha)\rho^{k}}{k(1+\rho^{k}) }\right)\rho d\rho\]
\[-\int_{0}^{r}\left(\frac{\rho^{k}(1+(1-2\alpha)\rho^{k})}{(1-\rho^{k})^{ \frac{2(\alpha+k)}{k}}}\right)^{2}\exp\left(\frac{8(1-\alpha)\rho^{k}}{k(1-\rho ^{k})}\right)\rho d\rho\]
_and_
\[L_{2}=\int_{0}^{r}\left(\frac{1+(1-2\alpha)\rho^{k}}{(1-\rho^{k} )^{\frac{2(\alpha+k)}{k}}}\right)^{2}\exp\left(\frac{8(1-\alpha)\rho^{k}}{k(1- \rho^{k})}\right)\rho d\rho\\ -\int_{0}^{r}\left(\frac{\rho^{k}(1-(1-2\alpha)\rho^{k})}{(1+\rho ^{k})^{\frac{2(\alpha+k)}{k}}}\right)^{2}\exp\left(\frac{-8(1-\alpha)\rho^{k}} {k(1+\rho^{k})}\right)\rho d\rho.\]
Proof.: Let \(f\in S_{LH}^{k*}(\alpha).\) By making use of (2), the area \(Ar\) of \(\mathbb{D}_{r}\) under \(f\) is given by
\[Ar=\int\int_{\mathbb{D}_{r}}J_{f}(z)dxdy=\int\int_{\mathbb{D}_{r}}(|f_{z}|^{2 }-|f_{\overline{z}}|^{2})dxdy,\qquad z=x+iy, \tag{21}\]
where \(J_{f}\) is the Jacobian of the mapping \(f.\) Using Theorem 4 in (21), and taking \(z=\rho e^{i\theta},\;0\leq\rho<r\) and \(0\leq\theta\leq 2\pi\), gives
\[\int_{0}^{2\pi}L_{1}d\theta\leq Ar\leq\int_{0}^{2\pi}L_{2}d\theta\]
This implies (20).
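The integrals \(L_{1}\) and \(L_{2}\) have no elementary closed form in general, so in practice they can be evaluated numerically. Below is a minimal sketch (ours; `area_bounds` is a hypothetical helper, and the sample values of \(\alpha\), \(k\), \(r\) are illustrative only) that evaluates the bounds of Theorem 5 with the midpoint rule, using the distortion bounds of Theorem 4 as the integrands.

```python
import math

def area_bounds(alpha, k, r, steps=20000):
    # midpoint-rule evaluation of L1 and L2 from Theorem 5; the integrands are
    # the squared distortion bounds of Theorem 4 times the Jacobian factor rho
    def up(p):   # upper bound for |f_z| at |z| = p (Theorem 4 (i))
        return ((1 + (1 - 2 * alpha) * p**k) / (1 - p**k) ** (2 * (alpha + k) / k)
                * math.exp(4 * (1 - alpha) * p**k / (k * (1 - p**k))))
    def lo(p):   # lower bound for |f_z| at |z| = p
        return ((1 - (1 - 2 * alpha) * p**k) / (1 + p**k) ** (2 * (alpha + k) / k)
                * math.exp(-4 * (1 - alpha) * p**k / (k * (1 + p**k))))
    h = r / steps
    L1 = L2 = 0.0
    for i in range(steps):
        p = (i + 0.5) * h
        # the bounds for |f_zbar| are p**k times the corresponding |f_z| bounds
        L1 += (lo(p) ** 2 - (p**k * up(p)) ** 2) * p * h
        L2 += (up(p) ** 2 - (p**k * lo(p)) ** 2) * p * h
    return 2 * math.pi * L1, 2 * math.pi * L2

print(area_bounds(alpha=0.5, k=2, r=0.5))
```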
## 4 Improved Bohr radius
For the function class \(S_{LH}^{k*}(\alpha),\) the Bohr radius was calculated by Alizadeh et al. [16]. In this section, we calculate an improved Bohr radius and Bohr-type inequalities, and illustrate the radii numerically for different values of the parameters \(\alpha\) and \(k.\) The first theorem of this section gives the improved Bohr radius for functions in the family \(S_{LH}^{k*}(\alpha)\). The statement involves the dilogarithm function, defined by
\[Li_{2}(z)=\sum_{n=1}^{\infty}\frac{z^{n}}{n^{2}},\qquad z\in\mathbb{C}.\]
**Theorem 6**.: _Let \(f(z)=zh(z)\overline{g(z)}\in S_{LH}^{k*}(\alpha)\) with \(\omega(0)=0.\) Then for any real \(\theta,\) the inequality_
\[|z|\exp\left(\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b_{nk}+ka_{nk}b_{nk}||z|^{ nk}\right)\leq d(f(0),\partial f(\mathbb{D}))\]
_is true for \(|z|\leq r_{1},\) where \(r_{1}\) is the unique root in \((0,1)\) of_
\[\frac{r2^{\frac{2\alpha}{k}}}{(1-r^{k})^{\frac{2\alpha(3-2\alpha)}{k}}}\exp\left(\left[\frac{4(1-\alpha)(2-\alpha)r^{k}}{1-r^{k}}+(2\alpha-1)Li_{2}(r^{k})+2(1-\alpha)\right]\frac{1}{k}\right)=1.\]
_The radius \(r_{1}\) is the best possible._
Proof.: Let \(f(z)=zh(z)\overline{g(z)}\in S^{k*}_{LH}(\alpha)\). In view of Lemma 3, we have the following relation
\[\frac{1}{2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}}\leq d(f(0),\partial f( \mathbb{D}))\leq 1. \tag{22}\]
Then, for any real \(\theta\), and through the use of Lemma 4 together with (22), we have
\[|z|\exp\left(\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b_{nk}+ka_{nk} b_{nk}||z|^{nk}\right)\] \[\qquad\leq r\exp\left(\sum_{n=1}^{\infty}\left(\frac{4}{k}(1- \alpha)+\frac{2\alpha}{kn}+\frac{1}{k}\left[2(1-\alpha)+\frac{1}{n}\right] \left[2(1-\alpha)+\frac{2\alpha-1}{n}\right]\right)r^{nk}\right)\] \[\qquad=r\exp\left(\sum_{n=1}^{\infty}\left(\frac{4}{k}(1-\alpha)( 2-\alpha)+\frac{2\alpha}{kn}(3-2\alpha)+\frac{2\alpha-1}{kn^{2}}\right)r^{nk}\right)\] \[\qquad=r\exp\left(\frac{4(1-\alpha)(2-\alpha)r^{k}}{k(1-r^{k})} -\frac{2\alpha(3-2\alpha)}{k}\log(1-r^{k})+\frac{2\alpha-1}{k}Li_{2}(r^{k})\right)\] \[\qquad\leq d(f(0),\partial f(\mathbb{D}))\]
if and only if
\[\frac{r}{(1-r^{k})^{\frac{2\alpha(3-2\alpha)}{k}}}\exp\left(\frac{4(1-\alpha)( 2-\alpha)r^{k}}{k(1-r^{k})}+\frac{2\alpha-1}{k}Li_{2}(r^{k})\right)\leq\frac{ 1}{2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}}.\]
Therefore, for this case, the Bohr radius \(r_{1}\) is the unique root in \((0,1)\) of the equation
\[\frac{r}{(1-r^{k})^{\frac{2\alpha(3-2\alpha)}{k}}}\exp\left(\frac{4(1-\alpha) (2-\alpha)r^{k}}{k(1-r^{k})}+\frac{2\alpha-1}{k}Li_{2}(r^{k})\right)=\frac{1} {2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}},\]
that is,
\[\frac{r2^{\frac{2\alpha}{k}}}{(1-r^{k})^{\frac{2\alpha(3-2\alpha)}{k}}}\exp\left(\left[\frac{4(1-\alpha)(2-\alpha)r^{k}}{1-r^{k}}+(2\alpha-1)Li_{2}(r^{k})+2(1-\alpha)\right]\frac{1}{k}\right)=1.\]
To show the uniqueness of \(r_{1},\) consider the function \(f_{1}:[0,1)\rightarrow\mathbb{R},\) defined by
\[f_{1}(r)=\frac{r2^{\frac{2\alpha}{k}}}{(1-r^{k})^{\frac{2\alpha(3-2\alpha)}{k}}}\exp\left(\left[\frac{4(1-\alpha)(2-\alpha)r^{k}}{1-r^{k}}+(2\alpha-1)Li_{2}(r^{k})+2(1-\alpha)\right]\frac{1}{k}\right)-1.\]
Note that \(f_{1}(0)=-1\), \(\lim_{r\to 1}f_{1}(r)=\infty\) and \(f_{1}^{\prime}(r)>0,\ \forall r\in(0,1).\) In accordance with the intermediate value theorem, \(r_{1}\) is the unique root of \(f_{1}\).
To show the sharpness of \(r_{1}\), consider the function defined in (6) and \(r=r_{1}\). For this function
\[|a_{nk}|=\frac{2}{k}(1-\alpha)+\frac{1}{kn},\ \ |b_{nk}|=\frac{2}{k}(1-\alpha)+\frac{2\alpha-1}{kn}\ \ \text{and}\ \ d(f(0),\partial f(\mathbb{D}))=\frac{1}{2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}}.\]
Then,
\[|z|\exp\left(\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b_{nk}+ka_{nk}b_{nk}||z|^{nk}\right)\]
\[=r_{1}\exp\left(\sum_{n=1}^{\infty}\left(\frac{4}{k}(1-\alpha)+\frac{2\alpha}{kn}+k\left[\frac{2}{k}(1-\alpha)+\frac{1}{kn}\right]\left[\frac{2}{k}(1-\alpha)+\frac{2\alpha-1}{kn}\right]\right)r_{1}^{nk}\right)\] \[=\frac{r_{1}}{(1-r_{1}^{k})^{\frac{2\alpha(3-2\alpha)}{k}}}\exp\left(\frac{4(1-\alpha)(2-\alpha)r_{1}^{k}}{k(1-r_{1}^{k})}+\frac{2\alpha-1}{k}Li_{2}(r_{1}^{k})\right)=d(f(0),\partial f(\mathbb{D})).\]
Therefore, the radius \(r_{1}\) is the best possible.
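Since the defining equation for \(r_{1}\) involves the dilogarithm, a convenient way to evaluate the radius is with `mpmath`, which provides \(Li_{2}\) as `polylog(2, x)`. The following is a hedged numerical sketch (the helper names `f1` and `bohr_r1` are ours), relying on the monotonicity of \(f_{1}\) established above.

```python
from mpmath import mp, mpf, exp, polylog

mp.dps = 30

def f1(r, alpha, k):
    # the function f_1 from the uniqueness argument above (defining equation minus 1)
    a, kk = mpf(alpha), mpf(k)
    return (r * 2 ** (2 * a / kk) / (1 - r ** kk) ** (2 * a * (3 - 2 * a) / kk)
            * exp((4 * (1 - a) * (2 - a) * r ** kk / (1 - r ** kk)
                   + (2 * a - 1) * polylog(2, r ** kk) + 2 * (1 - a)) / kk) - 1)

def bohr_r1(alpha, k):
    # f_1(0) = -1, f_1(r) -> infinity as r -> 1, and f_1 is increasing: bisect
    lo, hi = mpf('1e-12'), 1 - mpf('1e-12')
    for _ in range(120):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f1(mid, alpha, k) < 0 else (lo, mid)
    return (lo + hi) / 2

for alpha in (0, 0.25):
    for k in (1, 2, 3):
        print(f"alpha={alpha}, k={k}: r_1 ~ {bohr_r1(alpha, k)}")
```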
The next theorem gives the sharp Bohr radii for the analytic and co-analytic factors of the functions in the class \(S_{LH}^{k*}(\alpha)\) when \(|a_{nk}|^{2}\) and \(|b_{nk}|^{2}\), respectively, are added to their power series expansions.
**Theorem 7**.: _Let \(f(z)=zh(z)\overline{g(z)}\in S_{LH}^{k*}(\alpha)\) with \(\omega(0)=0\), \(H(z)=zh(z)\) and \(G(z)=zg(z)\). Then for \(z\in\mathbb{D}\),_
1. _the inequality_ \[|z|\exp\left(\sum_{n=1}^{\infty}\left(|a_{nk}|+k|a_{nk}|^{2}\right)|z|^{nk} \right)\leq d(H(0),\partial H(\mathbb{D}))\] _holds for_ \(|z|=r\leq r_{2}\)_, where_ \(r_{2}\) _is the unique root in_ \((0,1)\) _of the equation_ \[\frac{2^{\frac{1}{k}}r}{(1-r^{k})^{\frac{5-4\alpha}{k}}}\exp\left(\left(\frac{ 2(1-\alpha)(3-2\alpha)r^{k}}{1-r^{k}}+Li_{2}(r^{k})+1-\alpha\right)\frac{1}{k }\right)=1,\]
2. _the inequality_ \[|z|\exp\left(\sum_{n=1}^{\infty}\left(|b_{nk}|+k|b_{nk}|^{2}\right)|z|^{nk} \right)\leq d(G(0),\partial G(\mathbb{D}))\] _holds for_ \(|z|=r\leq r_{3}\)_, where_ \(r_{3}\) _is the unique root in_ \((0,1)\) _of the equation_ \[\frac{2^{\frac{2\alpha-1}{k}}r}{(1-r^{k})^{\frac{(2\alpha-1)(5-4\alpha)}{k}}} \exp\left(\left(\frac{2(1-\alpha)(3-2\alpha)r^{k}}{1-r^{k}}+(2\alpha-1)^{2}Li_ {2}(r^{k})+1-\alpha\right)\frac{1}{k}\right)=1.\]
Proof.: Let \(f(z)=zh(z)\overline{g(z)}\in S_{LH}^{k*}(\alpha)\). From Lemma 3, we have the relations
\[\frac{1}{2^{\frac{1}{k}}e^{\frac{1-\alpha}{k}}}\leq d(H(0),\partial H(\mathbb{ D}))\leq 1 \tag{23}\]
and
\[\frac{1}{2^{\frac{2\alpha-1}{k}}e^{\frac{1-\alpha}{k}}}\leq d(G(0),\partial G (\mathbb{D}))\leq 1 \tag{24}\]
(i) By an application of Lemma 4 and (23), for \(|z|=r<1\) we have
\[|z|\exp\left(\sum_{n=1}^{\infty}\left(|a_{nk}|+k|a_{nk}|^{2} \right)|z|^{nk}\right)\] \[=r\exp\left(\sum_{n=1}^{\infty}\left(\frac{2(1-\alpha)(3-2\alpha) }{k}+\frac{5-4\alpha}{k}\frac{1}{n}+\frac{1}{k}\frac{1}{n^{2}}\right)r^{nk}\right)\]
\[=r\exp\left(\frac{2(1-\alpha)(3-2\alpha)r^{k}}{k(1-r^{k})}-\frac{5-4 \alpha}{k}\log(1-r^{k})+\frac{1}{k}Li_{2}(r^{k})\right)\] \[\leq d(H(0),\partial H(\mathbb{D}))\]
if and only if
\[\frac{r}{(1-r^{k})^{\frac{5-4\alpha}{k}}}\exp\left(\left(\frac{2(1-\alpha)(3-2 \alpha)r^{k}}{1-r^{k}}+Li_{2}(r^{k})\right)\frac{1}{k}\right)\leq\frac{1}{2^{ \frac{1}{k}}e^{\frac{1-\alpha}{k}}},\]
that is,
\[\frac{2^{\frac{1}{k}}r}{(1-r^{k})^{\frac{5-4\alpha}{k}}}\exp\left(\left(\frac{ 2(1-\alpha)(3-2\alpha)r^{k}}{1-r^{k}}+Li_{2}(r^{k})+1-\alpha\right)\frac{1}{k }\right)\leq 1.\]
Therefore, for this case, the Bohr radius \(r_{2}\) is the unique root of the equation
\[\frac{2^{\frac{1}{k}}r}{(1-r^{k})^{\frac{5-4\alpha}{k}}}\exp\left(\left(\frac{ 2(1-\alpha)(3-2\alpha)r^{k}}{1-r^{k}}+Li_{2}(r^{k})+1-\alpha\right)\frac{1}{k }\right)=1.\]
To show the uniqueness of \(r_{2},\) consider the function \(f_{2}:[0,1)\rightarrow\mathbb{R},\) defined by
\[f_{2}(r)=\frac{2^{\frac{1}{k}}r}{(1-r^{k})^{\frac{5-4\alpha}{k}}}\exp\left( \left(\frac{2(1-\alpha)(3-2\alpha)r^{k}}{1-r^{k}}+Li_{2}(r^{k})+1-\alpha \right)\frac{1}{k}\right)-1.\]
Note that \(f_{2}(0)=-1,\)\(\lim_{r\to 1}f_{2}(r)=\infty\) and \(f_{2}^{\prime}(r)>0,\ \forall r\in(0,1).\) In accordance with the intermediate value theorem, \(r_{2}\) is the unique root of \(f_{2}\). To show the sharpness, consider the function (7) and \(r=r_{2}\). Then
\[|a_{nk}|=\frac{2}{k}(1-\alpha)+\frac{1}{kn}\ \ \text{and}\ \ d(H(0),\partial H( \mathbb{D}))=\frac{1}{2^{\frac{1}{k}}e^{\frac{1-\alpha}{k}}}.\]
Therefore,
\[|z|\exp\left(\sum_{n=1}^{\infty}\left(|a_{nk}|+k|a_{nk}|^{2}\right)|z|^{nk}\right)=r_{2}\exp\left(\sum_{n=1}^{\infty}\left(\frac{2}{k}(1-\alpha)+\frac{1}{kn}+k\left(\frac{2}{k}(1-\alpha)+\frac{1}{kn}\right)^{2}\right)r_{2}^{kn}\right)\\ =\frac{r_{2}}{(1-r_{2}^{k})^{\frac{5-4\alpha}{k}}}\exp\left(\frac{2(1-\alpha)(3-2\alpha)r_{2}^{k}}{k(1-r_{2}^{k})}+\frac{1}{k}Li_{2}(r_{2}^{k})\right)=d(H(0),\partial H(\mathbb{D})).\]
Therefore, \(r_{2}\) is the best possible.
(ii) For \(|z|=r<1\), the use of Lemma 4 along with (24) give
\[|z|\exp\left(\sum_{n=1}^{\infty}\left(|b_{nk}|+k|b_{nk}|^{2}\right)|z|^{nk}\right)\] \[\leq r\exp\left(\sum_{n=1}^{\infty}\left(\frac{2}{k}(1-\alpha)+\frac{2\alpha-1}{kn}+k\left(\frac{2}{k}(1-\alpha)+\frac{2\alpha-1}{kn}\right)^{2}\right)r^{kn}\right)\] \[=r\exp\left(\sum_{n=1}^{\infty}\left(\frac{2(1-\alpha)(3-2\alpha)}{k}+\frac{(2\alpha-1)(5-4\alpha)}{k}\frac{1}{n}+\frac{(2\alpha-1)^{2}}{k}\frac{1}{n^{2}}\right)r^{nk}\right)\]
\[=r\exp\left(\frac{2(1-\alpha)(3-2\alpha)r^{k}}{k(1-r^{k})}-\frac{(2 \alpha-1)(5-4\alpha)}{k}\log(1-r^{k})+\frac{(2\alpha-1)^{2}}{k}Li_{2}(r^{k})\right)\] \[\leq d(G(0),\partial G(\mathbb{D}))\]
if and only if
\[\frac{r}{(1-r^{k})^{\frac{(2\alpha-1)(5-4\alpha)}{k}}}\exp\left( \frac{2(1-\alpha)(3-2\alpha)r^{k}}{k(1-r^{k})}+\frac{(2\alpha-1)^{2}}{k}Li_{2} (r^{k})\right)\leq\frac{1}{2^{\frac{2\alpha-1}{k}}e^{\frac{1-\alpha}{k}}},\]
that is,
\[\frac{2^{\frac{2\alpha-1}{k}}r}{(1-r^{k})^{\frac{(2\alpha-1)(5-4 \alpha)}{k}}}\exp\left(\left(\frac{2(1-\alpha)(3-2\alpha)r^{k}}{1-r^{k}}+(2 \alpha-1)^{2}Li_{2}(r^{k})+1-\alpha\right)\frac{1}{k}\right)\leq 1.\]
Therefore, for this case, the Bohr radius \(r_{3}\) is the unique root of the equation
\[\frac{2^{\frac{2\alpha-1}{k}}r}{(1-r^{k})^{\frac{(2\alpha-1)(5-4 \alpha)}{k}}}\exp\left(\left(\frac{2(1-\alpha)(3-2\alpha)r^{k}}{1-r^{k}}+(2 \alpha-1)^{2}Li_{2}(r^{k})+1-\alpha\right)\frac{1}{k}\right)=1.\]
To show the uniqueness of \(r_{3},\) consider the function \(f_{3}:[0,1)\rightarrow\mathbb{R},\) defined by
\[f_{3}(r)=\frac{2^{\frac{2\alpha-1}{k}}r}{(1-r^{k})^{\frac{(2 \alpha-1)(5-4\alpha)}{k}}}\exp\left(\left(\frac{2(1-\alpha)(3-2\alpha)r^{k}}{1 -r^{k}}+(2\alpha-1)^{2}Li_{2}(r^{k})+1-\alpha\right)\frac{1}{k}\right)-1.\]
Note that \(f_{3}(0)=-1,\)\(\lim_{r\to 1}f_{3}(r)=\infty\) and \(f_{3}^{\prime}(r)>0,\ \forall r\in(0,1).\) In accordance with the intermediate value theorem, \(r_{3}\) is the unique root of \(f_{3}\). To show the sharpness, consider the function (8) and \(r=r_{3}\). For this function
\[\left|b_{nk}\right|=\frac{2}{k}(1-\alpha)+\frac{2\alpha-1}{kn}\ \ \text{and}\ \ d(G(0),\partial G(\mathbb{D}))=\frac{1}{2^{\frac{2\alpha-1}{k}}e^{\frac{1- \alpha}{k}}}.\]
Therefore,
\[|z|\exp\left(\sum_{n=1}^{\infty}\left(|b_{nk}|+k|b_{nk}|^{2}\right)|z|^{nk}\right)=r_{3}\exp\left(\sum_{n=1}^{\infty}\left(\frac{2}{k}(1-\alpha)+\frac{2\alpha-1}{kn}+k\left(\frac{2}{k}(1-\alpha)+\frac{2\alpha-1}{kn}\right)^{2}\right)r_{3}^{kn}\right)\] \[=\frac{r_{3}}{(1-r_{3}^{k})^{\frac{(2\alpha-1)(5-4\alpha)}{k}}}\exp\left(\frac{2(1-\alpha)(3-2\alpha)r_{3}^{k}}{k(1-r_{3}^{k})}+\frac{(2\alpha-1)^{2}}{k}Li_{2}(r_{3}^{k})\right)=d(G(0),\partial G(\mathbb{D})).\]
Therefore, \(r_{3}\) is the best possible.
We next calculate an improved sharp Bohr radius for functions in the class \(S_{LH}^{k*}(\alpha)\) by adding the modulus of the sum of the analytic and co-analytic factors to the series expansion.
**Theorem 8**.: _Let \(f(z)=zh(z)\overline{g(z)}\in S^{k*}_{LH}(\alpha)\) with \(\omega(0)=0\), \(|h(z)|\leq 1\) and \(|g(z)|\leq 1\). Then for any real \(\theta\), we have_
\[|z|\exp\left(|h(z)+g(z)|+\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b_{nk}||z|^{nk} \right)\leq d(f(0),\partial f(\mathbb{D}))\]
_for \(|z|=r\leq r_{4},\) where \(r_{4}\) is the unique root in \((0,1)\) of the equation_
\[\left(\frac{2}{1-r^{k}}\right)^{\frac{2\alpha}{k}}e^{2}r\exp\left(\left(\frac {2(1-\alpha)r^{k}}{1-r^{k}}+1-\alpha\right)\frac{2}{k}\right)=1\]
_The radius \(r_{4}\) is the best possible._
Proof.: Let \(f(z)=zh(z)\overline{g(z)}\in S^{k*}_{LH}(\alpha).\) Then from Lemmas 3 and 4 with \(|z|=r<1\), we have
\[|z|\exp\left(|h(z)+g(z)|+\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b _{nk}||z|^{nk}\right)\] \[\qquad\leq e^{2}r\exp\left(\sum_{n=1}^{\infty}\left(\frac{4}{k}(1 -\alpha)+\frac{2\alpha}{kn}\right)r^{nk}\right)\leq d(f(0),\partial f(\mathbb{ D}))\]
if and only if
\[\frac{e^{2}r}{(1-r^{k})^{\frac{2\alpha}{k}}}\exp\left(\frac{4(1-\alpha)r^{k}}{ k(1-r^{k})}\right)\leq\frac{1}{2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}},\]
That is,
\[\left(\frac{2}{1-r^{k}}\right)^{\frac{2\alpha}{k}}e^{2}r\exp\left(\frac{2(1- \alpha)}{k}\frac{1+r^{k}}{1-r^{k}}\right)\leq 1\]
Here, the Bohr radius \(r_{4}\) is the unique root in \((0,1)\) of
\[\left(\frac{2}{1-r^{k}}\right)^{\frac{2\alpha}{k}}e^{2}r\exp\left(\frac{2(1- \alpha)}{k}\frac{1+r^{k}}{1-r^{k}}\right)=1.\]
To show the uniqueness of \(r_{4},\) consider the function \(f_{4}:[0,1)\rightarrow\mathbb{R},\) defined by
\[f_{4}(r)=\left(\frac{2}{1-r^{k}}\right)^{\frac{2\alpha}{k}}e^{2}r\exp\left( \frac{2(1-\alpha)}{k}\frac{1+r^{k}}{1-r^{k}}\right)-1.\]
Note that \(f_{4}(0)=-1\), \(\lim_{r\to 1}f_{4}(r)=\infty\) and \(f_{4}^{\prime}(r)>0,\ \forall r\in(0,1).\) In accordance with the intermediate value theorem, \(r_{4}\) is the unique root of \(f_{4}\). To show the sharpness of \(r_{4}\), consider the function of the form (6) and \(r=r_{4}\):
\[|z|\exp\left(|h(z)+g(z)|+\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b _{nk}||z|^{nk}\right)\] \[=\frac{e^{2}r_{4}}{(1-r_{4}^{k})^{\frac{2\alpha}{k}}}\exp\left( \frac{4(1-\alpha)r_{4}^{k}}{k(1-r_{4}^{k})}\right)=d(f(0),\partial f(\mathbb{ D}))\]
Therefore, \(r_{4}\) is the best possible.
**Theorem 9**.: _Any function \(f(z)=zh(z)\overline{g(z)}\) in the class \(S^{k*}_{LH}(\alpha)\) with \(|h(z)|\leq 1\) and \(|g(z)|\leq 1\) in \(\mathbb{D}\) satisfies, for any real \(\theta\), the inequality_
\[|f(z)|+|z|\exp\left(\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b_{nk}||z|^{nk}\right) \leq d(f(0),\partial f(\mathbb{D}))\]
_for \(|z|=r\leq r_{5}\), where \(r_{5}\) is the unique root in \((0,1)\) of the equation_
\[r+\frac{r}{(1-r^{k})^{\frac{2\alpha}{k}}}\exp\left(\frac{4(1-\alpha)r^{k}}{k( 1-r^{k})}\right)=\frac{1}{2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}}\]
_and \(r_{5}\) is the best possible._
Proof.: Let \(f(z)=zh(z)\overline{g(z)}\in S^{k*}_{LH}(\alpha).\) In view of Lemma 4, we have
\[|f(z)| +|z|\exp\left(\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b_{nk}||z|^{nk}\right)\] \[\leq r+r\exp\left(\frac{4(1-\alpha)r^{k}}{k}\frac{1}{1-r^{k}}- \frac{2\alpha}{k}\log(1-r^{k})\right)\leq d(f(0),\partial f(\mathbb{D}))\]
if and only if
\[r+\frac{r}{(1-r^{k})^{\frac{2\alpha}{k}}}\exp\left(\frac{4(1-\alpha)r^{k}}{k( 1-r^{k})}\right)\leq\frac{1}{2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}}.\]
In this case, the solution of
\[r+\frac{r}{(1-r^{k})^{\frac{2\alpha}{k}}}\exp\left(\frac{4(1-\alpha)r^{k}}{k( 1-r^{k})}\right)=\frac{1}{2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}}.\]
gives the Bohr radius denoted by \(r_{5}\). Consider the function \(f_{5}:[0,1)\rightarrow\mathbb{R},\) defined by
\[f_{5}(r)=2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}r+\frac{2^{\frac{2 \alpha}{k}}r}{(1-r^{k})^{\frac{2\alpha}{k}}}\exp\left(\frac{2(1-\alpha)(1+r^{k })}{k(1-r^{k})}\right)-1.\]
It is observed that \(f_{5}(0)=-1,\)\(\lim_{r\to 1}f_{5}(r)=\infty\) and \(f_{5}^{\prime}(r)>0,\ \forall r\in(0,1).\) In accordance with the intermediate value theorem, \(r_{5}\) is the unique root of \(f_{5}\). The function defined in (6) gives the sharpness of \(r_{5}\).
The next result gives a Bohr-type inequality for the functions in the class \(S^{k*}_{LH}(\alpha)\) when the modulus of \(zf_{z}\) is added to the power series expansion.
**Theorem 10**.: _Every function \(f(z)=zh(z)\overline{g(z)}\) in the class \(S^{k*}_{LH}(\alpha)\) with \(\omega(0)=0\) satisfies, for any real \(\theta\), the inequality_
\[|zf_{z}(z)|+|z|\exp\left(\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b_{nk}||z|^{nk} \right)\leq d(f(0),\partial f(\mathbb{D}))\]
_for \(|z|=r\leq r_{6}\), where \(r_{6}\) is the unique root in \((0,1)\) of the equation_
\[\frac{2^{\frac{2\alpha}{k}}r(2-(1+2\alpha)r^{k}+r^{2k})}{(1-r^{k})^{\frac{2( \alpha+k)}{k}}}\exp\left(\frac{2(1-\alpha)(r^{k}+1)}{k(1-r^{k})}\right)=1.\]
_Here, \(r_{6}\) is the best possible._
Proof.: Let \(f(z)=zh(z)\overline{g(z)}\in S^{k*}_{LH}(\alpha).\) Then from Lemma 3 and Theorem 4 with \(|z|=r\) and for any real \(\theta,\) we have
\[|zf_{z}(z)|+|z|\exp\left(\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b_{nk}||z|^{nk}\right)\] \[\leq\frac{r(1+(1-2\alpha)r^{k})}{(1-r^{k})^{\frac{2(\alpha+k)}{k}}}\exp\left(\frac{4(1-\alpha)r^{k}}{k(1-r^{k})}\right)+\frac{r}{(1-r^{k})^{\frac{2\alpha}{k}}}\exp\left(\frac{4(1-\alpha)r^{k}}{k(1-r^{k})}\right)\] \[\leq d(f(0),\partial f(\mathbb{D}))\]
if and only if
\[\frac{r(2-(1+2\alpha)r^{k}+r^{2k})}{(1-r^{k})^{\frac{2(\alpha+k)}{k}}}\exp\left(\frac{4(1-\alpha)r^{k}}{k(1-r^{k})}\right)\leq\frac{1}{2^{\frac{2\alpha}{k}}e^{\frac{2(1-\alpha)}{k}}}\]
That is,
\[\frac{2^{\frac{2\alpha}{k}}r(2-(1+2\alpha)r^{k}+r^{2k})}{(1-r^{k})^{\frac{2( \alpha+k)}{k}}}\exp\left(\frac{2(1-\alpha)(r^{k}+1)}{k(1-r^{k})}\right)\leq 1.\]
For this case \(r_{6}\) is the Bohr radius, where \(r_{6}\) is the unique root in \((0,1)\) of the equation
\[\frac{2^{\frac{2\alpha}{k}}r(2-(1+2\alpha)r^{k}+r^{2k})}{(1-r^{k})^{\frac{2( \alpha+k)}{k}}}\exp\left(\frac{2(1-\alpha)(r^{k}+1)}{k(1-r^{k})}\right)=1.\]
To show the uniqueness of \(r_{6},\) consider the function \(f_{6}:[0,1)\to\mathbb{R},\) defined by
\[f_{6}(r)=\frac{2^{\frac{2\alpha}{k}}r(2-(1+2\alpha)r^{k}+r^{2k})}{(1-r^{k})^{ \frac{2(\alpha+k)}{k}}}\exp\left(\frac{2(1-\alpha)(r^{k}+1)}{k(1-r^{k})} \right)-1.\]
It is clear that \(f_{6}(0)=-1,\)\(\lim_{r\to 1}f_{6}(r)=\infty\) and \(f_{6}^{\prime}(r)>0,\ \forall r\in(0,1)\); hence \(r_{6}\) is the unique root of \(f_{6}\) according to the intermediate value theorem. To show the sharpness of \(r_{6},\) consider the function of the form (6) and \(r=r_{6}\):
\[|zf_{z}(z)|+|z|\exp\left(\sum_{n=1}^{\infty}|a_{nk}+e^{i\theta}b_{nk}||z|^{nk}\right)\] \[=\frac{r_{6}(2-(1+2\alpha)r_{6}^{k}+r_{6}^{2k})}{(1-r_{6}^{k})^{\frac{2(\alpha+k)}{k}}}\exp\left(\frac{4(1-\alpha)r_{6}^{k}}{k(1-r_{6}^{k})}\right)=d(f(0),\partial f(\mathbb{D})).\]
Therefore, \(r_{6}\) is the best possible.
### Numerical and graphical illustration
In this subsection, the variations of the Bohr radius and the improved Bohr radius with respect to the parameters \(\alpha\) and \(k\) are shown numerically as well as graphically. The improved Bohr radii for the functions \(f\), \(h\) and \(g\) have been computed and compared with the Bohr radius obtained in [16, Theorem 3.1], where \(f(z)=zh(z)\overline{g(z)}\in S^{k*}_{LH}(\alpha).\) In Table 1, corresponding to each \(\alpha,\) the numerical results presented in the second row represent the Bohr radius obtained in [16, Theorem 3.1, (i)], whereas the first row represents the improved Bohr radius obtained in Theorem 6 for different values of \(k\). The graphical representation of Table 1 is shown in Figure 3 for \((\alpha,k)=(0,1),\)\((0.2,1),\)\((0.4,1),\)\((0.6,1),\)\((0,2),\)\((0,3),\) and \((0,4)\), where the dashed and solid curves are, respectively, the graphs of the functions whose roots are the Bohr and improved Bohr radii. In a similar manner, the Bohr radii in [16, Theorem 3.1 (ii) and (iii)] and the improved Bohr radii obtained in Theorem 7 for \(h\) and \(g\) are given in Table 2 and Table 3, respectively. The corresponding graphical illustrations are presented in Figure 4 and Figure 5. The numerical and graphical evidence for the improved Bohr inequalities given in Theorem 8, Theorem 9 and Theorem 10 appears, respectively, in Table 4 and Figure 6, Table 5 and Figure 7, and Table 6 and Figure 8. All these calculations and figures are carried out using MATLAB R2013a.
In view of the above tables and graphs, we have the following observations. The improved Bohr radii obtained for \(f\), \(h\) and \(g\) are smaller than the corresponding Bohr radii. It is also observed that the Bohr radius increases as the number of folds \(k\) increases. In this regard, another important point is that the functions that provide the Bohr radius and the improved Bohr radius are strictly increasing in \((0,1)\) and tend to \(\infty\) as \(r\to 1\); that is, the line \(r=1\) is a vertical asymptote of these functions.
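For readers without MATLAB, the same radii can be reproduced in a few lines of Python. A minimal sketch (ours; `bohr_r6` is a hypothetical helper) that locates, for example, the radius \(r_{6}\) of Theorem 10 by bisection on the function \(f_{6}\) from its proof:

```python
import math

def f6(r, alpha, k):
    # the function f_6 from the proof of Theorem 10
    return (2 ** (2 * alpha / k) * r * (2 - (1 + 2 * alpha) * r**k + r ** (2 * k))
            / (1 - r**k) ** (2 * (alpha + k) / k)
            * math.exp(2 * (1 - alpha) * (r**k + 1) / (k * (1 - r**k))) - 1)

def bohr_r6(alpha, k):
    # f_6(0) = -1, f_6 increases to infinity as r -> 1, so bisection applies
    lo, hi = 1e-12, 1 - 1e-12
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f6(mid, alpha, k) < 0 else (lo, mid)
    return (lo + hi) / 2

for alpha in (0, 0.2, 0.4):
    for k in (1, 2, 3):
        print(f"alpha={alpha}, k={k}: r_6 ~ {bohr_r6(alpha, k):.6f}")
```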
## 5 Pre-Schwarzian and Schwarzian derivative
For an analytic univalent function \(\kappa:\mathbb{D}\rightarrow\mathbb{C}\), the classical pre-Schwarzian and Schwarzian derivatives are respectively defined as (cf. [31])
\[P_{\kappa}(z)=(\log(\kappa^{\prime}(z)))^{\prime}=\frac{\kappa^{\prime\prime}( z)}{\kappa^{\prime}(z)}\]
and
\[S_{\kappa}(z)=(P_{\kappa}(z))^{\prime}-\frac{1}{2}(P_{\kappa}(z))^{2}=\left( \frac{\kappa^{\prime\prime}(z)}{\kappa^{\prime}(z)}\right)^{\prime}-\frac{1}{ 2}\left(\frac{\kappa^{\prime\prime}(z)}{\kappa^{\prime}(z)}\right)^{2}.\]
For the case of complex-valued harmonic mappings, this theory was first introduced by Chuaqui et al. in [22] and further investigated in [23, 24, 36]. Mao and Ponnusamy [43] studied the theory of the Schwarzian derivative for non-vanishing logharmonic mappings and gave various conditions for the Schwarzian derivative to be analytic. Liu and Ponnusamy [42] modified the definitions of the pre-Schwarzian and Schwarzian derivatives given in [43] and concluded that the new definitions preserve the standard properties of the classical Schwarzian derivative.
In this section, we introduce the pre-Schwarzian and Schwarzian derivatives for univalent logharmonic mappings \(f\) of the form \(f(z)=zh(z)\overline{g(z)}\) and show that the definitions preserve some classical properties as in the analytic univalent case.
We define the pre-Schwarzian and Schwarzian derivatives with the assistance of the Jacobian. The pre-Schwarzian derivative of a univalent logharmonic mapping of the form \(f(z)=zh(z)\overline{g(z)}\) is defined as:
\[P_{f}(z)=\left(\log J_{f}(z)\right)_{z} =\frac{\partial}{\partial z}\left(\log f_{z}(z)+\log\overline{f_{z} (z)}+\log(1-|\omega(z)|^{2})\right)\] \[=\frac{2h^{\prime}(z)+zh^{\prime\prime}(z)}{h(z)+zh^{\prime}(z)}+ \frac{g^{\prime}(z)}{g(z)}-\frac{\omega^{\prime}(z)\overline{\omega(z)}}{1-| \omega(z)|^{2}},\qquad z\in\mathbb{D}\]
and the Schwarzian derivative of \(f(z)=zh(z)\overline{g(z)}\) is
\[S_{f}(z) =\left(P_{f}(z)\right)^{\prime}-\frac{1}{2}\left(P_{f}(z)\right)^{2}\] \[=\left(\frac{2h^{\prime}(z)+zh^{\prime\prime}(z)}{h(z)+zh^{\prime}(z)}+\frac{g^{\prime}(z)}{g(z)}-\frac{\omega^{\prime}(z)\overline{\omega(z)}}{1-|\omega(z)|^{2}}\right)^{\prime}-\frac{1}{2}\left(\frac{2h^{\prime}(z)+zh^{\prime\prime}(z)}{h(z)+zh^{\prime}(z)}+\frac{g^{\prime}(z)}{g(z)}-\frac{\omega^{\prime}(z)\overline{\omega(z)}}{1-|\omega(z)|^{2}}\right)^{2}\] \[=\left(\frac{2h^{\prime}(z)+zh^{\prime\prime}(z)}{h(z)+zh^{\prime}(z)}+\frac{g^{\prime}(z)}{g(z)}\right)^{\prime}-\frac{1}{2}\left(\frac{2h^{\prime}(z)+zh^{\prime\prime}(z)}{h(z)+zh^{\prime}(z)}+\frac{g^{\prime}(z)}{g(z)}\right)^{2}-\frac{\omega^{\prime\prime}(z)\overline{\omega(z)}}{1-|\omega(z)|^{2}}-\frac{3}{2}\left(\frac{\omega^{\prime}(z)\overline{\omega(z)}}{1-|\omega(z)|^{2}}\right)^{2}\] \[\qquad\qquad+\left(\frac{2h^{\prime}(z)+zh^{\prime\prime}(z)}{h(z)+zh^{\prime}(z)}+\frac{g^{\prime}(z)}{g(z)}\right)\left(\frac{\omega^{\prime}(z)\overline{\omega(z)}}{1-|\omega(z)|^{2}}\right),\qquad z\in\mathbb{D}.\]
The pre-Schwarzian and Schwarzian derivatives of a logharmonic mapping \(f\) obey the same chain rule as in the analytic case. The following theorem gives the details.
**Theorem 11**.: _Let \(f\) be a logharmonic mapping of the form \(f(z)=zh(z)\overline{g(z)}\) and let \(\phi\) be a univalent analytic function. Then_
* \(P_{f\circ\phi}(z)=\left(P_{f}\circ\phi(z)\right)\cdot\phi^{\prime}(z)+P_{\phi}(z)\)__
* \(S_{f\circ\phi}(z)=\left(S_{f}\circ\phi(z)\right)\cdot\left(\phi^{\prime}(z) \right)^{2}+S_{\phi}(z)\)__
Proof.: _(i) From the definition of the Jacobian of a logharmonic mapping, we have_
\[J_{f\circ\phi}(z)=|(f\circ\phi)_{z}(z)|^{2}\left(1-|(\omega\circ\phi)(z)|^{2} \right). \tag{25}\]
_The logarithmic derivative of (25) with respect to \(z\) gives_
\[\frac{\partial}{\partial z}\left(\log\left(J_{f\circ\phi}(z)\right)\right) =\frac{\partial}{\partial z}\left(\log(f\circ\phi)_{z}(z)+\log(\overline{(f\circ\phi)}_{z})+\log(1-|(\omega\circ\phi)(z)|^{2})\right)\] \[=\frac{\partial}{\partial z}\left(\log\phi^{\prime}(z)+\log\left(h(\phi(z))\overline{g(\phi(z))}+\phi(z)h^{\prime}(\phi(z))\overline{g(\phi(z))}\right)\right)\] \[\quad+\frac{\partial}{\partial z}\left(\log\overline{\phi^{\prime}(z)}+\log\left(\overline{h(\phi(z))}g(\phi(z))+\overline{\phi(z)h^{\prime}(\phi(z))}g(\phi(z))\right)\right)\] \[\qquad\qquad+\frac{\partial}{\partial z}\log\left(1-|\omega(\phi(z))|^{2}\right)\] \[=\frac{\phi^{\prime\prime}(z)}{\phi^{\prime}(z)}+\phi^{\prime}(z)\left(\frac{2h^{\prime}(\phi(z))+\phi(z)h^{\prime\prime}(\phi(z))}{h(\phi(z))+\phi(z)h^{\prime}(\phi(z))}+\frac{g^{\prime}(\phi(z))}{g(\phi(z))}\right)-\frac{\phi^{\prime}(z)\omega^{\prime}(\phi(z))\overline{\omega(\phi(z))}}{1-|\omega(\phi(z))|^{2}}\]
\[=\left(P_{f}\circ\phi(z)\right)\cdot\phi^{\prime}(z)+P_{\phi}(z).\]
_(ii) From the definition of Schwarzian derivative, we have_
\[S_{f\circ\phi}(z) = \left(P_{f\circ\phi}(z)\right)^{\prime}-\frac{1}{2}\left(P_{f\circ\phi}(z)\right)^{2}\] \[= \left(\left(P_{f}\circ\phi(z)\right)\cdot\phi^{\prime}(z)+P_{\phi}(z)\right)^{\prime}-\frac{1}{2}\left(\left(P_{f}\circ\phi(z)\right)\cdot\phi^{\prime}(z)+P_{\phi}(z)\right)^{2}\] \[= \left(\left(P_{f}(\phi(z))\right)^{\prime}-\frac{1}{2}\left(P_{f}(\phi(z))\right)^{2}\right)(\phi^{\prime}(z))^{2}+\left(\frac{\phi^{\prime\prime}(z)}{\phi^{\prime}(z)}\right)^{\prime}-\frac{1}{2}\left(\frac{\phi^{\prime\prime}(z)}{\phi^{\prime}(z)}\right)^{2}\] \[= \left(S_{f}\circ\phi(z)\right)\cdot(\phi^{\prime}(z))^{2}+S_{\phi}(z).\]
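When \(\omega\equiv 0\) and \(g\) is constant, \(f(z)=zh(z)\) is analytic and \(P_{f}\), \(S_{f}\) reduce to the classical derivatives, so the chain rule of Theorem 11 can be sanity-checked symbolically in that degenerate case. A small sketch (ours, using `sympy`, with the Koebe function and a Möbius map as an assumed test pair):

```python
import sympy as sp

z = sp.symbols('z')

def schwarzian(F):
    # classical Schwarzian derivative S_F = (F''/F')' - (F''/F')^2 / 2
    P = sp.diff(F, z, 2) / sp.diff(F, z)
    return sp.simplify(sp.diff(P, z) - P ** 2 / 2)

F = z / (1 - z) ** 2      # Koebe function (the case h(z) = 1/(1-z)^2, g = 1)
phi = z / (1 + z)         # a univalent analytic (Moebius) map of the disk

lhs = schwarzian(F.subs(z, phi))
rhs = schwarzian(F).subs(z, phi) * sp.diff(phi, z) ** 2 + schwarzian(phi)
assert sp.simplify(lhs - rhs) == 0
print("chain rule (ii) verified for the analytic test pair")
```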
The next theorem is a direct consequence of Theorem \(5.1\) of [42].
**Theorem 12**.: _Let \(f\) be a univalent logharmonic mapping of the form \(f(z)=zh(z)\overline{g(z)}\) with dilatation \(\omega\). Then \(P_{f}\) is harmonic if and only if \(\omega\) is constant._
## Declarations
**Conflict of interest.** The authors have no conflict of interest.
**Author contributions.** The authors contributed equally to this article.
**Funding.** The first author is supported by the DST-INSPIRE Fellowship, Government of India, INSPIRE code IF190766, and the second author acknowledges the support from the OSHEC of the OURIIP Seed Fund of the Government of Odisha in India, sanction order No. 1040/69/OSHEC.
|
2303.04113 | Ehrhart positivity for a certain class of panhandle matroids | We give a combinatorial formula for the Ehrhart coefficients of a certain class of weighted multi-hypersimplices. In a special case, where these polytopes coincide with the base polytope of the panhandle matroid $\textrm{Pan}_{k,n-2,n}$, we show that the Ehrhart coefficients are positive. | Daniel McGinnis | 2023-03-07T18:25:27Z | http://arxiv.org/abs/2303.04113v2 |
###### Abstract.
We give a combinatorial formula for the Ehrhart coefficients of a certain class of weighted multi-hypersimplices. In a special case, where these polytopes coincide with the base polytope of the panhandle matroid \(\operatorname{Pan}_{k,n-2,n}\), we show that the Ehrhart coefficients are positive.
The author was supported by NSF grant DMS-1839918 (RTG).
## 1. Introduction
The _Ehrhart polynomial_ (introduced by Ehrhart in [1]) of a polytope \(P\subset\mathbb{R}^{n}\) with integral vertices is an invariant of \(P\) which counts the number of integer points lying inside integer dilates of \(P\). Specifically, the Ehrhart function of \(P\), denoted by \(\operatorname{ehr}(P,t)\), takes as input a positive integer \(t\) and outputs the quantity
\[\operatorname{ehr}(P,t)=\#\left(tP\cap\mathbb{Z}^{n}\right),\]
namely, the number of integer points lying in \(tP\). Ehrhart proved that this function is actually a polynomial in \(t\) whose degree is the dimension of \(P\). Therefore, for \(d=\dim(P)\), we may write
\[\operatorname{ehr}(P,t)=a_{d}t^{d}+a_{d-1}t^{d-1}+\cdots+a_{0}.\]
An important feature of the Ehrhart polynomial is that \(a_{d}=\operatorname{Vol}(P)\) and \(a_{d-1}=\frac{1}{2}\operatorname{Vol}(\partial P)\) (see [1, 2] for proofs and more information). It is also known that \(a_{0}=1\), but the remaining coefficients can be negative in general. Thus, an interesting problem which has received a significant amount of attention is to determine families of polytopes having the property that their Ehrhart polynomials have positive coefficients. These polytopes are then called _Ehrhart positive_. Additionally, it is of interest to determine a combinatorial or geometric meaning for the Ehrhart coefficients of Ehrhart positive polytopes. See [14] for a survey on Ehrhart positivity.
In this paper we extend further upon the work of [13], where it is shown that polytopes of the form
\[\mathcal{R}_{k,\mathbf{c}}=\left\{x\in[0,c_{1}]\times\cdots\times[0,c_{n}]\mid \sum_{i=1}^{n}x_{i}=k\right\}\]
for positive integers \(c_{1},\ldots,c_{n}\) and \(k\) are Ehrhart positive, and a combinatorial formula is given for the coefficients as well. Note that when \(c_{1}=\cdots=c_{n}=1\), we recover the hypersimplex \(\Delta_{k,n}\), so the work of [13] extends the results of [12], in which the Ehrhart positivity of hypersimplices is proven using a generating function approach. A combinatorial proof of the Ehrhart positivity of hypersimplices is given in [10] which relies only on an inclusion-exclusion argument. In this paper we attempt to further our current understanding of the Ehrhart coefficients of the hypersimplex by providing a more explicit combinatorial interpretation for these values.
The polytopes described above are examples of _alcoved polytopes_, introduced in [10]; more specifically, they are contained in the class of polytopes called _weighted multi-hypersimplices_ defined in the same paper.
One main result of this paper is to provide a combinatorial description for the Ehrhart coefficients of the weighted multi-hypersimplices of the following form:
\[\left\{(x_{1},\ldots,x_{n})\mid 0\leq x_{i}\leq c_{i}\text{ for all }1\leq i\leq n -2,\,0\leq x_{n-1}+x_{n}\leq 1\text{ and }\sum_{i=1}^{n}x_{i}=k\right\}\]
for positive integers \(c_{1},\ldots,c_{n-2}\) and \(k\).
In the case that \(\mathbf{c}=(1,\ldots,1)\), this polytope coincides with the base polytope associated to the panhandle matroid \(\operatorname{Pan}_{k,n-2,n}\), defined in [11], and we are able to use our derived combinatorial formula to show that in this case, the polytope is Ehrhart positive. Although a promising approach to proving Ehrhart positivity for the general panhandle matroid \(\operatorname{Pan}_{k,s,n}\) via a solely enumerative combinatorial conjecture is outlined in [11], our method of proof takes a substantially different route and follows more along the lines of the reasoning of [10]. We hope that the ideas presented here will aid future research toward proving Ehrhart positivity for panhandle matroids and other classes of weighted multi-hypersimplices.
We note that the panhandle matroids are certain lattice path matroids and hence, they lie within the class of _positroids_, introduced in [14]. It is conjectured in [13] that positroids are Ehrhart positive (a matroid is said to be Ehrhart positive if its associated base polytope is Ehrhart positive).
**Conjecture 1.1** (Conjecture 6.3 in [13]): _Positroids are Ehrhart positive._
Since we prove that a certain class of panhandle matroids are Ehrhart positive, our result supports Conjecture 1.1.
It was originally conjectured in [15] that all matroids are Ehrhart positive; moreover, the even stronger conjecture that the larger class of _generalized permutahedra_ is Ehrhart positive was stated in [12]. However, both of these conjectures were shown to be false in [11], where examples of matroids with negative Ehrhart coefficients, with rank between 3 and corank 3, are provided. On the other hand, it is shown in [13] that matroids of rank 2 are Ehrhart positive, and it is noted in the same paper that all matroids of rank 2 are in addition positroids.
Throughout the progression of ideas in [12], [13], [11], [13], [14] and [15], it became clearer that the Ehrhart positivity of these matroid polytopes requires the introduction of complicated combinatorial structures whose enumeration yields a description of the Ehrhart coefficients, along with a proof of this positivity. We note and emphasize that in [15] such a combinatorial structure is particularly involved; moreover, in [11] the conjectured structure is challenging to understand. Our main new contribution is the description of a combinatorial gadget whose enumeration yields an arguably more elegant description of the coefficients of the Ehrhart polynomial of the hypersimplex, and we show how this allows extensions to other weighted multi-hypersimplices. It will become apparent to the reader that the search for a more general structure that covers more (if not all) weighted multi-hypersimplices would demand a deep and possibly cumbersome combinatorial insight.
It is worth mentioning that the study of the \(h^{*}\)-polynomial for polytopes related to those discussed above is an intriguing and active area of research, although we obtain no new results in this direction. For instance, the \(h^{*}\)-polynomial of hypersimplices is shown to have very interesting combinatorial properties in [10] and also in [1, 12] using a different approach. The methods of [13] are also used in [15] to find a combinatorial interpretation for the coefficients of the \(h^{*}\)-polynomial of \(\mathcal{R}_{k,\mathbf{c}}\). Further research on the \(h^{*}\)-polynomial for alcoved polytopes can also be found in [10][10] for instance.
## 2. The Ehrhart coefficients for hypersimplices revisited
Recall that the hypersimplex \(\Delta_{k,n}\) is the polytope given by
\[\Delta_{k,n}=\left\{x\in[0,1]^{n}\mid\sum_{i=1}^{n}x_{i}=k\right\}.\]
To write the formula for the Ehrhart polynomial of \(\Delta_{k,n}\), we first set up the following notation. Let \(P_{a,b}^{n}=\sum_{a\leq i_{1}<\cdots<i_{n}\leq b}i_{1}\cdots i_{n}\), and let \(\genfrac{[}{]}{0.0pt}{}{n}{m}\) denote the number of permutations of \([n]\) with \(m\) cycles, known as the unsigned Stirling numbers of the first kind. Recall that \(P_{1,n-1}^{n-m}=\genfrac{[}{]}{0.0pt}{}{n}{m}\).
The Ehrhart polynomial for the hypersimplex \(\Delta_{k,n}\) is given by
\[\operatorname{ehr}(\Delta_{k,n},t) =\sum_{i=0}^{k-1}(-1)^{i}\binom{n}{i}\binom{(k-i)t-i+n-1}{n-1} \tag{1}\] \[=\frac{1}{(n-1)!}\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}(-1)^{i}\binom{n}{i}(k-i)^{m}P_{-i+1,n-1-i}^{n-1-m}. \tag{2}\]
This is proven for instance in [11].
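As a quick sanity check of formula (1) (our sketch, not from the paper; the helper names are hypothetical), one can compare it against a brute-force count of the lattice points in \(t\Delta_{k,n}\) for small parameters. The binomial coefficient is implemented directly so that negative top arguments are handled as in the polynomial identity.

```python
from itertools import product
from math import comb, factorial, prod

def gbinom(a, b):
    # binomial coefficient with an arbitrary integer top argument
    return prod(a - j for j in range(b)) // factorial(b)

def ehr_count(k, n, t):
    # brute force: lattice points x with 0 <= x_i <= t and sum x_i = k*t
    return sum(1 for x in product(range(t + 1), repeat=n) if sum(x) == k * t)

def ehr_formula(k, n, t):
    # formula (1)
    return sum((-1) ** i * comb(n, i) * gbinom((k - i) * t - i + n - 1, n - 1)
               for i in range(k))

for n in range(2, 6):
    for k in range(1, n):
        for t in range(1, 4):
            assert ehr_count(k, n, t) == ehr_formula(k, n, t)
print("formula (1) agrees with brute-force counts")
```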
For a permutation \(\sigma\), let \(C(\sigma)\) be the set of cycles in the cycle decomposition of \(\sigma\). Additionally, if we write a permutation \(p=[p_{1},\ldots,p_{n}]\) in one-line notation, the _descent set of \(p\)_, \(\operatorname{Des}(p)\), is the set of indices \(1\leq i\leq n-1\) such that \(p_{i}>p_{i+1}\). The main combinatorial object of this paper is introduced in Definition 2.1 below.
**Definition 2.1**.: A cycle-ordered, weighted permutation is a triple \((\sigma,p,w)\) consisting of a permutation \(\sigma\), a permutation \(p\) of \([|C(\sigma)|]\) for which \(|C(\sigma)|\mapsto 1\), and a weight function \(w:C(\sigma)\to\mathbb{N}_{0}\).
The _total weight_ of \((\sigma,p,w)\), which we denote by \(w(\sigma)\), is the sum of the weights of each cycle, namely,
\[w(\sigma)=\sum_{\mathfrak{c}\in C(\sigma)}w(\mathfrak{c}).\]
Additionally, we say that \((\sigma,p,w)\) is _of type \((n,m,k)\)_ if \(\sigma\) is a permutation of \([n]\) with \(m\) cycles such that \(w(\sigma)+|\operatorname{Des}(p)|=k\).
We can think of \(p\) as ordering the cycles of \(\sigma\) according to the smallest elements of the cycles. For example, if \(\sigma=(1\ 3)(2\ 6)(4\ 5)\) and \(p=[3\ 2\ 1]\), then the order of the cycles of \(\sigma\) according to \(p\) is \((4\ 5)(2\ 6)(1\ 3)\). Essentially, the order of the cycles according to their smallest elements matches the order of the elements of \(p\) when \(p\) is written in one-line notation. The reason why we add the condition that \(|C(\sigma)|\mapsto 1\) under \(p\) is simply that these will be the only orders of the cycles that will be relevant to us throughout the paper. We also note that because \(|C(\sigma)|\mapsto 1\), the cycle of \(\sigma\) containing \(1\) will always come last in the corresponding ordering of its cycles.
Weighted permutations (without an ordering on the cycles) were defined in [14], providing another way to view previously defined combinatorial objects from [11], which are enumerated by the _weighted Lah numbers_.
Here, we add an ordering to the cycles of weighted permutations to provide a more explicit combinatorial description for the coefficients of the Ehrhart polynomial of certain polytopes, including hypersimplices.
Let \(\mathbf{c}=(c_{1},\ldots,c_{n})\) be a tuple of positive integers. A cycle-ordered, weighted permutation \((\sigma,p,w)\) of \([n]\) is said to be \(\mathbf{c}\)_-compatible_ if
\[w(\mathfrak{c})<\sum_{i\in\mathfrak{c}}c_{i}\text{ for all }\mathfrak{c}\in C( \sigma).\]
We note that the notion of \(\mathbf{c}\)-compatibility was initially defined in [13].
**Theorem 2.2**: _The coefficient of \(t^{m}\) in \((n-1)!\operatorname{ehr}(\Delta_{k,n},t)\) is the number of \((1,\ldots,1)\)-compatible cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m+1,k)\)._
_Proof._ We have that
\[(n-1)!\operatorname{ehr}(\Delta_{k,n},t)\] \[=\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}(-1)^{i}\binom{n}{i}(k-i)^{ m}P_{-i+1,n-1-i}^{n-1-m}\] \[=\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}(-1)^{i}\binom{n}{i}(k-i)^{ m}\sum_{j=0}^{n-1-m}(-1)^{j}P_{1,i-1}^{j}P_{1,n-1-i}^{n-1-m-j}\] \[=\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}\sum_{j=0}^{n-1-m}(-1)^{i- j}\binom{n}{i}(k-i)^{m}\genfrac{[}{]}{0.0pt}{}{i}{i-j}\genfrac{[}{]}{0.0pt}{}{n-i}{m+1-i +j}\] \[=\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}\sum_{j=0}^{n-1-m}\sum_{ \begin{subarray}{c}A\subset[n]\\ |A|=i\end{subarray}}(-1)^{i-j}(k-i)^{m}\genfrac{[}{]}{0.0pt}{}{i}{i-j}\genfrac{ [}{]}{0.0pt}{}{n-i}{m+1-i+j}\]
We will show that for a fixed set \(A\subset[n]\), the quantity \((k-i)^{m}\genfrac{[}{]}{0.0pt}{}{i}{i-j}\genfrac{[}{]}{0.0pt}{}{n-i}{m+1-i+j}\) is the number of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m+1,k)\) with the following properties. Here, \(\ell(\mathfrak{c})=\sum_{i\in\mathfrak{c}}1\) denotes the length of \(\mathfrak{c}\).
* \(i-j\) cycles consist only of elements in \(A\).
* The remaining \(m+1-(i-j)\) cycles consist only of elements from \([n]\setminus A\).
* For each cycle \(\mathfrak{c}\) of \(\sigma\) consisting of elements of \(A\), \(w(\mathfrak{c})\geq\ell(\mathfrak{c})\).
First we show this for the case that \(i=0\), i.e., we demonstrate that \(k^{m}\genfrac{[}{]}{0.0pt}{}{n}{m+1}\) is the number of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m+1,k)\).
Now, \(\genfrac{[}{]}{0.0pt}{}{n}{m+1}\) is the number of permutations of \([n]\) with \(m+1\) cycles, and \(k^{m}\) is the number of functions \(f\) that assign each of the cycles that do not contain \(1\) a value between \(0\) and \(k-1\) and assign the cycle containing \(1\) the value \(k\). Let \(v_{1}<\cdots<v_{q}\) be the distinct values that were assigned to the cycles. Order the cycles that were assigned \(v_{1}\) in an increasing manner according to their smallest element. Order the cycles that were assigned \(v_{2}\) in the same way, and place them after the cycles that were assigned \(v_{1}\). We continue in this way to obtain an ordering \(\mathfrak{c}_{1},\ldots,\mathfrak{c}_{m+1}\) of the cycles of \(\sigma\) (recall that \(1\in\mathfrak{c}_{m+1}\)). Let \(p\) be the permutation of \([m+1]\) corresponding to this ordering. We define the weight \(w(\mathfrak{c}_{1})\) of \(\mathfrak{c}_{1}\) to be \(v_{1}\), and the weight of \(\mathfrak{c}_{i}\) for \(i>1\) is given by
\[w(\mathfrak{c}_{i})=\begin{cases}f(\mathfrak{c}_{i})-f(\mathfrak{c}_{i-1})-1& \text{if }i-1\in\operatorname{Des}(p),\\ f(\mathfrak{c}_{i})-f(\mathfrak{c}_{i-1})&\text{otherwise}.\end{cases}\]
We have that \(w(\sigma)=\sum_{i=1}^{m+1}w(\mathfrak{c}_{i})=f(\mathfrak{c}_{m+1})-|\operatorname{Des}(p)|=k-|\operatorname{Des}(p)|\). Thus \((\sigma,p,w)\) is of type \((n,m+1,k)\). Furthermore, any cycle-ordered, weighted permutation \((\sigma,p,w)\) of type \((n,m+1,k)\) with its cycles and ordering given by \(\mathfrak{c}_{1},\ldots,\mathfrak{c}_{m+1}\) can be obtained in this way from the function \(f\) defined by \(f(\mathfrak{c}_{i})=|\operatorname{Des}(p)\cap\{1,\ldots,i-1\}|+\sum_{j\leq i}w(\mathfrak{c}_{j})\).
Note that for a given set \(A\) of size \(i\), \(\genfrac{[}{]}{0.0pt}{}{i}{i-j}\genfrac{[}{]}{0.0pt}{}{n-i}{m+1-i+j}\) is the number of permutations of \([n]\) with \(m+1\) cycles where \(i-j\) cycles consist only of elements from \(A\), and the remaining cycles consist only of elements from \([n]\setminus A\). Thus, by a similar reasoning to the arguments above, we have that \((k-i)^{m}\genfrac{[}{]}{0.0pt}{}{i}{i-j}\genfrac{[}{]}{0.0pt}{}{n-i}{m+1-i+j}\) is the number cycle-ordered, weighted permutations of type \((n,m+1,k-i)\), where \(i-j\) cycles consist only of elements from \(A\), and the remaining cycles consist only of elements from \([n]\setminus A\). Now, for each cycle \(\mathfrak{c}\) of \(\sigma\) that consists only of elements in \(A\), we define \(w^{\prime}(\mathfrak{c})=w(\mathfrak{c})+\ell(\mathfrak{c})\). Otherwise, we define \(w^{\prime}(\mathfrak{c})=w(\mathfrak{c})\). The resulting cycle-ordered, weighted permutation \((\sigma,p,w^{\prime})\) satisfies the bullet points above.
Thus, we have demonstrated that the quantity \((k-i)^{m}\genfrac{[}{]}{0.0pt}{}{i}{i-j}\genfrac{[}{]}{0.0pt}{}{n-i}{m+1-i+j}\) is the number of cycle-ordered, weighted permutations that satisfy the above bullet points.
For a cycle-ordered, weighted permutation \((\sigma,p,w)\) where \(\sigma=\mathfrak{c}_{1}\cdots\mathfrak{c}_{m+1}\), let \(I\) be the set of indices \(j\) for which \(w(\mathfrak{c}_{j})\geq\ell(\mathfrak{c}_{j})\). For each \(J\subset I\), \((\sigma,p,w)\) contributes \((-1)^{|J|}\) to the sum for the coefficient of \(t^{m}\) above in the term where \(A=\bigcup_{j\in J}\mathfrak{c}_{j}\) and \(i=|A|\) (we are slightly abusing notation by associating \(\mathfrak{c}_{j}\) with the set of its elements).
Therefore, the total contribution of \((\sigma,p,w)\) to the sum is \(\sum_{J\subset I}(-1)^{|J|}\), which is \(0\) if \(|I|\geq 1\) and \(1\) if \(I=\emptyset\). This completes the proof of the theorem.
Let \(A(n,k)\) denote the _Eulerian numbers_, namely, the number of permutations of \([n]\) with \(k\) descents, and let \(W(n,m+1,\ell)\) denote the number of \((1,\ldots,1)\)-compatible weighted permutations \((\sigma,w)\) of \([n]\) (here there is no ordering \(p\) of the cycles) with \(m+1\) cycles and total weight \(\ell\). The numbers \(W(n,m+1,\ell)\) are precisely the weighted Lah numbers defined in [10]. It is shown in [10] that the coefficient of \(t^{m}\) in \((n-1)!\operatorname{ehr}(\Delta_{k,n},t)\) is given by
\[\sum_{\ell=0}^{k-1}W(n,m+1,\ell)A(m,k-1-\ell).\]
This result is proven in part by the use of _Worpitzky's identity_ (see [1] for instance) to further break up and rewrite equation (2). We see that this already implies Theorem 2.2. Indeed, \(W(n,m+1,\ell)A(m,k-1-\ell)\) is the number of cycle-ordered, weighted permutations \((\sigma,p,w)\) with total weight \(\ell\) where the permutation of \(\{2,\ldots,m+1\}\) given in one-line notation by \([p_{1},\ldots,p_{m}]\) has \(k-1-\ell\) descents. This means that \(p=[p_{1},\ldots,p_{m+1}]\) has \(k-\ell\) descents since \(p_{m+1}=1\). Thus, \((\sigma,p,w)\) is of type \((n,m+1,k)\) with total weight \(\ell\). Since we are summing up over all \(\ell\), we obtain the result of Theorem 2.2.
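Theorem 2.2 can also be checked directly for small parameters by exhaustive enumeration. The following sketch (ours; all helper names are hypothetical) enumerates the \((1,\ldots,1)\)-compatible cycle-ordered, weighted permutations of type \((n,m+1,k)\) and compares their number against the coefficient of \(t^{m}\) computed from equation (2) via elementary symmetric polynomials.

```python
from itertools import permutations, product
from math import comb

def cycles(sigma):
    # cycle decomposition of a permutation of [n] given in one-line notation
    n, seen, cyc = len(sigma), set(), []
    for s in range(1, n + 1):
        if s not in seen:
            c, x = [], s
            while x not in seen:
                seen.add(x)
                c.append(x)
                x = sigma[x - 1]
            cyc.append(tuple(c))
    return cyc

def num_descents(p):
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def count_type(n, m, k):
    # (1,...,1)-compatible cycle-ordered weighted permutations of type (n,m+1,k)
    total = 0
    for sigma in permutations(range(1, n + 1)):
        cyc = cycles(sigma)
        if len(cyc) != m + 1:
            continue
        for p in permutations(range(1, m + 2)):
            if p[-1] != 1:
                continue
            d = num_descents(p)
            # (1,...,1)-compatibility forces w(c) in {0, ..., len(c) - 1}
            total += sum(1 for w in product(*(range(len(c)) for c in cyc))
                         if sum(w) == k - d)
    return total

def elem_sym(vals, j):
    # elementary symmetric polynomial e_j of the given integers
    e = [1] + [0] * j
    for v in vals:
        for s in range(j, 0, -1):
            e[s] += v * e[s - 1]
    return e[j]

def coeff(n, m, k):
    # coefficient of t^m in (n-1)! * ehr(Delta_{k,n}, t), from equation (2)
    return sum((-1) ** i * comb(n, i) * (k - i) ** m
               * elem_sym(range(-i + 1, n - i), n - 1 - m) for i in range(k))

for n in range(2, 6):
    for k in range(1, n):
        for m in range(n):
            assert count_type(n, m, k) == coeff(n, m, k)
print("Theorem 2.2 verified for n <= 5")
```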
**Remark 2.3**.: We note that the proof of Theorem 2.2 implicitly contains a proof of Worpitzky's identity, which states that for positive integers \(k\) and \(m\)
\[k^{m}=\sum_{i=0}^{m-1}A(m,i)\binom{k+i}{m}.\]
Indeed, by reasoning similar to that of the proof of Theorem 2.2, \(k^{m}\) counts the number of pairs \((p,w)\) consisting of a permutation \(p\) of \([m+1]\) where \(p(m+1)=1\) and a weight function \(w:[m+1]\to\mathbb{N}_{0}\) such that \(|\operatorname{Des}(p)|+\sum_{i=1}^{m+1}w(i)=k\). On the other hand, \(A(m,i)\binom{k+i}{m}=A(m,m-i-1)\binom{k-(m-i)+m}{m}\) is the number of such pairs \((p,w)\) where \(|\operatorname{Des}(p)|=m-i\) since \(\binom{k-(m-i)+m}{m}\) is the number of ways to distribute \(k-(m-i)\) total weight among \(m+1\) elements. Since we are summing up over \(i\), we obtain Worpitzky's identity.
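A direct numeric check of the identity (our sketch; `eulerian` implements the standard recurrence \(A(n,k)=(k+1)A(n-1,k)+(n-k)A(n-1,k-1)\)):

```python
from math import comb

def eulerian(n, k):
    # A(n, k): number of permutations of [n] with k descents
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n - 1:
        return 0
    return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

# Worpitzky's identity: k^m = sum_i A(m, i) * C(k + i, m)
for m in range(1, 7):
    for k in range(1, 8):
        assert k ** m == sum(eulerian(m, i) * comb(k + i, m) for i in range(m))
print("Worpitzky's identity verified for small m and k")
```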
In light of the previous remark, the proof of Theorem 2.2 is similar in spirit to the proof that \([t^{m}](n-1)!\operatorname{ehr}(\Delta_{k,n},t)=\sum_{\ell=0}^{k-1}W(n,m+1, \ell)A(m,k-1-\ell)\) in [10] since they both involve the use of Worpitzky's identity to some extent along with an inclusion-exclusion argument. However, the proof we provide allows us to interpret the coefficient of \(t^{m}\) in equation (2) more explicitly and keeps us from having to needlessly rewrite formulas
via Worpitzky's identity later in the paper. Moreover, this interpretation of the coefficients allows for extensions to other weighted multi-hypersimplices as we will show.
For a tuple of integers \(\mathbf{c}=(c_{1},\ldots,c_{n})\), let \(\mathcal{R}_{k,\mathbf{c}}\) be the polytope defined by
\[\mathcal{R}_{k,\mathbf{c}}=\left\{x\in[0,c_{1}]\times\cdots\times[0,c_{n}]\mid \sum_{i=1}^{n}x_{i}=k\right\}.\]
In [10], the authors show that the Ehrhart polynomial \(\mathrm{ehr}(\mathcal{R}_{k,\mathbf{c}},t)\) has positive coefficients by expressing the coefficients with a combinatorial formula in terms of \(\mathbf{c}\)-compatible weighted permutations of \([n]\). Here, we describe the analogue of Theorem 2.2 for the polytopes \(\mathcal{R}_{k,\mathbf{c}}\).
For an integer \(v\), let \(\rho_{\mathbf{c},j}(v)\) be defined as
\[\rho_{\mathbf{c},j}(v):=\#\left\{I\in\binom{[n]}{j}:\sum_{i\in I}c_{i}=v \right\}. \tag{3}\]
For example, if \(\mathbf{c}\) consists of all \(1\)'s, then \(\rho_{\mathbf{c},j}(j)=\binom{n}{j}\). It is shown in [10] that the Ehrhart polynomial of \(\mathcal{R}_{k,\mathbf{c}}\) can be written as
\[\mathrm{ehr}(\mathcal{R}_{k,\mathbf{c}},t)=\sum_{j=0}^{k-1}(-1)^{j}\sum_{v=0}^ {k-1}\binom{t(k-v)+n-1-j}{n-1}\rho_{\mathbf{c},j}(v),\]
and the coefficient of \(t^{m}\) in this polynomial is
\[\frac{1}{(n-1)!}\sum_{v=0}^{k}(k-v)^{m}\,\sum_{j=0}^{n}(-1)^{j}\,P_{-j+1,n-1-j }^{n-1-m}\,\rho_{\mathbf{c},j}(v).\]
Using the same reasoning as in the proof of Theorem 2.2, the quantity
\[\sum_{v=0}^{k}(k-v)^{m}\,\sum_{j=0}^{n}(-1)^{j}\,P_{-j+1,n-1-j}^{n-1-m}\,\rho_ {\mathbf{c},j}(v)\]
is the number of \(\mathbf{c}\)-compatible cycle-ordered, weighted permutations of type \((n,m+1,k)\).
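Both the lattice-point count defining \(\operatorname{ehr}(\mathcal{R}_{k,\mathbf{c}},t)\) and the alternating-sum formula above can be evaluated mechanically, which gives a quick sanity check. The following Python sketch (ours, illustrative only) computes \(\rho_{\mathbf{c},j}(v)\) as in equation (3) and compares the two evaluations on small data.

```python
from itertools import combinations, product
from math import comb

def rho(c, j, v):
    """rho_{c,j}(v): number of j-subsets I of [n] with sum_{i in I} c_i = v."""
    return sum(1 for I in combinations(range(len(c)), j)
               if sum(c[i] for i in I) == v)

def ehr_formula(k, c, t):
    """The alternating-sum formula for ehr(R_{k,c}, t) from the text."""
    n, total = len(c), 0
    for j in range(k):
        for v in range(k):
            top = t * (k - v) + n - 1 - j
            if top >= n - 1:  # the binomial coefficient vanishes otherwise
                total += (-1) ** j * comb(top, n - 1) * rho(c, j, v)
    return total

def ehr_direct(k, c, t):
    """Count x in Z^n with 0 <= x_i <= c_i * t and sum x_i = k * t."""
    return sum(1 for x in product(*(range(ci * t + 1) for ci in c))
               if sum(x) == k * t)

c = (1, 2, 1, 3)
for k in range(1, sum(c)):
    for t in range(1, 4):
        assert ehr_formula(k, c, t) == ehr_direct(k, c, t), (k, t)
print("Formula matches direct lattice-point counts.")
```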
## 3. Formulas for Ehrhart polynomials
Let \(\mathbf{a}=(a_{1},\ldots,a_{r})\) be an \(r\)-tuple of positive integers such that \(a_{1}+\cdots+a_{r}=n\) and let \(\mathbf{c}=(c_{1},\ldots,c_{r})\) be an \(r\)-tuple of positive integers. We define what we call the weighted multi-hypersimplex of type \((k,\mathbf{a},\mathbf{c})\) to be the polytope
\[\Delta_{k,\mathbf{a},\mathbf{c}}=\left\{x\in\mathbb{R}_{\geq 0}^{n}:\sum_{i=1} ^{n}x_{i}=k\text{ and }\sum_{j=1+a_{1}+\cdots+a_{i-1}}^{a_{1}+\cdots+a_{i}}x_{j}\leq c_{i} \text{ for all }1\leq i\leq r\right\},\]
where \(a_{0}:=0\). We note that these polytopes were explicitly defined in [11].
Here, we would like to find a formula for the Ehrhart polynomial of \(\Delta_{k,\mathbf{a},\mathbf{c}}\) or at least show how it can be computed for particular cases.
For a given integer \(u\), the number of nonnegative integer solutions to \(\sum_{j=1}^{a_{i}}x_{j}=u\) is \(\binom{u+a_{i}-1}{a_{i}-1}\). We can use this fact to compute
\[\mathrm{ehr}(\Delta_{k,\mathbf{a},\mathbf{c}},t)=\#\left\{x\in\mathbb{Z}_{ \geq 0}^{n}:\sum_{i=1}^{n}x_{i}=kt\text{ and }\sum_{j=1+a_{1}+\cdots+a_{i-1}}^{a_{1}+\cdots+a_{i}}x_{j}\leq c_{i}t \text{ for all }1\leq i\leq r\right\}\]
for any positive integer \(t\) as a coefficient of a product of polynomials.
Indeed,
\[\mathrm{ehr}(\Delta_{k,\mathbf{a},\mathbf{c}},t)=[x^{kt}]\prod_{i=1}^{r}\left( \sum_{j=0}^{c_{i}t}\binom{j+a_{i}-1}{a_{i}-1}x^{j}\right).\]
We can write \(\sum_{j=0}^{c_{i}t}\binom{j+a_{i}-1}{a_{i}-1}x^{j}=\frac{1}{(a_{i}-1)!}D^{a_{i} -1}(1+x+\cdots+x^{c_{i}t+a_{i}-1})=\frac{1}{(a_{i}-1)!}D^{a_{i}-1}\left(\frac{1 -x^{c_{i}t+a_{i}}}{1-x}\right)\), where \(D^{k}\) denotes the \(k\)th derivative with respect to \(x\). Therefore, we have the following theorem.
**Theorem 3.1**: \[\mathrm{ehr}(\Delta_{k,\mathbf{a},\mathbf{c}},t)=[x^{kt}]\prod_{i=1}^{r}\left( \frac{1}{(a_{i}-1)!}D^{a_{i}-1}\left(\frac{1-x^{c_{i}t+a_{i}}}{1-x}\right) \right).\]
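Since Theorem 3.1 reduces the evaluation of \(\operatorname{ehr}(\Delta_{k,\mathbf{a},\mathbf{c}},t)\) at a fixed dilate \(t\) to a coefficient extraction, it can be implemented directly. The sketch below (ours, illustrative only) uses the equivalent expansion \(\sum_{j=0}^{c_{i}t}\binom{j+a_{i}-1}{a_{i}-1}x^{j}\) of each factor, multiplies the polynomials explicitly, and checks the extracted coefficient against a direct lattice-point count.

```python
from itertools import product
from math import comb

def ehr_at(k, a, c, t):
    """[x^{kt}] of prod_i sum_{j=0}^{c_i t} C(j+a_i-1, a_i-1) x^j (Theorem 3.1)."""
    poly = [1]                       # poly[d] = coefficient of x^d
    for ai, ci in zip(a, c):
        factor = [comb(j + ai - 1, ai - 1) for j in range(ci * t + 1)]
        new = [0] * (len(poly) + len(factor) - 1)
        for d1, p1 in enumerate(poly):
            for d2, p2 in enumerate(factor):
                new[d1 + d2] += p1 * p2
        poly = new
    return poly[k * t] if k * t < len(poly) else 0

def ehr_direct(k, a, c, t):
    """Count the lattice points of the t-th dilate of Delta_{k,a,c} directly."""
    count = 0
    for x in product(range(k * t + 1), repeat=sum(a)):
        if sum(x) != k * t:
            continue
        pos, ok = 0, True
        for ai, ci in zip(a, c):
            if sum(x[pos:pos + ai]) > ci * t:
                ok = False
                break
            pos += ai
        if ok:
            count += 1
    return count

a, c, k = (1, 2, 2), (1, 1, 2), 2
for t in range(1, 4):
    assert ehr_at(k, a, c, t) == ehr_direct(k, a, c, t)
print("Coefficient extraction agrees with direct counting.")
```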
When \(\mathbf{a}=(1,\ldots,1,2)=(1^{(n-2)},2)\) (\(1^{(n-2)}\) denotes \(n-2\) copies of \(1\)), we have the following formula. Let \(\mathbf{c}^{\prime}=(c_{1},\ldots,c_{n-2})\) be the first \(n-2\) entries of \(\mathbf{c}\).
\[\mathrm{ehr}(\Delta_{k,(1,\ldots,1,2),\mathbf{c}},t)=[x^{kt}]\prod_{i=1}^{n-2}\left(\frac{1-x^{c_{i}t+1}}{1-x}\right)\left(\frac{1-(c_{n-1}t+2)x^{c_{n-1}t+1}+(c_{n-1}t+1)x^{c_{n-1}t+2}}{(1-x)^{2}}\right)\]
\[=[x^{kt}]\frac{1}{(1-x)^{n}}\Bigg{(}\prod_{i=1}^{n-2}\left(1-x^{c_{i}t+1}\right)-(c_{n-1}t+2)x^{c_{n-1}t+1}\prod_{i=1}^{n-2}\left(1-x^{c_{i}t+1}\right)+(c_{n-1}t+1)x^{c_{n-1}t+2}\prod_{i=1}^{n-2}\left(1-x^{c_{i}t+1}\right)\Bigg{)}\]
\[=\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-1}\binom{(k-v)t-i+n-1}{n-1}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[\quad-(c_{n-1}t+2)\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-c_{n-1}-1}\binom{(k-v-c_{n-1})t-i-1+n-1}{n-1}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[\quad+(c_{n-1}t+1)\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-c_{n-1}-1}\binom{(k-v-c_{n-1})t-i-2+n-1}{n-1}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[=\frac{1}{(n-1)!}\Bigg{(}\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-1}(k-v)^{m}P_{-i+1,n-1-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[\quad-c_{n-1}\sum_{m=0}^{n-1}t^{m+1}\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-c_{n-1}-1}(k-v-c_{n-1})^{m}P_{-i,n-2-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[\quad-2\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-c_{n-1}-1}(k-v-c_{n-1})^{m}P_{-i,n-2-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[\quad+c_{n-1}\sum_{m=0}^{n-1}t^{m+1}\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-c_{n-1}-1}(k-v-c_{n-1})^{m}P_{-i-1,n-3-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[\quad+\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-c_{n-1}-1}(k-v-c_{n-1})^{m}P_{-i-1,n-3-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\Bigg{)}.\]
## 4. A combinatorial formula for the Ehrhart coefficients
When \(c_{n-1}=1\), we will use the above formula in the proof of Theorem 4.2 below. First we will need the following definition for the statement of Theorem 4.2.
**Definition 4.1**.: For \(\mathbf{a}=(1^{(n-2)},2)\) and \(\mathbf{c}=(c_{1},\ldots,c_{n-2},1)\), we will say that a cycle \(\mathfrak{c}\in C(\sigma)\) of a cycle-ordered, weighted permutation \((\sigma,p,w)\) is _properly \((\mathbf{a},\mathbf{c})\)-weighted_ if the following conditions are satisfied:
1. If neither \(n-1\) nor \(n\) is in \(\mathfrak{c}\), then \(w(\mathfrak{c})<\sum_{i\in\mathfrak{c}}c_{i}\).
2. If \(n-1\) or \(n\) (or both) are in \(\mathfrak{c}\), then \(w(\mathfrak{c})<1+\sum_{i\in\mathfrak{c},\,i\leq n-2}c_{i}\).
If a cycle is not properly \((\mathbf{a},\mathbf{c})\)-weighted then we say it is improperly \((\mathbf{a},\mathbf{c})\)-weighted. Moreover, a cycle-ordered, weighted permutation \((\sigma,p,w)\) is said to be \((\mathbf{a},\mathbf{c})\)_-compatible_ if each cycle of \(\sigma\) is properly \((\mathbf{a},\mathbf{c})\)-weighted.
We make Definition 4.1 solely for the purpose of articulating Theorem 4.2. We do not attempt to generalize this definition to other values of \(\mathbf{a}\) and \(\mathbf{c}\) since we are not confident about what the definition should be in general.
We are now ready to state and prove Theorem 4.2.
**Theorem 4.2**: _The coefficient of \(t^{m}\) in \((n-1)!\operatorname{ehr}(\Delta_{k,(1^{(n-2)},2),(c_{1},\ldots,c_{n-2},1)},t)\) can be described as follows._
_If \(k<n-1\) or \(m+1>1\), let \(S_{1}\) be the set of \((\mathbf{a},\mathbf{c})\)-compatible cycle-ordered, weighted permutations of type \((n,m+1,k)\). Otherwise, if \(k=n-1\) and \(m+1=1\), let \(S_{1}\) be the set of cycle-ordered, weighted permutations of type \((n,1,n-1)\) (note that these cycle-ordered, weighted permutations are not \((\mathbf{a},\mathbf{c})\)-compatible)._
_Let \(S_{2}\) be the set of cycle-ordered, weighted permutations of type \((n,m+1,k)\) such that_
* _the cycles of_ \(\sigma\) _not containing_ \(n-1\) _are properly_ \((\mathbf{a},\mathbf{c})\)_-weighted,_
* \(n-1\) _and_ \(n\) _are in different cycles,_
* _the cycle containing_ \(n-1\) _is improperly_ \((\mathbf{a},\mathbf{c})\)_-weighted,_
* _the cycle_ \(\mathfrak{c}\) _containing_ \(n\) _satisfies_ \[w(\mathfrak{c})=\sum_{i\in\mathfrak{c},\,i\neq n}c_{i}.\]
_Let \(S_{3}\) be the set of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m,k)\) such that_
* _the cycles not containing_ \(n-1\) _or_ \(n\) _are properly_ \((\mathbf{a},\mathbf{c})\)_-weighted,_
* \(n-1\) _and_ \(n\) _are in distinct cycles,_
* _the cycle containing_ \(n-1\) _is improperly_ \((\mathbf{a},\mathbf{c})\)_-weighted,_
* _the cycle_ \(\mathfrak{c}\) _containing_ \(n\) _satisfies_ \(w(\mathfrak{c})<\sum_{i\in\mathfrak{c},\,i\neq n}c_{i}\)_._
_If \(m>1\), let \(S_{4}\) be the set of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m,k)\) such that \(n-1\) and \(n\) are in the same cycle, which is improperly \((\mathbf{a},\mathbf{c})\)-weighted, and the remaining cycles are properly \((\mathbf{a},\mathbf{c})\)-weighted. Otherwise, if \(m=1\), let \(S_{4}=\emptyset\)._
_Then we have that_
\[[t^{m}](n-1)!\operatorname{ehr}(\Delta_{k,(1^{n-2},2),(c_{1},\ldots,c_{n-2},1) },t)=|S_{1}|+|S_{2}|-|S_{3}|-|S_{4}|=|S_{1}\cup S_{2}|-|S_{3}\cup S_{4}|.\]
Proof.: As we saw before, \((n-1)!\) times the Ehrhart polynomial of \(\Delta_{k,(1^{(n-2)},2),(c_{1},\ldots,c_{n-2},1)}\) is equal to

\[\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-1}(k-v)^{m}P_{-i+1,n-1-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[+\sum_{m=0}^{n-1}t^{m+1}\sum_{i=0}^{k-1}(-1)^{i+1}\sum_{v=0}^{k-2}(k-v-1)^{m}P_{-i,n-2-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[+2\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}(-1)^{i+1}\sum_{v=0}^{k-2}(k-v-1)^{m}P_{-i,n-2-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[+\sum_{m=0}^{n-1}t^{m+1}\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-2}(k-v-1)^{m}P_{-i-1,n-3-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[+\sum_{m=0}^{n-1}t^{m}\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-2}(k-v-1)^{m}P_{-i-1,n-3-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v),\]

where in the second and third sums, we absorbed the minus sign in front of the sum into the \((-1)^{i+1}\) term. The coefficient of \(t^{m}\) can be read off to be the sum of the terms

\[a_{1}=\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-1}(k-v)^{m}P_{-i+1,n-1-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[a_{2}=2\sum_{i=0}^{k-1}(-1)^{i+1}\sum_{v=0}^{k-2}(k-v-1)^{m}P_{-i,n-2-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[a_{3}=\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-2}(k-v-1)^{m}P_{-i-1,n-3-i}^{n-1-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[a_{4}=\sum_{i=0}^{k-1}(-1)^{i+1}\sum_{v=0}^{k-2}(k-v-1)^{m-1}P_{-i,n-2-i}^{n-m}\rho_{\mathbf{c}^{\prime},i}(v)\]
\[a_{5}=\sum_{i=0}^{k-1}(-1)^{i}\sum_{v=0}^{k-2}(k-v-1)^{m-1}P_{-i-1,n-3-i}^{n-m}\rho_{\mathbf{c}^{\prime},i}(v).\]
Using similar reasoning as in the proof of Theorem 2.2, we have the following interpretation of the values above.
* If \(k<n-1\) or \(m+1>1\), let \(A_{1}\) be the set of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m+1,k)\) where the cycles that do not contain \(n-1\) or \(n\) are properly \((\mathbf{a},\mathbf{c})\)-weighted. (Note that there are no restrictions on the weight of the cycles containing \(n-1\) or \(n\).) Otherwise, \(k=n-1\), \(m+1=1\) and we let \(A_{1}\) be the set of cycle-ordered, weighted permutations of type \((n,1,n-1)\). Then \(a_{1}=|A_{1}|\).
* Let \(A_{2}\) be the set of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m+1,k)\) where the cycles that do not contain \(n-1\) or \(n\) are properly \((\mathbf{a},\mathbf{c})\)-weighted, \(n-1\) and \(n\) are in distinct cycles, and exactly one of the cycles containing \(n-1\) or \(n\) is improperly \((\mathbf{a},\mathbf{c})\)-weighted. Let \(A_{2}^{\prime}\) be defined in the same way as \(A_{2}\) except both of the cycles containing \(n-1\) and \(n\) are improperly \((\mathbf{a},\mathbf{c})\)-weighted. Then \(a_{2}=-|A_{2}|-2|A_{2}^{\prime}|\).
* Let \(A_{3}\) be the set of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m+1,k)\) such that
* the cycles not containing \(n-1\) or \(n\) are properly \((\mathbf{a},\mathbf{c})\)-weighted,
* \(n-1\) and \(n\) are in distinct cycles, the cycle containing \(n-1\) is improperly \((\mathbf{a},\mathbf{c})\)-weighted and the weight of the cycle \(\mathfrak{c}\) containing \(n\) is at least \(\sum_{i\in\mathfrak{c},i\neq n}c_{i}\).
If \(m+1>1\), let \(A^{\prime}_{3}\) be the set of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m+1,k)\) such that
* the cycles not containing \(n-1\) or \(n\) are properly \((\mathbf{a},\mathbf{c})\)-weighted,
* \(n-1\) and \(n\) are in the same cycle, which is improperly \((\mathbf{a},\mathbf{c})\)-weighted. Otherwise, \(m+1=1\) and we let \(A^{\prime}_{3}=\emptyset\). Then \(a_{3}=|A_{3}|-|A^{\prime}_{3}|\).
* Let \(A_{4}\) be the set of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m,k)\) such that \(n-1\) and \(n\) are in distinct cycles, the cycle containing \(n-1\) is improperly \((\mathbf{a},\mathbf{c})\)-weighted, and the cycles not containing \(n-1\) or \(n\) are properly \((\mathbf{a},\mathbf{c})\)-weighted. Then \(a_{4}=-|A_{4}|\).
* Let \(A_{5}\) be the set of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m,k)\) such that
* the cycles not containing \(n-1\) or \(n\) are properly \((\mathbf{a},\mathbf{c})\)-weighted,
* \(n-1\) and \(n\) are in distinct cycles, and the cycle containing \(n-1\) is improperly \((\mathbf{a},\mathbf{c})\)-weighted,
* the weight of the cycle \(\mathfrak{c}\) containing \(n\) is at least \(\sum_{i\in{\mathfrak{c}},i\neq n}c_{i}\). If \(m>1\), let \(A^{\prime}_{5}\) be the set of cycle-ordered, weighted permutations \((\sigma,p,w)\) of type \((n,m,k)\) such that
* the cycles not containing \(n-1\) or \(n\) are properly \((\mathbf{a},\mathbf{c})\)-weighted,
* \(n-1\) and \(n\) are in the same cycle, which is improperly \((\mathbf{a},\mathbf{c})\)-weighted. Otherwise, \(m=1\) and we let \(A^{\prime}_{5}=\emptyset\). Then \(a_{5}=|A_{5}|-|A^{\prime}_{5}|\).
We can now see that \(a_{1}+a_{2}+a_{3}=|S_{1}|+|S_{2}|\). To see this, let \((\sigma,p,w)\) be of type \((n,m+1,k)\).
* If \((\sigma,p,w)\in S_{1}\), then it is contained in \(A_{1}\) and contributes \(+1\) to the sum \(a_{1}+a_{2}+a_{3}\).
* If \((\sigma,p,w)\in S_{2}\), then it is contained in \(A_{1},\,A_{2}\), and \(A_{3}\) and contributes \(1-1+1=1\) to the sum. Note in the case that \(m+1=1\) we have that \(S_{2}=\emptyset\).
* If \((\sigma,p,w)\) has some cycle not containing \(n-1\) or \(n\) that is improperly \((\mathbf{a},\mathbf{c})\)-weighted, then it is not contained in any \(A_{i}\) or \(A^{\prime}_{i}\) and does not contribute to the sum.
* Assume that \(n-1\) and \(n\) are in distinct cycles and exactly one of these cycles is improperly \((\mathbf{a},\mathbf{c})\)-weighted. Additionally, if the cycle containing \(n-1\) is improperly \((\mathbf{a},\mathbf{c})\)-weighted, then the cycle \(\mathfrak{c}\) containing \(n\) has weight less than \(\sum_{i\in\mathfrak{c},\,i\neq n}c_{i}\). If the remaining cycles not containing \(n-1\) or \(n\) are properly \((\mathbf{a},\mathbf{c})\)-weighted, then \((\sigma,p,w)\) is contained in \(A_{1}\) and \(A_{2}\) and thus contributes \(1-1=0\) to the sum.
* If \(n-1\) and \(n\) are in distinct cycles which are both improperly \((\mathbf{a},\mathbf{c})\)-weighted and the cycles not containing \(n-1\) or \(n\) are properly \((\mathbf{a},\mathbf{c})\)-weighted, then \((\sigma,p,w)\) is contained in \(A_{1}\), \(A^{\prime}_{2}\), and \(A_{3}\) and contributes \(1-2+1=0\) to the sum.
* If \(n-1\) and \(n\) are in the same cycle which is improperly \((\mathbf{a},\mathbf{c})\)-weighted, and the remaining cycles are properly \((\mathbf{a},\mathbf{c})\)-weighted, then \((\sigma,p,w)\) is contained in \(A_{1}\) and \(A^{\prime}_{3}\) and contributes \(1-1=0\) to the sum.
It can be seen, using reasoning similar to the above argument showing \(a_{1}+a_{2}+a_{3}=|S_{1}|+|S_{2}|\), that \(a_{4}+a_{5}=-|S_{3}|-|S_{4}|\).
This completes the proof.
When \(\mathbf{c}\) consists of all \(1\)'s, we are able to use Theorem 4.2 to show that the coefficients are positive. We note that \(\Delta_{k,(1^{(n-2)},2),\mathbf{1}}\) is the base polytope of the panhandle matroid \(\operatorname{Pan}_{k,n-2,n}\) defined in [13], where a promising method to prove Ehrhart positivity of panhandle matroids is outlined. However, the proof we present uses a different approach than what is suggested there.
**Theorem 4.3**: _Let \(\mathbf{1}\in\mathbb{Z}^{n}\) be the tuple of all 1's. Then the coefficient of \(t^{m}\) in \((n-1)!\operatorname{ehr}(\Delta_{k,(1^{(n-2)},2),\mathbf{1}},t)\) is positive._
Proof.: Let \(S_{1},S_{2},S_{3},S_{4}\) be defined as in Theorem 4.2. We will show that there are injections \(f_{1}:S_{3}\to S_{1}\cup S_{2}\) and \(f_{2}:S_{4}\to S_{1}\cup S_{2}\) such that \(\operatorname{Im}(f_{1})\cap\operatorname{Im}(f_{2})=\emptyset\). Then we will show that there is an element of \(S_{1}\cup S_{2}\) that is not contained in \(\operatorname{Im}(f_{1})\cup\operatorname{Im}(f_{2})\). This will complete the proof.
We note that when \(k=n-1\), \(\Delta_{k,(1^{(n-2)},2),\mathbf{1}}\) is integrally equivalent to the standard 2-dimensional simplex, which is well known to be Ehrhart positive. Therefore, we assume throughout the proof that \(1\leq k\leq n-2\); in particular, \(\Delta_{k,(1^{(n-2)},2),\mathbf{1}}\) is \((n-1)\)-dimensional.
We first note that if \(m=0\) or \(m=1\), then \(S_{3},S_{4}=\emptyset\), so we may assume that \(m\geq 2\).
Let \((\sigma,p,w)\in S_{3}\); we define \((\sigma^{\prime},p^{\prime},w^{\prime})=f_{1}((\sigma,p,w))\) as follows. We note that the condition on the weight of the cycle containing \(n\) implies that this cycle contains some element other than \(n\). First we write the cycle containing \(n\) as \((i_{1},\ldots,i_{r},n)\), and we break this cycle into two by \((i_{1},\ldots,i_{w((i_{1},\ldots,i_{r},n))+1})(i_{w((i_{1},\ldots,i_{r},n))+2}, \ldots,i_{r},n)\). We obtain \(\sigma^{\prime}\) and the ordering of its cycles given by \(p^{\prime}\) by simply replacing \((i_{1},\ldots,i_{r},n)\) with
\[(i_{1},\ldots,i_{w((i_{1},\ldots,i_{r},n))+1})(i_{w((i_{1},\ldots,i_{r},n))+2},\ldots,i_{r},n)\]
in the ordering with the following exception: if \((i_{1},\ldots,i_{w((i_{1},\ldots,i_{r},n))+1})\) contains the element 1, then we place \((i_{w((i_{1},\ldots,i_{r},n))+2},\ldots,i_{r},n)\) at the beginning in the ordering of the cycles.
For example, if \(\sigma\) and the order of its cycles is \((2\ 8)(4\ 3\ 5\ 7)(1\ 6)\) where \(w((4\ 3\ 5\ 7))=1\), then \(\sigma^{\prime}\) and the order of its cycles is given by \((2\ 8)(4\ 3)(5\ 7)(1\ 6)\).
We initially define a weight \(w^{*}\) on the cycle \(\mathfrak{c}\) of \(\sigma^{\prime}\) containing \(n-1\) that will be modified later. Notice that \(p^{\prime}\) has either the same or one more descent than \(p\). If \(p^{\prime}\) has the same number of descents as \(p\), then we take \(w^{*}(\mathfrak{c})=w(\mathfrak{c})\), otherwise, we take \(w^{*}(\mathfrak{c})=w(\mathfrak{c})-1\). The reason why we make this definition is to ensure that the weight function \(w^{\prime}\) defined below will make \((\sigma^{\prime},p^{\prime},w^{\prime})\) of type \((n,m+1,k)\) rather than \((n,m+1,k+1)\).
The weight function \(w^{\prime}\) is defined in the same way as \(w\) on all cycles not containing \(n-1\) or \(n\).
We define \(w^{\prime}((i_{1},\ldots,i_{w((i_{1},\ldots,i_{r},n))+1}))\) to be the integer \(d\) where \(i_{w((i_{1},\ldots,i_{r},n))+1}\) is the \((d+1)\)'th smallest element among \(i_{1},\ldots,i_{w((i_{1},\ldots,i_{r},n))+1}\). For example, if \(i_{w((i_{1},\ldots,i_{r},n))+1}\) is the smallest among these values, then \(d=0\), or if \(i_{w((i_{1},\ldots,i_{r},n))+1}\) is the largest, then \(d=w((i_{1},\ldots,i_{r},n))\).
Let \(\mathfrak{c}\) be the cycle containing \(n-1\). We define
\[w^{\prime}((i_{w((i_{1},\ldots,i_{r},n))+2},\ldots,i_{r},n))=\min\left\{\sum_{j =w((i_{1},\ldots,i_{r},n))+2}^{r}c_{i_{j}},\,w((i_{1},\ldots,i_{r},n))-d+w^{* }(\mathfrak{c})\right\}.\]
Finally, we define
\[w^{\prime}(\mathfrak{c})=w^{*}(\mathfrak{c})-(w^{\prime}((i_{w((i_{1},\ldots,i _{r},n))+2},\ldots,i_{r},n))-(w((i_{1},\ldots,i_{r},n))-d)).\]
The resulting cycle-ordered, weighted permutation \((\sigma^{\prime},p^{\prime},w^{\prime})\) is either properly \((\mathbf{a},\mathbf{1})\)-weighted, or the cycle containing \(n-1\) is improperly \((\mathbf{a},\mathbf{1})\)-weighted, every other cycle is properly \((\mathbf{a},\mathbf{1})\)-weighted, and the cycle \((i_{w((i_{1},\ldots,i_{r},n))+2},\ldots,i_{r},n)\) containing \(n\) has weight \(\sum_{j=w((i_{1},\ldots,i_{r},n))+2}^{r}c_{i_{j}}\). Also, we have that either \(w^{\prime}(\sigma^{\prime})=w(\sigma)\) and \(|\operatorname{Des}(p^{\prime})|=|\operatorname{Des}(p)|\) or \(w^{\prime}(\sigma^{\prime})=w(\sigma)-1\) and \(|\operatorname{Des}(p^{\prime})|=|\operatorname{Des}(p)|+1\), so \((\sigma^{\prime},p^{\prime},w^{\prime})\) is of type \((n,m+1,k)\). Therefore, \((\sigma^{\prime},p^{\prime},w^{\prime})\) is an element of \(S_{1}\cup S_{2}\).
To see that \(f_{1}\) is injective, we show that the above process in the definition of \(f_{1}\) can be reversed. Let \((\sigma^{\prime},p^{\prime},w^{\prime})\) be in the image of \(f_{1}\). Let \((i_{1},\ldots,i_{r^{\prime}})\) be the cycle preceding the cycle containing \(n\) (if the cycle containing \(n\) is the first cycle, then we mean \((i_{1},\ldots,i_{r^{\prime}})\) to be the last cycle), where we have written these elements so that \(i_{r^{\prime}}\) is the \((w^{\prime}((i_{1},\ldots,i_{r^{\prime}}))+1)\)'th smallest element among \(i_{1},\ldots,i_{r^{\prime}}\). Then we merge the cycles \((i_{1},\ldots,i_{r^{\prime}})(j_{1},\ldots,j_{s},n)\) into
one cycle: \((i_{1},\ldots,i_{r^{\prime}},j_{1},\ldots,j_{s},n)\) and we keep the relative ordering of the cycles to obtain the permutation and ordering \(\sigma\) and \(p\). We then define \(w((i_{1},\ldots,i_{r^{\prime}},j_{1},\ldots,j_{s},n))=r^{\prime}-1\), and for the remaining cycles of \(\sigma\) not containing \(n-1\), \(w\) is defined in the same way as \(w^{\prime}\). For the cycle \(\mathfrak{c}\) containing \(n-1\), we define \(w(\mathfrak{c})\) to be
\[w(\mathfrak{c})=w^{\prime}(\mathfrak{c})+w^{\prime}((i_{1},\ldots,i_{r^{ \prime}}))+w^{\prime}((j_{1},\ldots,j_{s},n))-(r^{\prime}-1)\]
if \(p\) has the same number of descents as \(p^{\prime}\), and
\[w(\mathfrak{c})=w^{\prime}(\mathfrak{c})+w^{\prime}((i_{1},\ldots,i_{r^{ \prime}}))+w^{\prime}((j_{1},\ldots,j_{s},n))-(r^{\prime}-1)+1\]
if \(p\) has one less descent than \(p^{\prime}\).
For example, if \(\sigma^{\prime}\) and the ordering of its cycles is given by \((2\ 9)(4\ 6\ 3\ 7)(5\ 10)(1\ 8)\) with weights \(2,2,1,1\) respectively, then we arrange the elements of \((4\ 6\ 3\ 7)\) so that its 3rd smallest element is written on the right: \((3\ 7\ 4\ 6)\). Then we merge the cycles \((3\ 7\ 4\ 6)(5\ 10)\) into one to obtain the permutation \((2\ 9)(3\ 7\ 4\ 6\ 5\ 10)(1\ 8)\) and we give the weights \(2,3,1\) respectively.
Thus, \(f_{1}\) is injective.
Now we define the injection \(f_{2}:S_{4}\to S_{1}\cup S_{2}\). Let \((\sigma,p,w)\in S_{4}\), and let \((i_{1},\ldots,i_{r},n-1,j_{1},\ldots,j_{s},n)\) be the cycle containing \(n-1\) and \(n\). We define \(f_{2}((\sigma,p,w))=(\sigma^{\prime},p^{\prime},w^{\prime})\) as follows. The permutation \(\sigma^{\prime}\) and the ordering of its cycles is obtained by simply splitting the cycle \((i_{1},\ldots,i_{r},n-1,j_{1},\ldots,j_{s},n)\) into two cycles \((i_{1},\ldots,i_{r},n-1)(j_{1},\ldots,j_{s},n)\), where in the case that \(1\) is an element among \(i_{1},\ldots,i_{r}\), the cycle \((j_{1},\ldots,j_{s},n)\) is placed at the beginning in the ordering of the cycles. We define
\[w^{\prime}((j_{1},\ldots,j_{s},n))=\sum_{i=1}^{s}c_{j_{i}}\]
and
\[w^{\prime}((i_{1},\ldots,i_{r},n-1))=\] \[\begin{cases}w((i_{1},\ldots,i_{r},n-1,j_{1},\ldots,j_{s},n))- \sum_{i=1}^{s}c_{j_{i}}&\text{if }|\operatorname{Des}(p^{\prime})|=| \operatorname{Des}(p)|\\ w((i_{1},\ldots,i_{r},n-1,j_{1},\ldots,j_{s},n))-1-\sum_{i=1}^{s}c_{j_{i}}& \text{if }|\operatorname{Des}(p^{\prime})|=|\operatorname{Des}(p)|+1.\end{cases}\]
An argument similar to the one above by merging the cycles containing \(n-1\) and \(n\) shows that this process can be reversed, and thus \(f_{2}\) is injective. Also, the cycle containing \(n-1\) is potentially the only cycle that is improperly \((\mathbf{a},\mathbf{1})\)-weighted, and since \(w^{\prime}((j_{1},\ldots,j_{s},n))=\sum_{i=1}^{s}c_{j_{i}}\) and \((\sigma^{\prime},p^{\prime},w^{\prime})\) is of type \((n,m+1,k)\), \((\sigma^{\prime},p^{\prime},w^{\prime})\in S_{1}\cup S_{2}\).
Notice that an element in \(\operatorname{Im}(f_{1})\) has the property that the cycle preceding the cycle containing \(n\) does not contain the element \(n-1\), and this is not the case for an element in \(\operatorname{Im}(f_{2})\). This shows that \(\operatorname{Im}(f_{1})\cap\operatorname{Im}(f_{2})=\emptyset\).
Finally, we show that there is an element in \((S_{1}\cup S_{2})\setminus(\operatorname{Im}(f_{1})\cup\operatorname{Im}(f_{2}))\), which will conclude the proof. We can assume that \(m<n-1\) since we know that the coefficient of \(t^{n-1}\) is positive. Let \((\sigma,p,w)\) be a \(((1^{(n-1)}),(1^{(n-1)}))\)-compatible cycle-ordered, weighted permutation of type \((n-1,m+1,k)\) (recall that the number of such \((\sigma,p,w)\) is the coefficient of \(t^{m}\) in \((n-2)!\operatorname{ehr}(\Delta_{k,n-1},t)\)). We can modify the permutation \(\sigma\) by simply replacing the element \(n-1\) with \(n-1\ n\) in its cycle decomposition. For example, if \(\sigma=(1\ 5\ 4)(2\ 3)\), then we obtain the permutation \((1\ 5\ 6\ 4)(2\ 3)\). This results in an \((\mathbf{a},\mathbf{1})\)-compatible cycle-ordered, weighted permutation of type \((n,m+1,k)\) where \(n-1\) and \(n\) are in the same cycle; since every element of \(\operatorname{Im}(f_{1})\cup\operatorname{Im}(f_{2})\) has \(n-1\) and \(n\) in distinct cycles, it is not an element of \(\operatorname{Im}(f_{1})\cup\operatorname{Im}(f_{2})\).
This completes the proof.
**Remark 4.4**.: In the proof of Theorem 4.3, the injection \(f_{2}\) can be defined analogously when \(\mathbf{c}\) is of the more general form \((c_{1},\ldots,c_{n-2},1)\). However, the injection \(f_{1}\) does not
appear to carry over in a simple way to this more general case, so the result of Theorem 4.3 may be extended by finding a suitable injection to replace \(f_{1}\).
## 5. Acknowledgements
The author would like to thank Luis Ferroni for helpful comments and suggestions.
|
2310.12891 | Vertex-critical graphs far from edge-criticality | Let $r$ be any positive integer. We prove that for every sufficiently large
$k$ there exists a $k$-chromatic vertex-critical graph $G$ such that
$\chi(G-R)=k$ for every set $R \subseteq E(G)$ with $|R|\le r$. This partially
solves a problem posed by Erd\H{o}s in 1985, who asked whether the above
statement holds for $k \ge 4$. | Anders Martinsson, Raphael Steiner | 2023-10-19T16:41:35Z | http://arxiv.org/abs/2310.12891v1 | # Vertex-critical graphs far from edge-criticality
###### Abstract.
Let \(r\) be any positive integer. We prove that for every sufficiently large \(k\) there exists a \(k\)-chromatic vertex-critical graph \(G\) such that \(\chi(G-R)=k\) for every set \(R\subseteq E(G)\) with \(|R|\leq r\). This partially solves a problem posed by Erdős in 1985, who asked whether the above statement holds for \(k\geq 4\).
## 1. Introduction
The chromatic number \(\chi(G)\) of a graph \(G\) is among the oldest and most fundamental graph parameters, but despite its intensive study by researchers across the field for more than a century, many fundamental open problems remain. In many instances, we would like to show that for some number \(k\), all graphs in an infinite class \(\mathcal{G}\) of graphs have chromatic number less than \(k\). Oftentimes, the graph class \(\mathcal{G}\) at hand will also have the property that it is closed under taking induced, or even arbitrary, subgraphs. In this case, a central idea for bounding the chromatic number is to consider the _minimal_ graphs in \(\mathcal{G}\) with chromatic number \(k\). These graphs have the special property that removing any vertex (if \(\mathcal{G}\) is closed under induced subgraphs) or any edge (if \(\mathcal{G}\) is closed under subgraphs) reduces the chromatic number from \(k\) to \(k-1\). This enforces many constraints on such minimal graphs, for instance sufficiently high minimum degree and edge-connectivity, among others. Such properties can then prove useful when showing the non-existence of minimal \(k\)-chromatic graphs in \(\mathcal{G}\), which in turn establishes that the chromatic number of graphs in \(\mathcal{G}\) is less than \(k\).
Because of this and many other applications, the notion of _color-critical graphs_ has emerged. Given an integer \(k\), a graph \(G\) is called _\(k\)-chromatic vertex-critical_ if \(\chi(G)=k\), but \(\chi(G-v)=k-1\) for every \(v\in V(G)\). Similarly, it is called _\(k\)-chromatic edge-critical_ if \(\chi(G)=k\) but \(\chi(G-e)=k-1\) for every \(e\in E(G)\). Note that edge-criticality implies vertex-criticality if we exclude redundant cases in which \(G\) has isolated vertices.
A considerable amount of effort has been put into understanding how different the notions of vertex-criticality and edge-criticality can be. Already in 1970, G. Dirac [5] conjectured that for every integer \(k\geq 4\), there exists a \(k\)-chromatic vertex-critical graph \(G\) which at the same time is very much not edge-critical, in the sense that the deletion of any single edge does _not_ lower its chromatic number. In the following, let us say that such a graph _has no critical edges_. Dirac's problem for a long time remained poorly understood. It was not until 1992 that Brown [1] finally found a first construction of some vertex-critical graph with no critical edges; in fact, he found such a construction for \(k=5\). Later, in 2002, Lattanzio [5] found a more general construction which proved Dirac's conjecture for every integer \(k\geq 5\) such that \(k-1\) is not a prime number. Shortly after, Jensen [6] provided a construction of \(k\)-chromatic vertex-critical graphs with no critical edges for every \(k\geq 5\).
This leaves only the case \(k=4\) of Dirac's conjecture open today, which remains an intriguing open problem. A wide-ranging strengthening of Dirac's conjecture was proposed by Erdős in 1985 [4], as follows.
"_I recently heard from Toft the following conjecture of Dirac: Is it true that for every \(k\geq 4\) there is a \(k\)-chromatic vertex-critical graph which remains \(k\)-chromatic if any of its edges is omitted. If the answer as expected is yes, then one could ask whether it is true that for every \(k\geq 4\) and \(r\) there is a vertex-critical \(k\)-chromatic graph which remains \(k\)-chromatic if any \(r\) of its edges are omitted._"
(Paul Erdős, 1985, top of page 113 in [4])
This problem is also mentioned in several other sources, for instance it is listed as Problem 5.14 in the book [8] by Jensen and Toft and on page 66 in Chapter 4 of the Erdős open problem collection by Chung and Graham [2], see also the online version of the problem [3].
The question of Erdős can be rephrased as asking whether for arbitrarily large numbers \(r\) there exist \(k\)-chromatic vertex-critical graphs for \(k\geq 4\) that are "pretty far" from any of their \((k-1)\)-chromatic spanning subgraphs, in the sense that one has to remove more than \(r\) edges to reach any such subgraph. As described above, the case \(r=1\) of this problem is well-understood; however, not much seems to be known beyond that, when \(r\geq 2\).
**Our contribution.** In this paper, we resolve the problem by Erdős for any value of \(r\) and all sufficiently large values of \(k\). To the best of our knowledge, these are the first known examples of such graphs for arbitrarily large values of \(r\).
**Theorem 1**.: _For every \(r\in\mathbb{N}\) there is some \(k_{0}\in\mathbb{N}\) such that for every \(k\geq k_{0}\) there exists a \(k\)-chromatic vertex-critical graph \(G\) such that \(\chi(G-R)=k\) for every \(R\subseteq E(G)\) with \(|R|\leq r\)._
Our result still leaves open Erdős' question when \(k\geq 4\) is fixed as a small value and \(r\) tends to infinity, and this remains an interesting open case of the problem. The rest of this note is devoted to presenting our proof of Theorem 1. The main idea of the construction is to use the existence of uniform hypergraphs that admit a perfect matching upon the removal of any single vertex, but at the same time are locally rather sparse. Such hypergraphs in turn can be constructed randomly, using the recent advances on Shamir's hypergraph matching problem.
**Notation.** For a graph \(G\) and a subset \(X\subseteq V(G)\) of its vertices, \(G[X]\) denotes the subgraph of \(G\) induced by \(X\). A _hypergraph_ is a tuple \((V,E)\) where \(V\) is a finite set and \(E\subseteq 2^{V}\setminus\{\emptyset\}\). Given a hypergraph \(H=(V,E)\), we denote by \(V(H)=V\) its vertex- and by \(E(H)=E\) its hyperedge-set. For \(v\in V(H)\), we denote by \(H-v\) the hypergraph with vertex-set \(V(H)\setminus\{v\}\) and hyperedge-set \(\{e\in E(H)|v\notin e\}\). For \(e\in E(H)\), \(H-e:=(V(H),E(H)\setminus\{e\})\) is the hypergraph obtained by omitting \(e\). Given a hypergraph \(H\), its \(2\)_-section_ is the graph \(G_{2}^{H}\) on the same vertex-set \(V\) and where \(uv\in E(G_{2}^{H})\) if and only if there is some \(e\in E(H)\) with \(u,v\in e\).
## 2. Proof of Theorem 1
In the following, given positive integers \(n,s\) and a probability value \(p\in[0,1]\), we denote by \(\mathcal{H}_{s}(n,p)\) the binomial \(s\)-uniform random hypergraph on vertex-set \(V=[n]=\{1,\ldots,n\}\), obtained by including every \(s\)-subset of \(V\) as a hyperedge independently with probability \(p\). Given a hypergraph \(H\), a _perfect matching_ of \(H\) is a collection \(\{e_{1},\ldots,e_{t}\}\subseteq E(H)\) of hyperedges that form a set-partition of \(V(H)\). Note that if \(H\) is an \(s\)-uniform hypergraph, then the existence of a perfect matching necessitates \(|V(H)|\equiv 0\pmod{s}\). One of the most famous problems in probabilistic graph theory for a long time was _Shamir's problem_, which asked to determine the threshold for the random hypergraph \(\mathcal{H}_{s}(n,p)\) with \(n\equiv 0\pmod{s}\) to contain a perfect matching. This threshold was determined up to a multiplicative error in a breakthrough result by Johansson, Kahn and Vu [9] in 2008, as follows.
**Theorem 2** (cf. [9]).: _For every integer \(s\geq 1\) there exists a constant \(C=C(s)>0\) such that with \(p=p(n)=\frac{C\log n}{n^{s-1}}\) it holds that \(\mathcal{H}_{s}(n,p)\) has a perfect matching w.h.p. provided that \(n\equiv 0\pmod{s}\)._
We remark that recently, Kahn [10] has determined the threshold in Shamir's problem even more precisely, showing that taking \(C=(1+o(1))(s-1)!\) is sufficient (and best-possible). We now use this probabilistic result to deduce the existence of uniform hypergraphs with special properties, as follows.
**Lemma 3**.: _Let \(s\geq 2,m\geq 1\) be fixed integers. Then for every sufficiently large integer \(n\) such that \(n\equiv 1\pmod{s}\), there exists an \(s\)-uniform hypergraph \(H\) on \(n\) vertices with the following properties._
1. _For every_ \(v\in V(H)\)_, the hypergraph_ \(H-v\) _admits a perfect matching._
2. _For every set_ \(F\subseteq E(H)\) _of hyperedges with_ \(|F|\leq m\)_, we have_ \[\left|\bigcup_{e\in F}e\right|\geq(s-1)|F|.\]
Proof.: Let \(p(n):=\frac{C\log n}{n^{s-1}}\) be as in the statement of Theorem 2. Then, for every \(n\equiv 1\pmod{s}\) chosen large enough, by Theorem 2 we have
\[\mathbb{P}(\mathcal{H}_{s}(n-1,p(n-1))\text{ has a perfect matching})\geq\frac{1}{2}.\]
Now, define \(q(n):=\lceil 2\log_{2}(n)\rceil p(n-1)=\Theta\left(\frac{\log^{2}n}{n^{s-1}}\right)\). In the following, we show that \(\mathcal{H}_{s}(n,q(n))\) satisfies both (i) and (ii) w.h.p. provided \(n\equiv 1\pmod{s}\), which will then imply the statement of the lemma.
Imagine sampling a random \(s\)-uniform hypergraph \(\tilde{H}\) on vertex-set \([n]\) as the union of \(l:=\lceil 2\log_{2}(n)\rceil\) independently generated instances of \(\mathcal{H}_{s}(n,p(n-1))\), which we call \(H_{1},\ldots,H_{l}\). Note that the distribution of the random hypergraph \(\tilde{H}=H_{1}\cup\cdots\cup H_{l}\) follows that of a binomial random hypergraph \(\mathcal{H}_{s}(n,q^{\prime}(n))\) with edge-probability \(q^{\prime}(n)=1-(1-p(n-1))^{l}=1-(1-p(n-1))^{\lceil 2\log_{2}(n)\rceil}\leq q(n)\). Now fix a vertex \(v\in[n]\). From the above we have, since the property of having a perfect matching is monotone,
\[\mathbb{P}(\mathcal{H}_{s}(n,q(n))-v\text{ has no perfect matching})\] \[\leq \mathbb{P}(\mathcal{H}_{s}(n,q^{\prime}(n))-v\text{ has no perfect matching})\] \[= \mathbb{P}(\tilde{H}-v\text{ has no perfect matching})\] \[\leq \prod_{i=1}^{l}\mathbb{P}(H_{i}-v\text{ has no perfect matching}).\]
Since for every \(i\) the distribution of \(H_{i}-v\) follows that of an \(\mathcal{H}_{s}(n-1,p(n-1))\), from the above we have that \(\mathbb{P}(H_{i}-v\text{ has no perfect matching})\leq\frac{1}{2}\) for \(i=1,\ldots,l\). Altogether, it follows that
\[\mathbb{P}(\mathcal{H}_{s}(n,q(n))-v\text{ has no perfect matching})\leq\left(\frac{1}{2}\right)^{2\log_{2}(n)}=\frac{1}{n^{2}}.\]
Using a union bound over all choices of \(v\), this implies that
\[\mathbb{P}\left(\bigcup_{v\in[n]}\{\mathcal{H}_{s}(n,q(n))-v\text{ has no perfect matching}\}\right)\leq\frac{n}{n^{2}}=\frac{1}{n}.\]
Thus, w.h.p. \(\mathcal{H}_{s}(n,q(n))\) satisfies property (i).
Let us now move on to property (ii). For that purpose, we want to show that w.h.p. for every number \(f=1,\ldots,m\), no subset of \([n]\) of size \((s-1)f-1\) contains \(f\) hyperedges from \(\mathcal{H}_{s}(n,q(n))\). Let \(T(s,f)\) denote the number of labelled \(s\)-uniform hypergraphs on \((s-1)f-1\) vertices with \(f\) hyperedges. Using a simple union bound over all choices of subsets of \([n]\) of size \((s-1)f-1\) and the possible configurations of edges on those subsets, we obtain that the probability that there exist \(f\) hyperedges in \(\mathcal{H}_{s}(n,q(n))\) spanning fewer than \((s-1)f\) vertices is at most
\[\binom{n}{(s-1)f-1}\cdot T(s,f)\cdot q(n)^{f}=O\left(n^{(s-1)f-1}\cdot\left( \frac{\log^{2}n}{n^{s-1}}\right)^{f}\right)=O\left(\frac{\log^{2f}n}{n}\right).\]
Thus, w.h.p. we have that \(\mathcal{H}_{s}(n,q(n))\) also satisfies item (ii) of the lemma. This concludes the proof.
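Both properties of Lemma 3 are finitely checkable for a concrete hypergraph. The following Python sketch (ours, purely illustrative) implements the two checks by exhaustive search and runs them on the complete \(3\)-uniform hypergraph on \(7\) vertices, where property (i) holds trivially while property (ii) fails; this illustrates the tension that the random construction resolves for large \(n\).

```python
from itertools import combinations

def has_perfect_matching(vertices, edges):
    """Backtracking search: always try to cover the smallest uncovered vertex."""
    if not vertices:
        return True
    v = min(vertices)
    return any(has_perfect_matching(vertices - set(e), edges)
               for e in edges if v in e and set(e) <= vertices)

def property_i(n, edges):
    """(i): H - v has a perfect matching for every vertex v."""
    return all(has_perfect_matching(set(range(n)) - {v},
                                    [e for e in edges if v not in e])
               for v in range(n))

def property_ii(edges, s, m):
    """(ii): every set F of at most m hyperedges spans >= (s-1)|F| vertices."""
    return all(len(set().union(*F)) >= (s - 1) * f
               for f in range(1, min(m, len(edges)) + 1)
               for F in combinations(edges, f))

n, s = 7, 3
K = list(combinations(range(n), s))   # complete 3-uniform hypergraph, n = 1 mod s
print(property_i(n, K))               # True: any 6 vertices split into two triples
print(property_ii(K, s, 3))           # False: {0,1,2}, {0,1,3}, {0,2,3} span 4 < 6
```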
Next, we would like to use the hypergraphs from the previous lemma to construct graphs that satisfy the conditions of Theorem 1. To do so, we first need to prove a technical result about the number of edges that can be spanned by any \((s+1)\)-subset of vertices in the \(2\)-section of these hypergraphs, namely Lemma 5. To prove Lemma 5, we first establish an auxiliary result on hypergraphs in the form of Lemma 4, which in turn needs the following elementary but important observation.
**Observation 1**.: _Let \(H=(V,E)\) be a connected hypergraph (that is, \(G_{2}^{H}\) is connected). Then_
\[|V|\leq 1+\sum_{e\in E}{(|e|-1)}.\]
Proof.: Let \(T\) be a spanning tree of \(G_{2}^{H}\). For every edge \(t\in E(T)\), assign a hyperedge \(e(t)\in E\) such that \(t\subseteq e(t)\). For each \(e\in E\), let \(T_{e}\subseteq T\) be the forest induced by the edges \(\{t\in E(T)|e(t)=e\}\). Clearly, \(V(T_{e})\subseteq e\) for every \(e\in E\), and thus
\[|V|-1=|E(T)|=\sum_{e\in E}{|E(T_{e})|}\leq\sum_{e\in E}\max\{0,|V(T_{e})|-1\} \leq\sum_{e\in E}{(|e|-1)},\]
as desired.
**Lemma 4**.: _Let \(H=(V,E)\) be a hypergraph with \(|V|\geq 4\) and \(V\notin E\). Suppose further that for every set \(F\subseteq E\) of hyperedges, we have_
\[\left|\bigcup_{e\in F}e\right|\geq\sum_{e\in F}{(|e|-1)}.\]
_Then there exists a set \(W\subseteq V\) of size at most \(2\) such that \(G_{2}^{H}-W\) is disconnected._
Proof.: Suppose first that there exists at least one hyperedge \(e_{0}\in E\) with \(|e_{0}|\geq 3\). By assumption, \(V\notin E\), and thus there exists some vertex \(v\in V\setminus e_{0}\). Let us now consider the graph \(G=G_{2}^{H-e_{0}}\), the \(2\)-section of the hypergraph \(H-e_{0}\) obtained from \(H\) by deleting \(e_{0}\). Let \(C\) be the vertex-set of the unique connected component of \(G\) that contains \(v\). We claim that \(|C\cap e_{0}|\leq 2\). To that end, define \(F\) as the set of hyperedges of \(H\) that are contained in \(C\). Clearly, \(e_{0}\notin F\), since \(v\in C\) and \(v\notin e_{0}\). Note that, since every hyperedge \(e\in E\setminus\{e_{0}\}\) induces a clique in \(G\), we
have that \(\bigcup_{e\in F}e=C\) and that the hypergraph \(H^{\prime}=(C,F)\) is connected. These facts imply via Observation 1 that
\[\left|\bigcup_{e\in F}e\right|=|C|\leq 1+\sum_{e\in F}{(|e|-1)}.\]
On the other hand, by applying the assumption of the lemma to the edge-set \(F\cup\{e_{0}\}\), we find
\[\sum_{e\in F\cup\{e_{0}\}}{(|e|-1)}\leq\left|e_{0}\cup\bigcup_{e\in F}e\right| =|e_{0}\cup C|=|e_{0}|+|C|-|e_{0}\cap C|.\]
Subtracting \((|e_{0}|-1)\) from both sides yields
\[\sum_{e\in F}{(|e|-1)}\leq|C|+1-|e_{0}\cap C|.\]
Plugging the above into the first inequality we get \(|C|\leq|C|+2-|e_{0}\cap C|\), and thus \(|e_{0}\cap C|\leq 2\), as claimed. We now set \(W:=e_{0}\cap C\) and claim that \(G_{2}^{H}-W\) is disconnected. Indeed, it follows readily from the definition of \(C\) that no edge in \(G_{2}^{H}-W\) connects a vertex in \(C\setminus W=C\setminus e_{0}\) to a vertex in \(V\setminus C\). Further, since \(v\in C\setminus e_{0}\) we have that the first set is non-empty, and since \(|V\setminus C|\geq|e_{0}\setminus C|=|e_{0}|-|e_{0}\cap C|\geq 3-2=1>0\), the second set is also non-empty. Thus, \(G_{2}^{H}-W\) is indeed disconnected, which concludes the proof in this case.
For the second case, assume that \(|e|\leq 2\) for every \(e\in E\). W.l.o.g. (since they do not have an effect on \(G_{2}^{H}\)) we may assume that \(H\) contains no hyperedges of size \(1\), i.e., \(H\) is a graph and \(G_{2}^{H}=H\). If \(H\) has a vertex of degree at most \(1\), then the statement of the lemma trivially holds, so suppose that \(H\) has minimum degree at least \(2\). The condition of the lemma now yields \(|E|=\sum_{e\in E}{(|e|-1)}\leq|\bigcup_{e\in E}e|\leq|V|\). This directly implies via the handshake-lemma that \(H\) is a \(2\)-regular graph. It is trivial to see that every such graph on at least \(4\) vertices contains a cut-set \(W\) consisting of at most \(2\) vertices, and this concludes the proof.
**Lemma 5**.: _Let \(s\geq 3\) be an integer, let \(H\) be an \(s\)-uniform hypergraph such that \(\left|\bigcup_{e\in F}e\right|\geq(s-1)|F|\) holds for all \(F\subseteq E(H)\) with \(|F|<2^{s+1}\). Let \(G\) be the \(2\)-section of \(H\). Then, for every set \(X\subseteq V(G)\) of size \(s+1\), it holds that \(|E(G[X])|\leq{s\choose 2}+2\)._
Proof.: Let \(H_{X}\) denote the hypergraph obtained by _restricting_\(H\) to \(X\), that is, \(V(H_{X}):=X\) and \(E(H_{X})=\{e\cap X|e\in E(H),|e\cap X|\geq 2\}\). Note that the \(2\)-section of \(H_{X}\) equals \(G[X]\). Further note that for every subset \(F\subseteq E(H)\) of size less than \(2^{s+1}\), it holds that
\[\left|\bigcup_{e\in F}{(e\cap X)}\right|\geq\left|\bigcup_{e\in F}e\right|-\sum_{e\in F}{|e\setminus X|}\geq(s-1)|F|-\sum_{e\in F}{\left(s-|e\cap X|\right)}=\sum_{e\in F}{\left(|e\cap X|-1\right)}.\]
Since every hyperedge of \(H_{X}\) is of the form \(e\cap X\) for some \(e\in E(H)\) and \(|E(H_{X})|<2^{s+1}\), this directly implies that \(\left|\bigcup_{e\in F}e\right|\geq\sum_{e\in F}{(|e|-1)}\) for every subset \(F\subseteq E(H_{X})\). Moreover, \(|X|=s+1\geq 4\) and \(X\notin E(H_{X})\) since \(H\) is \(s\)-uniform. We can therefore apply Lemma 4, which implies that there exists a set \(W\subseteq X\) of size at most \(2\) such that \(G[X]-W\) is disconnected. Thus, there exist disjoint non-empty sets \(A,B\) such that \(A\cup B=X\setminus W\) and no edge in \(G[X]\) connects \(A\) and \(B\). Note that as \(|A|,|B|\geq 1\) and \(|A|+|B|=|X|-|W|\geq(s+1)-2=s-1\), we have \(|A||B|\geq s-2\). We conclude that
\[|E(G[X])|\leq{s+1\choose 2}-|A||B|\leq{s+1\choose 2}-(s-2)={s\choose 2}+2.\]
This concludes the proof.
Proof of Theorem 1.: Let an integer \(r\geq 1\) be given. Define \(s:=r+3\) and \(m:=2^{s+1}\). By Lemma 3 there exists some \(n_{0}\in\mathbb{N}\) such that for every integer \(n\geq n_{0}\) with \(n\equiv 1\;(\mathrm{mod}\;s)\), there exists an \(s\)-uniform hypergraph \(H\) on \(n\) vertices with the following properties.
* For every \(v\in V(H)\), the hypergraph \(H-v\) admits a perfect matching.
* For every set \(F\subseteq E(H)\) of hyperedges with \(|F|\leq m=2^{s+1}\), we have \[\left|\bigcup_{e\in F}e\right|\geq(s-1)|F|.\]
Define \(k_{0}:=\lceil\frac{n_{0}-1}{s}\rceil+1\) and let \(k\geq k_{0}\) be any given integer. Let \(H\) be an \(s\)-uniform hypergraph on \(n:=s(k-1)+1\geq n_{0}\) vertices satisfying the properties above. Finally, we define a graph \(G\) as the complement of the \(2\)-section \(G_{2}^{H}\) of \(H\). We claim that it satisfies the properties required by the theorem, that is,
* \(G-v\) is \((k-1)\)-colorable for every \(v\in V(G)\), and
* for every set \(R\subseteq E(G)\) of edges with \(|R|\leq r\), we have \(\chi(G-R)\geq k\).
To verify the first statement, consider any vertex \(v\) and a perfect matching of \(H-v\). Since \(H\) is \(s\)-uniform, the perfect matching forms a partition of \(V(H)\setminus\{v\}=V(G)\setminus\{v\}\) into \(\frac{n-1}{s}=k-1\) sets, each of which is a hyperedge of \(H\) and thus an independent set in \(G\). Hence we have \(\chi(G-v)\leq k-1\).
Now let \(R\subseteq E(G)\) with \(|R|\leq r\) be given. We claim that \(\alpha(G-R)\leq s\), i.e., that there exists no independent set in \(G-R\) of size \(s+1\), which will then imply \(\chi(G-R)\geq\frac{n}{\alpha(G-R)}\geq\frac{n}{s}>k-1\), as desired. Suppose towards a contradiction that there is some \(X\subseteq V(G)\) of size \(s+1\) that is independent in \(G-R\). Then \(G[X]\) contains at most \(r\) edges, and thus its complement graph, namely \(G_{2}^{H}[X]\), contains at least \(\binom{s+1}{2}-r=\binom{s}{2}+s-r=\binom{s}{2}+3\) edges. However, by Lemma 5 applied to \(H\), we find that \(|E(G_{2}^{H}[X])|\leq\binom{s}{2}+2\), a contradiction. This shows that indeed, \(\alpha(G-R)\leq s\) for every \(R\subseteq E(G)\) with \(|R|\leq r\), concluding the proof.
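The construction itself is easy to carry out in code. The sketch below (ours, purely illustrative) uses a small hand-picked \(s\)-uniform hypergraph \(H\), the consecutive triples modulo \(7\), which satisfies property (i) but not property (ii); the resulting graph \(G\) happens to be the \(7\)-cycle, so the sketch only demonstrates the first half of the argument, namely how a perfect matching of \(H-v\) yields a proper \((k-1)\)-coloring of \(G-v\).

```python
from itertools import combinations

n, s = 7, 3                                   # n = s*(k-1) + 1 with k = 3
H = [tuple(sorted((i % n, (i + 1) % n, (i + 2) % n))) for i in range(n)]

# G is the complement of the 2-section of H; here G is the 7-cycle
# 0-3-6-2-5-1-4-0, the classic vertex-critical 3-chromatic graph.
section = {frozenset(p) for e in H for p in combinations(e, 2)}
G = [p for p in combinations(range(n), 2) if frozenset(p) not in section]

def perfect_matching(vertices, edges):
    """Return a list of disjoint hyperedges covering `vertices`, or None."""
    if not vertices:
        return []
    v = min(vertices)
    for e in edges:
        if v in e and set(e) <= vertices:
            rest = perfect_matching(vertices - set(e), edges)
            if rest is not None:
                return [e] + rest
    return None

for v in range(n):
    matching = perfect_matching(set(range(n)) - {v},
                                [e for e in H if v not in e])
    color = {u: i for i, e in enumerate(matching) for u in e}
    # every hyperedge is a clique in the 2-section, hence independent in G
    assert all(color[a] != color[b] for a, b in G if v not in (a, b))
    print(f"G - {v}: properly colored with {len(matching)} = k-1 colors")
```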
|
2305.16841 | Differentiable Random Partition Models | Partitioning a set of elements into an unknown number of mutually exclusive
subsets is essential in many machine learning problems. However, assigning
elements, such as samples in a dataset or neurons in a network layer, to an
unknown and discrete number of subsets is inherently non-differentiable,
prohibiting end-to-end gradient-based optimization of parameters. We overcome
this limitation by proposing a novel two-step method for inferring partitions,
which allows its usage in variational inference tasks. This new approach
enables reparameterized gradients with respect to the parameters of the new
random partition model. Our method works by inferring the number of elements
per subset and, second, by filling these subsets in a learned order. We
highlight the versatility of our general-purpose approach on three different
challenging experiments: variational clustering, inference of shared and
independent generative factors under weak supervision, and multitask learning. | Thomas M. Sutter, Alain Ryser, Joram Liebeskind, Julia E. Vogt | 2023-05-26T11:45:10Z | http://arxiv.org/abs/2305.16841v2 | # Differentiable Random Partition Models
###### Abstract
Partitioning a set of elements into an unknown number of mutually exclusive subsets is essential in many machine learning problems. However, assigning elements, such as samples in a dataset or neurons in a network layer, to an unknown and discrete number of subsets is inherently non-differentiable, prohibiting end-to-end gradient-based optimization of parameters. We overcome this limitation by proposing a novel two-step method for inferring partitions, which allows its usage in variational inference tasks. This new approach enables reparameterized gradients with respect to the parameters of the new random partition model. Our method works by inferring the number of elements per subset and, second, by filling these subsets in a learned order. We highlight the versatility of our general-purpose approach on three different challenging experiments: variational clustering, inference of shared and independent generative factors under weak supervision, and multitask learning.
## 1 Introduction
Partitioning a set of elements into subsets is a classical mathematical problem that attracted much interest over the last few decades (Rota, 1964; Graham et al., 1989). A partition over a given set is a collection of non-overlapping subsets such that their union results in the original set. In machine learning (ML), partitioning a set of elements into different subsets is essential for many applications, such as clustering (Bishop and Svensen, 2004) or classification (De la Cruz-Mesia et al., 2007).
Random partition models (RPM, Hartigan, 1990) define a probability distribution over the space of partitions. RPMs can explicitly leverage the relationship between elements of a set, as they do not necessarily assume _i.i.d._ set elements. On the other hand, most existing RPMs are intractable for large datasets (MacQueen, 1967; Plackett, 1975; Pitman, 1996) and lack a reparameterization scheme, prohibiting their direct use in gradient-based optimization frameworks.
In this work, we propose the differentiable random partition model (DRPM), a fully-differentiable relaxation for RPMs that allows reparametrizable sampling. The DRPM follows a two-stage procedure: first, we model the number of elements per subset, and second, we learn an ordering of the elements with which we fill the elements into the subsets. The DRPM enables the integration of partition models into state-of-the-art ML frameworks and learning RPMs from data using stochastic optimization.
We evaluate our approach in three experiments, demonstrating the proposed DRPM's versatility and advantages. First, we apply the DRPM to a variational clustering task, highlighting how the reparametrizable sampling of partitions allows us to learn a novel kind of Variational Autoencoder (VAE, Kingma and Welling, 2014). By leveraging potential dependencies between samples in a dataset, DRPM-based clustering overcomes the simplified _i.i.d._ assumption of previous works, which used categorical priors (Jiang et al., 2016). In our second experiment, we demonstrate how to retrieve
sets of shared and independent generative factors of paired images using the proposed DRPM. In contrast to previous works (Bouchacourt et al., 2018; Hosoya, 2018; Locatello et al., 2020), which rely on strong assumptions or heuristics, the DRPM enables end-to-end inference of generative factors. Finally, we perform multitask learning (MTL) by using the DRPM as a building block in a deterministic pipeline. We show how the DRPM learns to assign subsets of network neurons to specific tasks. The DRPM can infer the subset size per task based on its difficulty, overcoming the tedious work of finding optimal loss weights (Kurin et al., 2022; Xin et al., 2022).
To summarize, we introduce the DRPM, a novel differentiable and reparametrizable relaxation of RPMs. In extensive experiments, we demonstrate the versatility of the proposed method by applying the DRPM to clustering, inference of generative factors, and multitask learning.
## 2 Related Work
**Random Partition Models.** Previous works on RPMs include product partition models (Hartigan, 1990), species sampling models (Pitman, 1996), and model-based clustering approaches (Bishop and Svensen, 2004). Further, Lee and Sang (2022) investigate the balancedness of subset sizes of RPMs. They all require tedious manual adjustment, are non-differentiable, and are, therefore, unsuitable for modern ML pipelines. A fundamental RPM application is clustering, where the goal is to partition a given dataset into different subsets, the clusters. Previous works in variational clustering (Jiang et al., 2016; Dilokthanakul et al., 2016; Manduchi et al., 2021) implicitly define RPMs to perform clustering. They compute partitions in a variational fashion by making _i.i.d._ assumptions about the samples in the dataset and imposing soft assignments of the clusters to data points during training. A problem related to set partitioning is the earth mover's distance problem (EMD, Monge, 1781; Rubner et al., 2000). However, EMD aims to assign a set's elements to different subsets based on a cost function and given subset sizes. Iterative solutions to the problem exist (Sinkhorn, 1964), and various methods have recently been proposed, e.g., for document ranking (Adams and Zemel, 2011) or permutation learning (Santa Cruz et al., 2017; Mena et al., 2018).
**Differentiable and Reparameterizable Discrete Distributions.** Following the introduction of the Gumbel-Softmax trick (GST, Jang et al., 2016; Maddison et al., 2017), interest in research around continuous relaxations for discrete distributions and non-differentiable algorithms rose. The GST enabled the reparameterization of categorical distributions and their integration into gradient-based optimization pipelines. Based on the same trick, Xie and Ermon (2019) describe a top-\(k\) elements selection procedure, and Sutter et al. (2023) propose a differentiable formulation for the multivariate hypergeometric distribution. Multiple works on differentiable sorting procedures and permutation matrices have been proposed, e.g., Linderman et al. (2018); Prillo and Eisenschlos (2020); Petersen et al. (2021). Further, Grover et al. (2019) described the distribution over permutation matrices \(p(\pi)\) for a permutation matrix \(\pi\) using the Plackett-Luce distribution (PL, Luce, 1959; Plackett, 1975). Prillo and Eisenschlos (2020) proposed a computationally simpler variant of Grover et al. (2019).
Figure 1: Illustration of the proposed DRPM method. We first sample a permutation matrix \(\pi\) and a set of subset sizes \(\mathbf{n}\) separately in two stages. We then use \(\mathbf{n}\) and \(\pi\) to generate the assignment matrix \(Y\), the matrix representation of a partition \(\rho\).
## 3 Preliminaries
**Set Partitions.** A partition \(\rho=(\mathcal{S}_{1},\ldots,\mathcal{S}_{K})\) of a set \([n]=\{1,\ldots,n\}\) with \(n\) elements is a collection of \(K\) subsets \(\mathcal{S}_{k}\subseteq[n]\) where \(K\) is _a priori_ unknown [Mansour and Schork, 2016]. For a partition \(\rho\) to be valid, it must hold that
\[\mathcal{S}_{1}\cup\cdots\cup\mathcal{S}_{K}=[n]\ \ \text{and}\ \ \forall k\neq l:\ \mathcal{S}_{k}\cap\mathcal{S}_{l}=\emptyset \tag{1}\]
In other words, every element \(i\in[n]\) has to be assigned to precisely one subset \(\mathcal{S}_{k}\). We denote the size of the \(k\)-th subset \(\mathcal{S}_{k}\) as \(n_{k}=|\mathcal{S}_{k}|\). Alternatively, we can describe a partition \(\rho\) through an assignment matrix \(Y=[\mathbf{y}_{1},\ldots,\mathbf{y}_{K}]^{T}\in\{0,1\}^{K\times n}\). Every row \(\mathbf{y}_{k}\in\{0,1\}^{1\times n}\) is a multi-hot vector, where \(\mathbf{y}_{ki}=1\) assigns element \(i\) to subset \(\mathcal{S}_{k}\).
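The two representations of a partition are easy to convert between. The following Python sketch (ours, illustrative; it uses \(0\)-indexed elements rather than \([n]=\{1,\ldots,n\}\)) encodes a partition \(\rho\) as an assignment matrix \(Y\) and checks validity in the sense of equation (1):

```python
import numpy as np

def partition_to_Y(subsets, n):
    """Encode a partition (list of K subsets of {0, ..., n-1}) as a K x n matrix."""
    Y = np.zeros((len(subsets), n), dtype=int)
    for k, S in enumerate(subsets):
        Y[k, list(S)] = 1
    assert (Y.sum(axis=0) == 1).all()   # eq. (1): each element in exactly one S_k
    return Y

rho = [{0, 3}, set(), {1, 2, 4}]   # K = 3 subsets of a 5-element set
Y = partition_to_Y(rho, n=5)
print(Y)                 # rows are the multi-hot subset indicators y_k
print(Y.sum(axis=1))     # subset sizes n_k = |S_k|: [2 0 3]
```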
**Probability distribution over subset sizes.** The multivariate non-central hypergeometric distribution (MVHG) describes sampling without replacement and allows skewing the importance of groups with an additional importance parameter \(\mathbf{\omega}\) [Fisher, 1935, Wallenius, 1963, Chesson, 1976]. The MVHG is an urn model and is described by the number of different groups \(K\in\mathbb{N}\), the number of elements in the urn of every group \(\mathbf{m}=[m_{1},\ldots,m_{K}]\in\mathbb{N}^{K}\), the total number of elements in the urn \(\sum_{k=1}^{K}m_{k}\in\mathbb{N}\), the number of samples to draw from the urn \(n\in\mathbb{N}_{0}\), and the importance factor for every group \(\mathbf{\omega}=[\omega_{1},\ldots,\omega_{K}]\in\mathbb{R}_{0+}^{K}\) [Johnson, 1987]. Then, the probability of sampling \(\mathbf{n}=\{n_{1},\ldots,n_{K}\}\), where \(n_{k}\) describes the number of elements drawn from group \(k\), is
\[p(\mathbf{n};\mathbf{\omega},\mathbf{m})=\frac{1}{P_{0}}\prod_{k=1}^{K}\binom{m_{k}}{n_{k }}\omega_{k}^{n_{k}} \tag{2}\]
where \(P_{0}\) is a normalization constant. Hence, the MVHG \(p(\mathbf{n};\mathbf{\omega},\mathbf{m})\) allows us to model dependencies between different elements of a set since drawing one element from the urn influences the probability of drawing one of the remaining elements, creating interdependence between them. For the rest of the paper, we assume \(\forall\ m_{k}\in\mathbf{m}:m_{k}=n\). We thus use the shorthand \(p(\mathbf{n};\mathbf{\omega})\) to denote the density of the MVHG. We refer to Appendix A.1 for more details.
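Equation (2) can be evaluated directly for small problems by computing the normalization constant \(P_{0}\) via brute-force enumeration of the support. A minimal Python sketch (ours, illustrative only; the parameter values are arbitrary):

```python
from itertools import product
from math import comb

def weight(n_vec, omega, m):
    """Unnormalized term of eq. (2): prod_k C(m_k, n_k) * omega_k ** n_k."""
    w = 1.0
    for nk, ok, mk in zip(n_vec, omega, m):
        w *= comb(mk, nk) * ok ** nk
    return w

def mvhg_pmf(n_vec, omega, m):
    """Eq. (2); the constant P0 sums the weights over all valid n."""
    n = sum(n_vec)
    support = [nv for nv in product(*(range(mk + 1) for mk in m))
               if sum(nv) == n]
    P0 = sum(weight(nv, omega, m) for nv in support)
    return weight(n_vec, omega, m) / P0

omega, m, n = [1.0, 2.0, 0.5], [4, 4, 4], 4     # K = 3 groups, draw 4 elements
probs = {nv: mvhg_pmf(nv, omega, m)
         for nv in product(range(5), repeat=3) if sum(nv) == n}
assert abs(sum(probs.values()) - 1.0) < 1e-9
print(max(probs, key=probs.get))   # mode (2, 2, 0): the down-weighted group
                                   # receives no draws
```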
**Probability distribution over Permutation Matrices.** Let \(p(\pi)\) denote a distribution over permutation matrices \(\pi\in\{0,1\}^{n\times n}\). A permutation matrix \(\pi\) is doubly stochastic [Marcus, 1960], meaning that its row and column vectors sum to \(1\). This property allows us to use \(\pi\) to describe an order over a set of \(n\) elements, where \(\pi_{ij}=1\) means that element \(j\) is ranked at position \(i\) in the imposed order. In this work, we assume \(p(\pi)\) to be parameterized by scores \(\mathbf{s}\in\mathbb{R}_{+}^{n}\), where each score \(s_{i}\) corresponds to an element \(i\). The order given by sorting \(\mathbf{s}\) in decreasing order corresponds to the most likely permutation in \(p(\pi;\mathbf{s})\). Sampling from \(p(\pi;\mathbf{s})\) can be achieved by resampling the scores as \(\tilde{s}_{i}=\beta\log s_{i}+g_{i}\) where \(g_{i}\sim\text{Gumbel}(0,\beta)\) for fixed scale \(\beta\), and sorting them in decreasing order. Hence, resampling scores \(\mathbf{s}\) enables the resampling of permutation matrices \(\pi\). The probability over orderings \(p(\pi;\mathbf{s})\) is then given by [Thurstone, 1927, Luce, 1959, Plackett, 1975, Yellott, 1977]
\[p(\pi;\mathbf{s})=p((\pi\tilde{\mathbf{s}})_{1}\geq\cdots\geq(\pi\tilde{\mathbf{s}})_{n}) =\frac{(\pi\mathbf{s})_{1}}{Z}\frac{(\pi\mathbf{s})_{2}}{Z-(\pi\mathbf{s})_{1}}\cdots\frac{ (\pi\mathbf{s})_{n}}{Z-\sum_{j=1}^{n-1}(\pi\mathbf{s})_{j}} \tag{3}\]
where \(\pi\) is a permutation matrix and \(Z=\sum_{i=1}^{n}s_{i}\). The resulting distribution is a Plackett-Luce (PL) distribution [Luce, 1959, Plackett, 1975] if and only if the scores \(\mathbf{s}\) are perturbed with noise drawn from Gumbel distributions with identical scales [Yellott, 1977]. For more details, we refer to Appendix A.2.
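Both the Gumbel-based sampler and Equation (3) are easy to realize numerically. A minimal sketch (ours, representing orderings by element indices rather than explicit permutation matrices, with \(\beta=1\)):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_permutation(s, beta=1.0):
    """Sample an ordering from p(pi; s) by perturbing log-scores with
    Gumbel noise and sorting in decreasing order."""
    g = rng.gumbel(loc=0.0, scale=beta, size=len(s))
    s_tilde = beta * np.log(s) + g
    return np.argsort(-s_tilde)  # element indices, best-ranked first

def pl_probability(order, s):
    """Probability of a given ordering under p(pi; s), Equation (3)."""
    s = np.asarray(s, dtype=float)
    Z = s.sum()
    p, used = 1.0, 0.0
    for i in order:          # elements drawn without replacement
        p *= s[i] / (Z - used)
        used += s[i]
    return p

s = np.array([4.0, 2.0, 1.0])
print(sample_permutation(s))          # the mode [0 1 2] is most frequent
print(pl_probability([0, 1, 2], s))   # (4/7) * (2/3) * 1 ~= 0.381
```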
## 4 A two-stage Approach to Random Partition Models
We propose the DRPM \(p(Y;\mathbf{\omega},\mathbf{s})\), a differentiable and reparameterizable two-stage Random Partition Model (RPM). The proposed formulation separately infers the number of elements per subset, \(\mathbf{n}\in\mathbb{N}_{0}^{K}\) with \(\sum_{k=1}^{K}n_{k}=n\), and the assignment of elements to subsets \(\mathcal{S}_{k}\) by inducing an order on the \(n\) elements and filling \(\mathcal{S}_{1},...,\mathcal{S}_{K}\) sequentially in this order. To model the order of the elements, we use a permutation matrix \(\pi=[\mathbf{\pi}_{1},\ldots,\mathbf{\pi}_{n}]^{T}\in\{0,1\}^{n\times n}\), from which we infer \(Y\) by sequentially summing up rows according to \(\mathbf{n}\). Note that the doubly-stochastic property of all permutation matrices \(\pi\) ensures that the columns of \(Y\) remain one-hot vectors, assigning every
element \(i\) to precisely one of the \(K\) subsets. At the same time, the \(k\)-th row of \(Y\) corresponds to an \(n_{k}\)-hot vector \(\mathbf{y}_{k}\) and therefore serves as a subset selection vector, i.e.
\[\mathbf{y}_{k}=\sum_{i=\nu_{k}+1}^{\nu_{k}+n_{k}}\mathbf{\pi}_{i},\quad\text{ where }\ \nu_{k}=\sum_{\iota=1}^{k-1}n_{\iota} \tag{4}\]
such that \(Y=[\mathbf{y}_{1},\dots,\mathbf{y}_{K}]^{T}\). Additionally, Figure 1 provides an illustrative example. Note that \(K\) defines the maximum number of possible subsets, and not the effective number of non-empty subsets, because we allow \(\mathcal{S}_{k}\) to be the empty set \(\emptyset\)(Mansour and Schork, 2016). We base the following Proposition 4.1 on the MVHG distribution \(p(\mathbf{n};\mathbf{\omega})\) for the subset sizes \(\mathbf{n}\) and the PL distribution \(p(\pi;\mathbf{s})\) for assigning the elements to subsets. However, the proposed two-stage approach to RPMs is not restricted to these two classes of probability distributions.
**Proposition 4.1** (Two-stage Random Partition Model).: _Given a probability distribution over subset sizes \(p(\mathbf{n};\mathbf{\omega})\) with \(\mathbf{n}\in\mathbb{N}_{0}^{K}\) and distribution parameters \(\mathbf{\omega}\in\mathbb{R}_{+}^{K}\) and a PL probability distribution over random orderings \(p(\pi;\mathbf{s})\) with \(\pi\in\{0,1\}^{n\times n}\) and distribution parameters \(\mathbf{s}\in\mathbb{R}_{+}^{n}\), the probability mass function \(p(Y;\mathbf{\omega},\mathbf{s})\) of the two-stage RPM is given by_
\[p(Y;\mathbf{\omega},\mathbf{s})=p(\mathbf{y}_{1},\dots,\mathbf{y}_{K};\mathbf{\omega},\mathbf{s})=p(\mathbf{n};\mathbf{\omega})\sum_{\pi\in\Pi_{Y}}p(\pi;\mathbf{s}) \tag{5}\]
_where \(\Pi_{Y}=\{\pi:\mathbf{y}_{k}=\sum_{i=\nu_{k}+1}^{\nu_{k}+n_{k}}\mathbf{\pi}_{i},k=1, \dots,K\}\), and \(\mathbf{y}_{k}\) and \(\nu_{k}\) as in Equation (4)._
In the following, we outline the proof of Proposition 4.1 and refer to Appendix B for a formal derivation. We calculate \(p(Y;\mathbf{\omega},\mathbf{s})\) as a probability of subsets \(p(\mathbf{y}_{1},\dots,\mathbf{y}_{K};\mathbf{\omega},\mathbf{s})\), which we compute sequentially over subsets, i.e.
\[p(\mathbf{y}_{1},\dots,\mathbf{y}_{K};\mathbf{\omega},\mathbf{s})=p(\mathbf{y}_{1};\mathbf{\omega}, \mathbf{s})\cdots p(\mathbf{y}_{K}\mid\mathbf{y}_{<K};\mathbf{\omega},\mathbf{s}), \tag{6}\]
where \(\mathbf{y}_{<k}=[\mathbf{y}_{1},\dots,\mathbf{y}_{k-1}]\) and
\[p(\mathbf{y}_{k}\mid\mathbf{y}_{<k};\mathbf{\omega},\mathbf{s})=p(n_{k}\mid n_{<k};\mathbf{\omega })\sum_{\pi\in\Pi_{\mathbf{y}_{k}}}p(\bar{\pi}\mid n_{k},\mathbf{y}_{<k};\mathbf{s}), \tag{7}\]
where \(\Pi_{\mathbf{y}_{k}}\) in Equation (7) is the set of all subset permutations of elements \(i\in\mathcal{S}_{k}\). A subset permutation matrix \(\bar{\pi}\) represents an ordering over only \(n_{k}\) out of the total \(n\) elements. The probability \(p(\mathbf{y}_{k}\mid\mathbf{y}_{<k};\mathbf{\omega},\mathbf{s})\) describes the probability of a subset of a given size \(n_{k}\) by marginalizing over the probabilities of all subset permutations \(p(\bar{\pi}\mid n_{k},\mathbf{y}_{<k};\mathbf{s})\). Hence, the sum over all \(p(\bar{\pi}\mid n_{k},\mathbf{y}_{<k};\mathbf{s})\) makes \(p(\mathbf{y}_{k}\mid\mathbf{y}_{<k};\mathbf{\omega},\mathbf{s})\) invariant to the ordering of elements \(i\in\mathcal{S}_{k}\) (Xie and Ermon, 2019). Note that, in a slight abuse of notation, we use \(p(\bar{\pi}\mid n_{k},\mathbf{y}_{<k};\mathbf{s})\) as the probability of a subset permutation \(\bar{\pi}\) given that there are \(n_{k}\) elements in \(\mathcal{S}_{k}\), and thus \(\bar{\pi}\in\{0,1\}^{n_{k}\times n}\).
The probability of a subset permutation matrix \(p(\bar{\pi}\mid n_{k},\mathbf{y}_{<k};\mathbf{s})\) describes the probability of drawing the elements \(i\in\mathcal{S}_{k}\) in the order defined by the subset permutation matrix \(\bar{\pi}\) given that the elements in \(\mathcal{S}_{<k}\) are already determined. Hence, we condition on the subsets \(\mathbf{y}_{<k}\). This property follows from Luce's choice axiom (Luce, 1959). Additionally, we condition on \(n_{k}\), the size of the subset \(\mathcal{S}_{k}\). The probability of a subset permutation is given by
\[p(\bar{\pi}\mid n_{k},\mathbf{y}_{<k};\mathbf{s})=\prod_{i=1}^{n_{k}}\frac{(\bar{\pi} \mathbf{s})_{i}}{Z_{k}-\sum_{j=1}^{i-1}(\bar{\pi}\mathbf{s})_{j}} \tag{8}\]
In contrast to the distribution over permutation matrices \(p(\pi;\mathbf{s})\) in Equation (3), we compute the product over \(n_{k}\) terms and have a different normalization constant \(Z_{k}\), which is the sum of the scores \(s_{i}\) over all elements that have not yet been assigned to one of the previous subsets \(\mathcal{S}_{1},\ldots,\mathcal{S}_{k-1}\). Although we induce an ordering over all elements \(i\) by using a permutation matrix \(\pi\), the probability \(p(\mathbf{y}_{k}\mid\mathbf{y}_{<k};\mathbf{\omega},\mathbf{s})\) is invariant to intra-subset orderings of elements \(i\in\mathcal{S}_{k}\). Finally, we arrive at Equation (5) by substituting Equation (7) into Equation (6), applying the definition of the conditional probability \(p(\mathbf{n};\mathbf{\omega})=\prod_{k=1}^{K}p(n_{k}\mid n_{<k};\mathbf{\omega})\), and reshuffling indices, \(\sum_{\pi\in\Pi_{Y}}p(\pi;\mathbf{s})=\prod_{k=1}^{K}\sum_{\bar{\pi}\in\Pi_{\mathbf{y}_{k}}}p(\bar{\pi}\mid n_{k},\mathbf{y}_{<k};\mathbf{s})\).
Note that in contrast to previous RPMs, which often need exponentially many distribution parameters (Plackett, 1975), the proposed two-stage approach to RPMs only needs \((n+K)\) parameters to create an RPM for \(n\) elements: the score parameters \(\mathbf{s}\in\mathbb{R}_{+}^{n}\) and the group importance parameters \(\mathbf{\omega}\in\mathbb{R}_{+}^{K}\).
Finally, to sample from the two-stage RPM of Proposition 4.1 we apply the following procedure: First sample \(\pi\sim p(\pi;\mathbf{s})\) and \(\mathbf{n}\sim p(\mathbf{n};\mathbf{\omega})\). From \(\pi\) and \(\mathbf{n}\), compute partition \(Y\) by summing the rows of \(\pi\) according to \(\mathbf{n}\) as described in Equation (4) and illustrated in Figure 1.
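Putting the pieces together, the sampling procedure can be sketched as follows (our illustration of Equation (4) and Figure 1; the subset-size sampler below is a uniform placeholder, whereas the DRPM uses the MVHG of Section 3):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_drpm(s, n_subsets, sample_sizes):
    """One sample from the two-stage RPM: draw an ordering pi from p(pi; s),
    draw subset sizes n via the user-supplied callable `sample_sizes`,
    then fill S_1, ..., S_K in order, Equation (4)."""
    n = len(s)
    # Stage 1: permutation matrix pi via the Gumbel trick (Section 3).
    order = np.argsort(-(np.log(s) + rng.gumbel(size=n)))
    pi = np.eye(n, dtype=int)[order]   # row i is one-hot for the i-th ranked element
    # Stage 2: subset sizes n = (n_1, ..., n_K) with sum = n.
    sizes = sample_sizes(n_subsets, n)
    # Assemble Y: y_k sums n_k consecutive rows of pi, Equation (4).
    bounds = np.concatenate([[0], np.cumsum(sizes)])
    Y = np.stack([pi[bounds[k]:bounds[k + 1]].sum(axis=0) for k in range(n_subsets)])
    return Y

# Placeholder size sampler (uniform composition); a real DRPM uses the MVHG.
def uniform_sizes(K, n):
    cuts = np.sort(rng.integers(0, n + 1, size=K - 1))
    return np.diff(np.concatenate([[0], cuts, [n]]))

print(sample_drpm(np.array([3.0, 1.0, 2.0, 1.0, 5.0]), n_subsets=3,
                  sample_sizes=uniform_sizes))
```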
### Approximating the Probability Mass Function
The number of permutations per subset \(|\Pi_{\mathbf{y}_{k}}|\) scales factorially with the subset size \(n_{k}\), i.e. \(|\Pi_{\mathbf{y}_{k}}|=n_{k}!\). Consequently, the number of valid permutation matrices \(|\Pi_{Y}|\) is given as a function of \(\mathbf{n}\), i.e.
\[|\Pi_{Y}|=\prod_{k=1}^{K}|\Pi_{\mathbf{y}_{k}}|=\prod_{k=1}^{K}n_{k}! \tag{9}\]
Although Proposition 4.1 describes a well-defined distribution for \(p(Y;\mathbf{\omega},\mathbf{s})\), it is in general computationally intractable due to Equation (9). In practice, we thus approximate \(p(Y;\mathbf{\omega},\mathbf{s})\) using the following Lemma.
**Lemma 4.2**.: \(p(Y;\mathbf{\omega},\mathbf{s})\) _can be upper and lower bounded as follows_
\[\forall\pi\in\Pi_{Y}:\;p(\mathbf{n};\mathbf{\omega})\,p(\pi;\mathbf{s})\;\leq\;p(Y;\mathbf{\omega},\mathbf{s})\;\leq\;|\Pi_{Y}|\,p(\mathbf{n};\mathbf{\omega})\max_{\tilde{\pi}}p(\tilde{\pi};\mathbf{s}) \tag{10}\]
We provide the proof in Appendix B. Note that from Equation (3) we see that \(\max_{\tilde{\pi}}p(\tilde{\pi};\mathbf{s})=p(\pi_{\mathbf{s}};\mathbf{s})\), where \(\pi_{\mathbf{s}}\) is the permutation that results from sorting the unperturbed scores \(\mathbf{s}\).
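The bounds are easy to sanity-check numerically for small \(n\) by brute-force enumeration. Since the common factor \(p(\mathbf{n};\mathbf{\omega})\) appears on both sides, the sketch below (ours) only compares the ordering part:

```python
import numpy as np
from math import factorial
from itertools import permutations

def pl_prob(order, s):
    """Plackett-Luce probability of one full ordering, Equation (3)."""
    Z, p, used = s.sum(), 1.0, 0.0
    for i in order:
        p *= s[i] / (Z - used)
        used += s[i]
    return p

def ordering_mass(subsets, s):
    """Sum of p(pi; s) over all pi in Pi_Y: orderings listing S_1 first,
    then S_2, etc., in any intra-subset order (cf. Proposition 4.1)."""
    blocks = [list(permutations(S)) for S in subsets]
    total = 0.0
    for combo in np.ndindex(*[len(b) for b in blocks]):
        order = [i for k, c in enumerate(combo) for i in blocks[k][c]]
        total += pl_prob(order, s)
    return total

s = np.array([4.0, 2.0, 1.0, 3.0])
subsets = [(0, 3), (1, 2)]                  # n = (2, 2), |Pi_Y| = 2! * 2! = 4
exact = ordering_mass(subsets, s)
upper = np.prod([factorial(len(S)) for S in subsets]) * pl_prob(np.argsort(-s), s)
lower = pl_prob([0, 3, 1, 2], s)            # any single pi in Pi_Y
print(lower, "<=", exact, "<=", upper)      # Lemma 4.2, ordering part
```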
### The Differentiable Random Partition Model
To incorporate our two-stage RPM into gradient-based optimization frameworks, we require that efficient computation of gradients is possible for every step of the method. The following Lemma guarantees differentiability, allowing us to train deep neural networks with our method in an end-to-end fashion:
**Lemma 4.3** (DRPM).: _A two-stage RPM is differentiable and reparameterizable if the distribution over subset sizes \(p(\mathbf{n};\mathbf{\omega})\) and the distribution over orderings \(p(\pi;\mathbf{s})\) are differentiable and reparameterizable._
We provide the proof in Appendix B. Note that Lemma 4.3 enables us to learn variational posterior approximations and priors using Stochastic Gradient Variational Bayes (SGVB, Kingma and Welling, 2014). In our experiments, we apply Lemma 4.3 using the recently proposed differentiable formulations of the MVHG (Sutter et al., 2023) and the PL distribution (Grover et al., 2019), though other choices would also be valid.
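For concreteness, one way to satisfy the requirement on \(p(\pi;\mathbf{s})\) is the relaxed sorting operator of Grover et al. (2019). The sketch below is our reading of that paper, not the authors' code; it returns a row-stochastic relaxation of the permutation matrix, so gradients flow back to the scores:

```python
import torch

def neural_sort(s, tau=1.0):
    """Continuous relaxation of the permutation matrix that sorts scores s
    in decreasing order (our reading of Grover et al., 2019). Rows approach
    one-hot vectors as tau -> 0, and the map is differentiable in s."""
    n = s.shape[-1]
    A = (s.unsqueeze(-1) - s.unsqueeze(-2)).abs()   # A[p, q] = |s_p - s_q|
    B = A.sum(dim=-1)                               # (A_s 1)_q
    ranks = torch.arange(1, n + 1, dtype=s.dtype)
    C = (n + 1 - 2 * ranks).unsqueeze(-1) * s.unsqueeze(-2)
    return torch.softmax((C - B.unsqueeze(-2)) / tau, dim=-1)

s = torch.tensor([2.0, 0.5, 1.0], requires_grad=True)
g = -torch.log(-torch.log(torch.rand(3)))           # Gumbel(0, 1) noise
P = neural_sort(torch.log(s) + g, tau=0.1)          # reparameterized sample
P[0, 0].backward()                                  # gradients reach s
print(P.detach())                                   # near-binary for small tau
```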
## 5 Experiments
We demonstrate the versatility and effectiveness of the proposed DRPM in three different experiments. First, we propose a novel generative clustering method based on the DRPM, which we compare against state-of-the-art variational clustering methods and demonstrate its conditional generation capabilities. Then, we demonstrate how the DRPM can infer shared and independent generative factors under weak supervision. Finally, we apply the DRPM to multitask learning (MTL), where the DRPM enables an adaptive neural network architecture that partitions layers based on task difficulty.
### Variational Clustering with Random Partition Models
In our first experiment, we introduce a new version of a Variational Autoencoder (VAE, Kingma and Welling, 2014), the DRPM Variational Clustering (DRPM-VC) model. The DRPM-VC enables clustering and unsupervised conditional generation in a variational fashion. To that end, we assume that each sample \(\mathbf{x}\) of a dataset \(X\) is generated by a latent vector \(\mathbf{z}\in\mathbb{R}^{l}\), where \(l\in\mathbb{N}\) is the latent space size. Traditional VAEs would then assume that all latent vectors \(\mathbf{z}\) are generated by a single Gaussian prior distribution \(\mathcal{N}(\mathbf{0},\mathbb{I}_{l})\). Instead, we assume every \(\mathbf{z}\) to be sampled from one of \(K\) different latent Gaussian distributions \(\mathcal{N}(\mathbf{\mu}_{k},\operatorname{diag}(\mathbf{\sigma}_{k})),k=1,\ldots,K\), with \(\mathbf{\mu}_{k},\mathbf{\sigma}_{k}\in\mathbb{R}^{l}\). Further, note that similar to an urn model (Section 3), if we draw a batch from a given finite dataset with samples from different clusters, the cluster assignments within that batch are not entirely independent. Since there is only a finite number of samples per cluster, drawing a sample from a specific cluster decreases the chance of drawing a sample from that cluster again, and the distribution of the number of samples drawn per cluster will follow an MVHG distribution. Previous work on variational clustering proposes to model the cluster assignment \(\mathbf{y}\in\{0,1\}^{K}\) of each sample \(\mathbf{x}\) through independent
categorical distributions (Jiang et al., 2016), which might thus be over-restrictive and not correctly reflect reality. Instead, we propose explicitly modeling the dependency between the \(\mathbf{y}\) of different samples by assuming they are drawn from an RPM. Hence, the generative process leading to \(X\) can be summarized as follows: First, the cluster assignments are represented as a partition matrix \(Y\) and sampled from our DRPM, i.e., \(Y\sim p(Y;\mathbf{\omega},\mathbf{s})\). Given an assignment \(\mathbf{y}\) from \(Y\), we can sample the respective latent variable \(\mathbf{z}\), where \(\mathbf{z}\sim\mathcal{N}(\mathbf{\mu_{y}},\mathrm{diag}(\mathbf{\sigma_{y}}))\), \(\mathbf{z}\in\mathbb{R}^{l}\). Note that we use the notational shorthand \(\mathbf{\mu_{y}}\coloneqq\mathbf{\mu}_{\mathrm{arg\,max}(\mathbf{y})}\). Like in vanilla VAEs, we infer \(\mathbf{x}\) by independently passing the corresponding \(\mathbf{z}\) through a decoder model. Assuming this generative process, we derive the following evidence lower bound (ELBO) for \(p(X)\):
\[\mathcal{L}_{ELBO}=\sum_{\mathbf{x}\in X}\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\left[\log p (\mathbf{x}|\mathbf{z})\right]-\sum_{\mathbf{x}\in X}\mathbb{E}_{q(Y|X)}\left[KL[q(\mathbf{z}| \mathbf{x})||p(\mathbf{z}|Y)]\right]-KL[q(Y|X)||p(Y)]\]
Note that computing \(KL[q(Y|X)||p(Y)]\) directly is computationally intractable, and we need to upper bound it according to Lemma 4.2. For an illustration of the generative assumptions and more details on the ELBO, we refer to Appendix C.1.
To assess the clustering performance, we train our model on two different datasets, namely MNIST (LeCun et al., 1998) and Fashion-MNIST (FMNIST, Xiao et al., 2017), and compare it to three baselines. Two of the baselines are based on a Gaussian Mixture Model: one is trained directly in the original data space (GMM), whereas the other takes the embeddings from a pretrained encoder as input (Latent GMM). The third baseline is variational deep embedding (VADE, Jiang et al., 2016), which is similar to the DRPM-VC but assumes _i.i.d._ categorical cluster assignments. For all methods except GMM, we use the weights of a pretrained encoder to initialize the models and priors at the start of training. We present the results of these experiments in Table 1. As can be seen, we outperform all baselines, indicating that modeling the inherent dependencies implied by finite datasets benefits the performance of variational clustering. Beyond decent clustering performance, a further benefit of variational clustering methods is that their reconstruction-based nature intrinsically allows unsupervised conditional generation. In Figure 2, we present the result of sampling a partition and the corresponding generations from the respective clusters after training the DRPM-VC on FMNIST. The model produces coherent generations despite not having access to labels, allowing us to investigate the structures learned by the model more closely. We refer to Appendix C.1 for more illustrations of the learned clusters, details on the training procedure, and ablation studies.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{MNIST} & \multicolumn{3}{c}{FMNIST} \\ \cline{2-7} & NMI & ARI & ACC & NMI & ARI & ACC \\ \hline GMM & \(0.32{\pm}0.01\) & \(0.22{\pm}0.02\) & \(0.41{\pm}0.01\) & \(0.49{\pm}0.01\) & \(0.33{\pm}0.00\) & \(0.44{\pm}0.01\) \\ Latent GMM & \(0.86{\pm}0.02\) & \(0.83{\pm}0.06\) & \(0.88{\pm}0.07\) & \(0.60{\pm}0.00\) & \(0.47{\pm}0.01\) & \(0.62{\pm}0.01\) \\ VADE & \(0.84{\pm}0.01\) & \(0.76{\pm}0.05\) & \(0.82{\pm}0.04\) & \(0.56{\pm}0.02\) & \(0.40{\pm}0.04\) & \(0.56{\pm}0.03\) \\ DRPM-VC & \(\mathbf{0.89{\pm}}0.01\) & \(\mathbf{0.88{\pm}}0.03\) & \(\mathbf{0.94{\pm}}0.02\) & \(\mathbf{0.64{\pm}}0.00\) & \(\mathbf{0.51{\pm}}0.01\) & \(\mathbf{0.65{\pm}}0.00\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: We compare the clustering performance of the DRPM-VC on test sets of MNIST and FMNIST between Gaussian Mixture Models (GMM), GMM in latent space (Latent GMM), and Variational Deep Embedding (VADE). We measure performance in terms of the Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), and cluster accuracy (ACC) over five seeds and put the best model in bold.
Figure 2: A sample drawn from a DRPM-VC model trained on FMNIST. On top is the sampled partition with the cluster assignments, and on the bottom are generated images corresponding to the sampled assignment matrix. The DRPM-VC learns consistent clusters for different pieces of clothing and can generate new samples of each cluster with great variability.
### Variational Partitioning of Generative Factors
Data modalities not collected as _i.i.d._ samples, such as consecutive frames in a video, provide a weak-supervision signal for generative models and representation learning (Sutter et al., 2023). Here, on top of learning meaningful representations of the data samples, we are also interested in discovering the relationship between coupled samples. If we assume that the data is generated from underlying generative factors, weak supervision comes from the fact that we know that certain factors are shared between coupled pairs while others are independent. The supervision is weak because we neither know the underlying generative factors nor the number of shared and independent factors. In such a setting, we can use the DRPM to learn a partition of the generative factors and assign them to be either shared or independent.
In this experiment, we use paired frames \(\mathbf{X}=\left[\mathbf{x}_{1},\mathbf{x}_{2}\right]\) from the _mpi3d_ dataset (Gondal et al., 2019). Every pair of frames shares a subset of its seven generative factors. We introduce the DRPM-VAE, which models the division of the latent space into shared and independent latent factors as RPM. We add a posterior approximation \(q(Y\mid\mathbf{X})\) and additionally a prior distribution of the form \(p(Y)\). The model maximizes the following ELBO on the marginal log-likelihood of images through a VAE (Kingma and Welling, 2014):
\[\mathcal{L}_{ELBO} = \sum_{j=1}^{2}\mathbb{E}_{q(\mathbf{z}_{s},\mathbf{z}_{j},Y|\mathbf{X})} \left[\log p(\mathbf{x}_{j}\mid\mathbf{z}_{s},\mathbf{z}_{j})\right]\] \[-\mathbb{E}_{q(Y|\mathbf{X})}\left[KL\left[q(\mathbf{z}_{s},\mathbf{z}_{1}, \mathbf{z}_{2}\mid Y,\mathbf{X})||p(\mathbf{z}_{s},\mathbf{z}_{1},\mathbf{z}_{2})\right]\right]- KL\left[q(Y\mid\mathbf{X})||\;p(Y)\right]\]
Similar to the ELBO for variational clustering in Section 5.1, computing \(KL\left[q(Y\mid\mathbf{X})||\;p(Y)\right]\) directly is intractable, and we need to bound it according to Lemma 4.2.
We compare the proposed DRPM-VAE to three methods, which only differ in how they infer shared and independent latent dimensions. While the Label-VAE (Bouchacourt et al., 2018; Hosoya, 2018) assumes that the number of independent factors is known, the Ada-VAE (Locatello et al., 2020) relies on a heuristic-based approach to infer shared and independent latent factors. As in Locatello et al. (2020) and Sutter et al. (2023), we assume a single known factor for Label-VAE in all experiments. HG-VAE (Sutter et al., 2023) also relies on the MVHG to model the number of shared and independent factors. Unlike the proposed DRPM-VAE approach, HG-VAE must rely on a heuristic to assign latent dimensions to shared factors, as the MVHG only allows modeling the number of shared and independent factors but not their position in the latent vector. We use the code from Locatello et al. (2020) and follow the evaluation in Sutter et al. (2023). We refer to Appendix C.2 for details on the ELBO, the setup of the experiment, the implementation, and an illustration of the generative assumptions.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \(n_{s}=0\) & \multicolumn{2}{c}{\(n_{s}=1\)} & \multicolumn{2}{c}{\(n_{s}=3\)} & \multicolumn{2}{c}{\(n_{s}=5\)} \\ \cline{2-7} & I & S & I & S & I & S & I \\ \hline Label & \(0.14_{\pm 0.01}\) & \(0.19_{\pm 0.03}\) & \(0.16_{\pm 0.01}\) & \(0.10_{\pm 0.00}\) & \(0.23_{\pm 0.01}\) & \(0.34_{\pm 0.00}\) & \(0.00_{\pm 0.00}\) \\ \hline Ada & \(0.12_{\pm 0.01}\) & \(0.19_{\pm 0.01}\) & \(0.15_{\pm 0.01}\) & \(0.10_{\pm 0.03}\) & \(0.22_{\pm 0.02}\) & \(0.33_{\pm 0.03}\) & \(0.00_{\pm 0.00}\) \\ \hline HG & \(0.18_{\pm 0.01}\) & \(0.22_{\pm 0.05}\) & \(0.19_{\pm 0.01}\) & \(0.08_{\pm 0.02}\) & \(0.28_{\pm 0.01}\) & \(0.28_{\pm 0.01}\) & \(0.01_{\pm 0.00}\) \\ \hline DRPM & \(\mathbf{0.26}_{\pm 0.02}\) & \(\mathbf{0.39}_{\pm 0.07}\) & \(\mathbf{0.2}_{\pm 0.01}\) & \(\mathbf{0.15}_{\pm 0.01}\) & \(\mathbf{0.29}_{\pm 0.02}\) & \(\mathbf{0.42}_{\pm 0.03}\) & \(0.01_{\pm 0.00}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Partitioning of Generative Factors. We evaluate the learned latent representations of the four methods (Label-VAE, Ada-VAE, HG-VAE, DRPM-VAE) with respect to the shared (S) and independent (I) generative factors. We do this by fitting linear classifiers on the shared and independent dimensions of the representation, predicting the respective generative factors. We report the results in adjusted balanced accuracy (Sutter et al., 2023) across five seeds.
Figure 3: The mean squared errors between the estimated number of shared factors \(\hat{n}_{s}\) and the true number of shared factors \(n_{s}\) across five seeds for the Label-VAE, Ada-VAE, HG-VAE, and DRPM-VAE.
We evaluate all methods according to their ability to estimate the number of shared generative factors (Figure 3) and how well they partition the latent representations into shared and independent factors (Table 2). Because we have access to the data-generating process, we can control the number of shared \(n_{s}\) and independent \(n_{i}\) factors. We compare the methods on four different datasets with \(n_{s}\in\{0,1,3,5\}\). In Figure 3, we demonstrate that the DRPM-VAE accurately estimates the true number of shared generative factors. It matches the performance of HG-VAE and outperforms the other two baselines, which consistently overestimate the true number of shared factors. In Table 2, we see a considerable performance improvement compared to previous work when assessing the learned latent representations. We attribute this to our ability to not only estimate the subset sizes of latent and shared factors like HG-VAE but also learn to assign specific latent dimensions to the corresponding shared or independent representations. Thus, the DRPM-VAE dynamically learns more meaningful representations and can better separate and infer the shared and independent subspaces for all dataset versions.
The DRPM-VAE provides empirical evidence of how RPMs can leverage weak supervision signals by learning to maximize the data likelihood while also inferring representations that capture the relationship between coupled data samples. Additionally, we can explicitly model the data-generating process in a theoretically grounded fashion instead of relying on heuristics.
### Multitask Learning
Many ML applications aim to solve specific tasks, where we optimize for a single objective while ignoring potentially helpful information from related tasks. Multitask learning (MTL) aims to improve the generalization across all tasks, including the original one, by sharing representations between related tasks (Caruana, 1993; Caruana and de Sa, 1996). Recent works (Kurin et al., 2022; Xin et al., 2022) show that it is difficult to outperform a convex combination of task losses if the task losses are appropriately scaled. I.e., in the case of two tasks of equal difficulty, a classifier with equal weighting of the two classification losses serves as an upper bound in terms of performance. However, finding suitable task weights is a tedious and inefficient approach to MTL; a more automated way of weighting multiple tasks is thus highly desirable.
In this experiment, we demonstrate how the DRPM can learn task difficulty by partitioning a network layer. Intuitively, a task that requires many neurons is more complex than a task that can be solved using a single neuron. Based on this observation, we propose the DRPM-MTL. The DRPM-MTL learns to partition the neurons of the last shared layer such that only a subset of the neurons are used for every task. In contrast to the other experiments (Sections 5.1 and 5.2), we use the DRPM without resampling and infer the partition \(Y\) as a deterministic function. This can be done by applying the two-step procedure of Proposition 4.1 but skipping the resampling step of the MVHG and PL distributions. We compare the DRPM-MTL to the unitary loss scaling method (ULS, Kurin et al., 2022), which has a fixed architecture and scales task losses equally. Both DRPM-MTL and ULS use a network with shared architecture up to some layer, after which the network branches into two task-specific layers that perform the classifications. Note the difference between the methods. While the task-specific branches of the ULS method access all neurons of the last shared layer, the task-specific branches of the DRPM-MTL access only the subset of neurons reserved for the respective task.
We perform experiments on MultiMNIST (Sabour et al., 2017), which overlaps two MNIST digits in one image, and we want to classify both numbers from a single sample. Hence, the two tasks,
Figure 4: Results for noisyMultiMNIST experiment. In the upper plot, we compare the task accuracy of the two methods ULS and the DRPM-MTL. We see that the DRPM-MTL can reach higher accuracy for most of the different noise ratios \(\alpha\) while it assigns the number of dimensions per task according to their difficulty.
classification of the left and the right digit (see Appendix C.3 for an example), are approximately equal in difficulty by default. To increase the difficulty of one of the two tasks, we introduce the noisyMultiMNIST dataset. There, we control task difficulty by adding salt and pepper noise to one of the two digits, subsequently increasing the difficulty of that task with increasing noise ratios. Varying the noise, we evaluate how our DRPM-MTL adapts to imbalanced difficulties, where one usually has to tediously search for optimal loss weights to reach good performance. We base our pipeline on (Sener and Koltun, 2018). For more details and additional CelebA MTL experiments we refer to Appendix C.3.
We evaluate the DRPM-MTL with respect to its classification accuracy on the two tasks and compare the inferred subset sizes per task for different noise ratios \(\alpha\in\{0.0,\ldots,0.9\}\) of the noisyMultiMNIST dataset (see Figure 4). The DRPM-MTL achieves the same or better accuracy on both tasks for most noise levels (upper part of Figure 4). Interestingly, the more we increase \(\alpha\), the more the DRPM-MTL tries to compensate for the increased difficulty of the right task by assigning more dimensions to it (lower part of Figure 4, noise ratios \(\alpha=0.6\) to \(0.8\)). Note that for the maximum noise ratio of \(\alpha=0.9\), the DRPM-MTL effectively gives up on the right task and instead focuses on achieving good performance on the left task, which impacts the average accuracy.
## Limitations & Future Work
The proposed two-stage approach to RPMs requires distributions over subset sizes and permutation matrices. The memory usage of the permutation matrix used in the two-stage RPM increases quadratically in the number of elements \(n\). Although we did not experience memory issues in our experiments, this may lead to problems when partitioning vast sets. Furthermore, learning subsets by first inferring an ordering of all elements can be a complex optimization problem. Approaches based on minimizing the earth mover's distance (Monge, 1781) to learn subset assignments could be an alternative to the ordering-based approach in our DRPM and pose an interesting direction for future work. Finally, note that we compute the probability mass function (PMF) \(p(Y;\mathbf{\omega},\mathbf{s})\) by approximating it with the bounds in Lemma 4.2. While the upper bound is tight when all scores have similar magnitude, the bound loosens if scores differ a lot, leading Equation (10) to overestimate the value of the PMF. In practice, we thus reweight the respective terms in the loss function, but in the future, we will investigate better estimates for the PMF.
Ultimately, we are interested in exploring how to apply the DRPM to multimodal learning under weak supervision, for instance, in medical applications. Section 5.2 demonstrated the potential of learning from coupled samples, but further research is needed to ensure fairness concerning underlying, hidden attributes when working with sensitive data.
## Conclusion
In this work, we proposed the differentiable random partition model, a novel approach to random partition models. Our two-stage method enables learning partitions end-to-end by separately controlling subset sizes and how elements are assigned to subsets. This new approach to partition learning enables the integration of random partition models into probabilistic and deterministic gradient-based optimization frameworks. We show the versatility of the proposed differentiable random partition model by applying it to three vastly different experiments. We demonstrate how learning partitions enables us to explore the modes of the data distribution, infer shared and independent generative factors from coupled samples, and learn task-specific sub-networks in applications where we want to solve multiple tasks on a single data point.
|
2310.03744 | Improved Baselines with Visual Instruction Tuning | Large multimodal models (LMM) have recently shown encouraging progress with
visual instruction tuning. In this note, we show that the fully-connected
vision-language cross-modal connector in LLaVA is surprisingly powerful and
data-efficient. With simple modifications to LLaVA, namely, using
CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA
data with simple response formatting prompts, we establish stronger baselines
that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint
uses merely 1.2M publicly available data, and finishes full training in ~1 day
on a single 8-A100 node. We hope this can make state-of-the-art LMM research
more accessible. Code and model will be publicly available. | Haotian Liu, Chunyuan Li, Yuheng Li, Yong Jae Lee | 2023-10-05T17:59:56Z | http://arxiv.org/abs/2310.03744v2 | # Improved Baselines with Visual Instruction Tuning
###### Abstract
Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely, using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available data, and finishes full training in \(\sim\)1 day on a single 8-A100 node. We hope this can make state-of-the-art LMM research more accessible. Code and model will be publicly available.
## 1 Introduction
Large multimodal models (LMMs) have become increasingly popular in the research community, as they are the key building blocks towards general-purpose assistants [1, 22, 35]. Recent studies on LMMs are converging on a central concept known as visual instruction tuning [28]. The results are promising, _e.g_. LLaVA [28] and MiniGPT-4 [49] demonstrate impressive results on natural instruction-following and visual reasoning capabilities. To better understand the capability of LMMs, multiple benchmarks [11, 20, 26, 29, 43] have been proposed. Recent works further demonstrate improved performance by scaling up the pretraining data [2, 9], instruction-following data [9, 21, 45, 46], visual encoders [2], or language models [31], respectively. The LLaVA architecture is also leveraged in different downstream tasks and domains, including region-level [6, 44] and pixel-level [19] understanding, biomedical assistants [23], image generation [3], adversarial studies [4, 47].
This note establishes stronger and more feasible baselines built upon the LLaVA framework. We report that two simple improvements, namely, an MLP cross-modal connector and incorporating academic-task-related data such as VQA, are orthogonal to the framework of LLaVA, and when used with LLaVA, lead to better multimodal understanding capabilities. In contrast to InstructBLIP [9] or Qwen-VL [2], which train specially designed visual resamplers on hundreds of millions or even billions of image-text pairs, LLaVA uses the simplest architecture design for LMMs and requires only training a simple fully-connected projection layer on merely 600K image-text pairs. Our final model can finish training in \(\sim\)1 day on a single 8-A100 machine and achieves state-of-the-art results on a wide range of benchmarks. Moreover, unlike Qwen-VL [2] that includes in-house data in training, LLaVA utilizes only publicly available data. We hope these improved and easily-reproducible baselines will provide a reference for future research in open-source LMM.
Figure 1: **LLaVA-1.5** achieves SoTA on a broad range of 11 tasks (Top), with high training sample efficiency (Left) and simple modifications to LLaVA (Right): an MLP connector and including academic-task-oriented data with response formatting prompts.
## 2 Background
**Instruction-following LMM.** Common architectures include a pre-trained visual backbone to encode visual features, a pre-trained large language model (LLM) to comprehend the user instructions and produce responses, and a vision-language cross-modal connector to align the vision encoder outputs to the language models. As shown in Fig. 1, LLaVA [28] is perhaps the simplest architecture for LMMs. Optionally, visual resamplers (_e.g_. Qformer [24]) are used to reduce the number of visual patches [2, 9, 49]. Training an instruction-following LMM usually follows a two-stage protocol. First, the vision-language alignment pretraining stage leverages image-text pairs to align the visual features with the language model's word embedding space. Earlier works utilize relatively few image-text pairs (_e.g_. \(\sim\)600K [28] or \(\sim\)6M [49]), while some recent works pretrain the vision-language connector for a specific language model on a large amount of image-text pairs (_e.g_. 129M [9] and 1.4B [2]), to maximize the LMM's performance. Second, the visual instruction tuning stage tunes the model on visual instructions, to enable the model to follow users' diverse requests on instructions that involve the visual contents.
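Schematically, the two-stage protocol amounts to toggling which parameter groups are trainable. The sketch below is a toy illustration; the module stand-ins, sizes, and learning rates are our own, not LLaVA's actual code:

```python
import torch

# Stand-ins for the three components; in LLaVA the vision tower is a CLIP ViT
# and the LLM is Vicuna. Sizes and learning rates here are only illustrative.
vision_encoder = torch.nn.Linear(768, 1024)   # pretrained visual backbone
connector      = torch.nn.Linear(1024, 4096)  # vision-language projector
llm            = torch.nn.Linear(4096, 4096)  # pretrained language model

# Stage 1: vision-language alignment pretraining. Only the connector learns;
# image-text pairs align visual features with the LLM's word embedding space.
for p in vision_encoder.parameters():
    p.requires_grad_(False)
for p in llm.parameters():
    p.requires_grad_(False)
stage1_opt = torch.optim.AdamW(connector.parameters(), lr=1e-3)

# Stage 2: visual instruction tuning. The LLM is unfrozen and trained jointly
# with the connector on instruction data; the vision tower stays frozen.
for p in llm.parameters():
    p.requires_grad_(True)
stage2_opt = torch.optim.AdamW(
    [*connector.parameters(), *llm.parameters()], lr=2e-5)
```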
**Multimodal instruction-following data.** In NLP, studies show that the quality of instruction-following data largely affects the capability of the resulting instruction-following models [48]. For visual instruction tuning, LLaVA [28] is the pioneer to leverage text-only GPT-4 to expand the existing COCO [27] bounding box and caption dataset to a multimodal instruction-following dataset that contains three types of instruction-following data: conversational-style QA, detailed description, and complex reasoning. The LLaVA pipeline has been employed to expand to textual understanding [45], million-scale data [46], and region-level conversations [6]. InstructBLIP [9] incorporates academic-task-oriented VQA datasets to further enhance the model's visual capabilities. Conversely, [5] identifies that such naive data merging can result in models that tend to overfit to VQA datasets and are thus unable to participate in natural conversations. The authors further propose to leverage the LLaVA pipeline to convert VQA datasets to a conversational style. While this proves effective for training, it introduces added complexities in data scaling.
## 3 Improved Baselines of LLaVA
**Overview.** As the initial work of visual instruction tuning, LLaVA has showcased commendable proficiency in visual reasoning capabilities, surpassing even more recent models on diverse benchmarks for real-life visual instruction-following tasks, while only falling short on academic benchmarks that typically require short-form answers (_e.g_. single-word). The latter was attributed to the fact that LLaVA has not been pretrained on large-scale data, unlike other approaches. In this note, we first study the scaling effect of data, models and input image resolution on a selection of three datasets in Table 1, and then compare the final model against existing LMMs on a diverse set of 12 benchmarks in Table 2. We show that LLaVA's architecture is powerful and data-efficient for visual instruction tuning, and achieves the best performance using significantly less compute and training data than all other methods.
**Response formatting prompts.** We find that the inability [5] to balance between short- and long-form VQA for approaches like InstructBLIP [9] is mainly due to the following reasons. First, _ambiguous prompts on the response format_. For example, _Q: [Question] A: [Answer]_. Such prompts do not clearly indicate the desirable output format, and can overfit an LLM behaviorally to short-form answers even for natural visual conversations. Second, _not finetuning the LLM_. The first issue is worsened by InstructBLIP only finetuning the Qformer for instruction-tuning. It requires the Qformer's visual output tokens to control the length of the LLM's output to be either long-form or short-form, as in prefix tuning [25], but Qformer may lack the capability of properly doing so, due to its limited capacity compared with LLMs like LLaMA. See Table 6 for a qualitative example.
To address this, we propose to use a single response formatting prompt that clearly indicates the output format, to be appended at the end of VQA questions when prompting short answers: _Answer the question using a single word or phrase_. We empirically show that when the LLM is _finetuned_ with such prompts, LLaVA is able to properly adjust the output format according to the user's instructions, and does not require additional processing of the VQA data us
\begin{table}
\begin{tabular}{l l c|c c c} \hline \hline Method & LLM & Res. & GQA & MME & MM-Vet \\ \hline InstructBLIP & Vicuna-13B & 224 & 49.5 & 1212.8 & 25.6 \\ \hline \multicolumn{6}{l}{_Only using a subset of InstructBLIP training data_} \\ 0 **LLaVA** & 7B & 224 & – & 502.8 & 23.8 \\ 1 +VQA-v2 & 7B & 224 & 47.0 & 1197.0 & 27.7 \\ 2 +Format prompt & 7B & 224 & 46.8 & 1323.8 & 26.3 \\ 3 +MLP VL connector & 7B & 224 & 47.3 & 1355.2 & 27.8 \\ 4 +OKVQA/OCR & 7B & 224 & 50.0 & 1377.6 & 29.6 \\ \hline \multicolumn{6}{l}{_Additional scaling_} \\ 5 +Region-level VQA & 7B & 224 & 50.3 & 1426.5 & 30.8 \\ 6 +Scale up resolution & 7B & 336 & 51.4 & 1450 & 30.3 \\ 7 +GQA & 7B & 336 & 62.0\({}^{*}\) & 1469.2 & 30.7 \\ 8 +ShareGPT & 7B & 336 & 62.0\({}^{*}\) & 1510.7 & 30.5 \\ 9 +Scale up LLM & 13B & 336 & **63.3\({}^{*}\)** & **1531.3** & **36.3** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Scaling results on data, model, and resolution.** We choose to conduct experiments on GQA [14], MME [11], and MM-Vet [43] to examine the representative capabilities of VQA with short answers, VQA with output formatting, and natural visual conversations, respectively. \({}^{*}\)Training images of GQA were observed during training.
ing ChatGPT [5], which further enables scaling to various data sources. As shown in Table 1, by merely including VQAv2 [12] in training, LLaVA's performance on MME significantly improves (1323.8 _vs_ 502.8) and outperforms InstructBLIP by 111 points.
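Concretely, such short-answer samples are assembled by simply appending the formatting prompt to the raw question. A minimal sketch (the dictionary fields are our own illustration, not LLaVA's data format):

```python
SHORT_ANSWER_PROMPT = "\nAnswer the question using a single word or phrase."

def format_vqa_sample(question: str, answer: str) -> dict:
    """Turn a (question, answer) pair from a short-answer VQA dataset into an
    instruction-tuning sample with an explicit response-format hint."""
    return {"prompt": question + SHORT_ANSWER_PROMPT, "response": answer}

print(format_vqa_sample("What color is the bus?", "yellow"))
```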
**MLP vision-language connector.** Inspired by the improved performance in self-supervised learning by changing from a linear projection to an MLP [7, 8], we find that improving the vision-language connector's representation power with a two-layer MLP can improve LLaVA's multimodal capabilities, compared with the original linear projection design.
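A sketch of such a connector is given below (our illustration; the dimensions correspond to a CLIP ViT-L encoder and a 7B LLM, and the GELU nonlinearity is our assumption):

```python
import torch.nn as nn

class MLPProjector(nn.Module):
    """Two-layer MLP vision-language connector, replacing the original single
    linear projection (a sketch; hidden sizes are illustrative)."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, visual_tokens):   # (batch, num_patches, vision_dim)
        return self.net(visual_tokens)  # (batch, num_patches, llm_dim)
```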
**Academic task oriented data.** We further include additional academic-task-oriented VQA datasets for VQA, OCR, and region-level perception, to enhance the model's capabilities in various ways, as shown in Table 1. We first include four additional datasets that are used in InstructBLIP: open-knowledge VQA (OKVQA [33], A-OKVQA [37]) and OCR (OCRVQA [34], TextCaps [39]). A-OKVQA is converted to multiple choice questions and a specific response formatting prompt is used: _Answer with the option's letter from the given choices directly_. With only a subset of the datasets InstructBLIP uses, LLaVA already surpasses it on all three tasks in Table 1, suggesting LLaVA's effective design. Furthermore, we find further adding region-level VQA datasets (Visual Genome [18], RefCOCO [17, 32]) improves the model's capability of localizing fine-grained visual details.
**Additional scaling.** We further scale up the input image resolution to allow the LLM to clearly "see" the details of images, and add the GQA dataset as an additional visual knowledge source. We also incorporate ShareGPT [38] data and scale up the LLM to 13B as in [2, 6, 31]. Results on MM-Vet show the most significant improvement when scaling the LLM to 13B, suggesting the importance of the base LLM's capability for visual conversations. We denote the final model with all the modifications as LLaVA-1.5 (the last two rows in Table 1); it achieves an impressive performance that significantly outperforms the original LLaVA [28].
## 4 Discussion
**Comparison with SoTA.** We benchmark LLaVA-1.5 on a wide range of academic VQA benchmarks and recent benchmarks specifically proposed for instruction-following LMMs, totalling 12 benchmarks. We show that it achieves the best performance across 11 out of 12 benchmarks, despite using orders of magnitude less pretraining and instruction tuning data than other methods [2, 9]. It is encouraging that _LLaVA-1.5 achieves the best performance with the simplest architecture, academic compute and public datasets, and yields a fully-reproducible and affordable baseline for future research_. The results also suggest that visual instruction tuning plays a more important role in improving an LMM's capabilities than pretraining, and they call into question the common belief that LMMs require a significant amount of vision-language alignment pretraining [2, 9, 24], given that the vision encoders (CLIP [36], OpenCLIP [16], EVA-CLIP [10], _etc._) are already pretrained on web-scale image-text paired datasets. LLaVA-1.5 (even the 7B model) outperforms the 80B IDEFICS [15], a Flamingo-like LMM with billions of trainable parameters for cross-modal connection. This also makes us rethink the benefits of visual resamplers and the necessity of additional large-scale pretraining, in terms of multimodal instruction-following capabilities.
**Zero-shot format instruction generalization.** Although LLaVA-1.5 is only trained with a limited number of format instructions, it generalizes to others. First, VizWiz [13] requires the model to output "Unanswerable" when the provided content is insufficient to answer the question, and our response format prompt (Table 8) effectively instructs the model to do so (11.1% \(\rightarrow\) 67.8% on unanswerable questions). We additionally present qualitative examples on instructing
\begin{table}
\begin{tabular}{l l c c c|c c c c c c c c c c c c} \hline \hline Method & LLM & Res. & PT & IT & VQA\({}^{\text{v2}}\) & GQA & VizWiz & SQA\({}^{\text{I}}\) & VQA\({}^{\text{T}}\) & POPE & MME & MMB & MMB\({}^{\text{CN}}\) & SEED & LLaVA\({}^{\text{W}}\) & MM-Vet \\ \hline BLIP-2 & Vicuna-13B & 224 & 129M & – & 41.0 & 41 & 19.6 & 61 & 42.5 & 85.3 & 1293.8 & – & – & 46.4 & 38.1 & 22.4 \\ InstructBLIP & Vicuna-7B & 224 & 129M & 1.2M & – & 49.2 & 34.5 & 60.5 & 50.1 & – & – & 36 & 23.7 & 53.4 & 60.9 & 26.2 \\ InstructBLIP & Vicuna-13B & 224 & 129M & 1.2M & – & 49.5 & 33.4 & 63.1 & 50.7 & 78.9 & 1212.8 & – & – & – & 58.2 & 25.6 \\ Shikra & Vicuna-13B & 224 & 600K & 5.5M & 77.4\({}^{\star}\) & – & – & – & – & – & – & 58.8 & – & – & – & – \\ IDEFICS-9B & LLaMA-7B & 224 & 353M & 1M & 50.9 & 38.4 & 35.5 & – & 25.9 & – & – & 48.2 & 25.2 & – & – & – \\ IDEFICS-80B & LLaMA-65B & 224 & 353M & 1M & 60.0 & 45.2 & 36.0 & – & 30.9 & – & – & 54.5 & 38.1 & – & – & – \\ Qwen-VL & Qwen-7B & 448 & 1.4B\({}^{\dagger}\) & 50M\({}^{\dagger}\) & 78.8\({}^{\star}\) & 59.3\({}^{\star}\) & 35.2 & 67.1 & **63.8** & – & – & 38.2 & 7.4 & 56.3 & – & – \\ Qwen-VL-Chat & Qwen-7B & 448 & 1.4B\({}^{\dagger}\) & 50M\({}^{\dagger}\) & 78.2\({}^{\star}\) & 57.5\({}^{\star}\) & 38.9 & 68.2 & 61.5 & – & 1487.5 & 60.6 & 56.7 & 58.2 & – & – \\ \hline **LLaVA-1.5** & Vicuna-7B & 336 & **558K** & **665K** & 78.5\({}^{\star}\) & 62.0\({}^{\star}\) & 50.0 & 66.8 & 58.2 & **85.9** & 1510.7 & 64.3 & 58.3 & 58.6 & 63.4 & 30.5 \\ **LLaVA-1.5** & Vicuna-13B & 336 & **558K** & **665K** & **80.0\({}^{\star}\)** & **63.3\({}^{\star}\)** & **53.6** & **71.6** & 61.3 & **85.9** & **1531.3** & **67.7** & **63.6** & **61.6** & **70.7** & **35.4** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Comparison with SoTA methods on 12 benchmarks.** LLaVA-1.5 achieves the best performance on 11/12 benchmarks, and ranks second on the other. Res, PT, IT indicate input image resolution and the number of samples in the pretraining and instruction tuning stage, respectively. Benchmark names are abbreviated due to space limits. VQA-v2 [12]; GQA [14]; VizWiz [13]; SQA\({}^{\text{I}}\): ScienceQA-IMG [30]; VQA\({}^{\text{T}}\): TextVQA [40]; POPE [26]; MME [11]; MMB: MMBench [29]; MMB\({}^{\text{CN}}\): MMBench-Chinese [29]; SEED: SEED-Bench [20]; LLaVA\({}^{\text{W}}\): LLaVA-Bench (In-the-Wild) [28]; MM-Vet [43]. \({}^{\star}\)The training images of the datasets are observed during training. \({}^{\dagger}\)Includes in-house data that is not publicly accessible.
LLaVA-1.5 to verify tricky questions (Table 3) and to respond in a constrained JSON format (Table 4).
**Zero-shot multilingual capability.** Though LLaVA-1.5 is _not_ finetuned for multilingual multimodal instruction following _at all_, we find that it is capable of following multilingual instructions, partly due to the multilingual language instructions in ShareGPT [38]. We quantitatively evaluate the model's generalization capability to Chinese on MMBench-CN [29], where the questions of MMBench are converted to Chinese. Notably, LLaVA-1.5 outperforms Qwen-VL-Chat by 7.3% (63.6% vs 56.7%), despite Qwen being finetuned on Chinese multimodal instructions while LLaVA-1.5 is not.
**Computational cost.** For LLaVA-1.5, we use the same pretraining dataset of LCS-558K\({}^{1}\), and keep the training iterations and batch size roughly the same for instruction tuning as LLaVA [28]. Due to the increased image input resolution to 336px, the training of LLaVA-1.5 is \(\sim\)2\(\times\) as long as LLaVA: \(\sim\)6 hours of pretraining and \(\sim\)20 hours of visual instruction tuning, using 8\(\times\) A100s.
**Limitations.** Despite the promising results demonstrated by LLaVA-1.5, several limitations must be acknowledged. First, LLaVA utilizes full image patches, potentially prolonging each training iteration. While visual resamplers [2, 9, 24] reduce the number of visual patches in LLMs, they currently cannot achieve convergence as efficiently as LLaVA with a comparable amount of training data, probably due to more trainable parameters in the resamplers. The development of a sample-efficient visual resampler could pave the way for future scaling-up of instruction-following multimodal models. Second, LLaVA-1.5 is not yet capable of processing multiple images due to the lack of such instruction-following data, and the limit of the context length. Third, although LLaVA-1.5 exhibits proficiency in following complex instructions, its problem-solving capabilities can still be limited in certain domains, which could be improved with a more capable language model and with high-quality, targeted visual instruction tuning data. Finally, despite its significantly reduced propensity for hallucination, LLaVA is not exempt from producing hallucinations and occasionally disseminating misinformation, and should be used with caution in critical applications (_e.g_. medical).
Footnote 1: LCS-558K: a subset of \(\sim\)558K image-text pairs from LAION-CC-SBU with BLIP captions, as used in LLaVA-Lightning series.
**Acknowledgements.** This work was supported in part by NSF CAREER IIS2150012, and Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration) and (No. RS-2022-00187238, Development of Large Korean Language Model Technology for Efficient Pre-training).
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{**Visual input example, Tricky Question:**} \\ \hline User & If there are factual errors in the questions, point it out; if not, proceed answering the question. \\ & What’s happening in the desert? \\ \hline GPT-4V & Sorry, I cannot answer that question as there is no context about which desert you are referring to. \\ \hline LLaVA & In the image, there is a city skyline with buildings and a beach with people enjoying the sunset. The scene is set in a desert, which is unusual for a beach setting. \\ \hline LLaVA-1.5 & There are no deserts in the image. The image features a beach with palm trees, a city skyline, and a large body of water. \\ \hline \hline \end{tabular}
\end{table}
Table 3: LLaVA-1.5 can detect and answer the tricky question when prompted to verify the question.
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{2}{c}{**Visual input example, Constrained JSON Output:**} \\ \hline User & Please read the text in this image and return the information in the following JSON format (note xxx is placeholder, if the information is not available in the image, put “N/A” instead). \\ & \{“class”: xxx, “DLN”: xxx, “DOB”: xxx, “Name”: xxx, “Address”: xxx, “EXP”: xxx, “ISS”: xxx, “SEX”: xxx, “HGT”: xxx, “WGT”: xxx, “EYES”: xxx, “HAIR”: xxx, “DONOR”: xxx\} \\ \hline GPT-4V & \{“class”: “C”, “DLN”: “1234568”, “DOB”: “08/31/1977”, “Name”: “Ima Cardholder”, “Address”: “2570 24th Street, Anytown, CA 95818”, “EXP”: “08/31/2011”, “ISS”: “08/31/2011”, “SEX”: “F”, “HGT”: “5’5””, “WGT”: “130”, “EYES”: “Brown”, “HAIR”: “Brown”, “DONOR”: “Unknown”\} \\ \hline LLaVA-1.5 & \{“class”: “C”, “DLN”: “1132456789”, “DOB”: “08/23/1971”, “Name”: “Ima Cardholder”, “Address”: “Anytown, State, Zip”, “EXP”: “08/23/2014”, “ISS”: “California”, “SEX”: “F”, “HGT”: “5’5””, “WGT”: “123”, “EYES”: “Brown”, “HAIR”: “Brown”, “DONOR”: “N/A”\} \\ \hline \hline \end{tabular}
\end{table}
Table 4: LLaVA-1.5 can extract information from the image and answer following the required format, despite a few errors compared with GPT-4V. GPT-4V results are obtained from [42].
## Appendix
**Data.** Our final training data mixture contains a variety of datasets: VQA [12, 14, 33, 37], OCR [34, 39], region-level VQA [17, 18, 32], visual conversation [28] and language conversation [38] data. We adopt multiple strategies to reduce training cost and enhance efficiency, detailed as follows:
1. For all VQA datasets, QA pairs from the same training image are merged into a single conversation.
2. For ShareGPT [38], we filter out invalid conversations as in [41]. Unlike Vicuna [41], long conversations that surpass 2048 tokens are truncated rather than split into multiple conversations. This results in \(\sim\)40K conversations.
3. Each QA pair in A-OKVQA [37] is augmented \(k\) times, where \(k\) is the number of choices per question, to counterbalance the lack of multiple-choice data.
4. 80K conversations are sampled from OCRVQA [34].
5. For Visual Genome, we sample 10 annotations for images with additional annotations.
6. For RefCOCO, conversations are dissected into segments, each containing fewer than 10 conversations.
7. We observe that language conversations are often longer than visual ones. For each batch, we sample conversations only from a single modality (see the sketch after the next paragraph); this speeds up training by 25% and does not affect the final outcome.
All data splits are concatenated together and sampled with the same probability. We present the response formatting prompts of the final instruction-following data mixtures in Table 7 and the response format prompts used for each evaluation benchmark in Table 8.
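The single-modality batching of strategy 7 can be sketched as follows (our illustration, not the released code; the proportional modality choice is our assumption):

```python
import random

def single_modality_batches(visual_data, language_data, batch_size):
    """Yield batches drawn from one modality at a time, so that the long
    language-only conversations never pad out a mixed batch."""
    pools = {"visual": list(visual_data), "language": list(language_data)}
    for pool in pools.values():
        random.shuffle(pool)
    while any(pools.values()):
        # Pick a modality with probability proportional to its remaining size.
        name = random.choices(list(pools), weights=[len(p) for p in pools.values()])[0]
        batch, pools[name] = pools[name][:batch_size], pools[name][batch_size:]
        yield name, batch

for name, batch in single_modality_batches(range(6), ["a", "b", "c"], batch_size=2):
    print(name, batch)
```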
**Hyperparameters.** LLaVA-1.5 uses the same set of hyperparameters as the original LLaVA, except that we halve the learning rate in pretraining due to the usage of the MLP projection layer instead of the original linear projection layer design. We show the training hyperparameters for both the first-stage vision-language alignment pretraining and the second-stage visual instruction tuning in Table 5. |
2302.14029 | On the weighted inequality between the Gagliardo and Sobolev seminorms | We prove weighted inequalities between the Gagliardo and Sobolev seminorms
and also between the Marcinkiewicz quasi-norm and the Sobolev seminorm. With
$A_1$ weights we improve earlier results of Bourgain, Brezis, and Mironescu. | Ritva Hurri-Syrjänen, Javier C. Martínez-Perales, Carlos Pérez, Antti V. Vähäkangas | 2023-02-27T18:41:01Z | http://arxiv.org/abs/2302.14029v1 | # On the weighted inequality between the Gagliardo and Sobolev seminorms
###### Abstract.
We prove weighted inequalities between the Gagliardo and Sobolev seminorms and also between the Marcinkiewicz quasi-norm and the Sobolev seminorm. With \(A_{1}\) weights we improve earlier results of Bourgain, Brezis, and Mironescu.
_2020 Mathematics Subject Classification._ Primary: 46E35. Secondary: 42B25.
_Key words and phrases._ Gagliardo seminorm, Sobolev seminorm, Muckenhoupt weight.

C. P. is supported by grant PID2020-113156GB-I00, Spanish Government; by the Basque Government through grant IT1247-19 and the BERC 2014-2017 program; and by the BCAM Severo Ochoa accreditation CEX2021-001142-S, Spanish Government. He is also very grateful to the Department of Mathematics of the University of Jyväskylä, where the third author was visiting faculty and where this research was carried out.
## 1. Introduction
Let \(0<\delta<1\), \(1\leq p<\infty\), \(w\in A_{1}\) and \(n\geq 2\). Then there exists a constant \(C(n)>0\) such that, for any cube \(Q\) in \(\mathbb{R}^{n}\) and any \(u\in C^{1}(Q)\), the inequality
\[\ell(Q)^{\delta}\left(\int_{Q}\int_{Q}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+\delta p} }\,\mathrm{d}y\,w(x)\,\mathrm{d}x\right)^{\frac{1}{p}}\leq\frac{C(n)^{p}}{(1- \delta)^{\frac{1}{p}}}[w]_{A_{1}}^{\frac{2}{p}}\ell(Q)\left(\int_{Q}|\nabla u(x )|^{p}\,w(x)\,\mathrm{d}x\right)^{\frac{1}{p}}.\]
holds, where \(\ell(Q)\) denotes the side length of the cube \(Q\).
This is an interesting result since it provides a weighted inequality between the fractional and classical Sobolev norms. It is a weighted extension of the more classical inequality given in Section 6. Since the weighted \((1,p)\)-Poincaré inequality holds for any Muckenhoupt weight in \(A_{p}\) (see [10]), we believe that the above result should also hold for any \(A_{p}\) weight.
**Theorem 2.2**.: _Let \(0<\delta<1\), \(1\leq p<\infty\), \(w\in A_{1}\) and \(n\geq 2\). There exists a dimensional constant \(C(n)>0\) such that, for any cube \(Q\) in \(\mathbb{R}^{n}\) and any \(u\in C^{1}(Q)\), the inequality_
\[\ell(Q)^{\delta}\left(\int_{Q}\int_{Q}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+\delta p }}\,\mathrm{d}y\,w(x)\,\mathrm{d}x\right)^{\frac{1}{p}}\leq\frac{C(n)^{p}}{(1 -\delta)^{\frac{1}{p}}}[w]_{A_{1}}^{\frac{2}{p}}\ell(Q)\left(\int_{Q}|\nabla u (x)|^{p}\,w(x)\,\mathrm{d}x\right)^{\frac{1}{p}}.\]
_holds._
We prove variants of Theorem 2.1 and Theorem 2.2 also for Borel measures \(\mu\) in general. We refer to inequality (5.1) and Section 5.2.
We believe that neither of the two theorems above is fully sharp, since the weight dependence \([w]_{A_{1}}^{2}\) we get is quadratic when \(p>1\), while in the case \(p=1\) the dependence is linear. However, the factor \((1-\delta)^{-2}\) in the case \(p=1\) is worse, and we believe that \((1-\delta)^{-1}\) should be the right constant in the inequality, as it is in the unweighted case. Actually, we have the following weak type result in which the conjectured constant is obtained, although the method of proof gives a higher power of \([w]_{A_{1}}\).
**Theorem 2.3**.: _Let \(0<\delta<1\), \(w\in A_{1}\) and \(n\geq 2\). Then there exists a constant \(C(n)>0\) such that, for any cube \(Q\) in \(\mathbb{R}^{n}\) and any \(u\in C^{1}(Q)\), the inequality_
\[\ell(Q)^{\delta}\left\|\frac{u(x)-u(y)}{|x-y|^{n+\delta}}\right\|_{L^{1,\infty }\left(Q\times Q,w(x)\,\mathrm{d}x\times\mathrm{d}y\right)}\leq C(n)\,\frac{[ w]_{A_{1}}^{2+\frac{1-\delta}{n}}}{\delta(1-\delta)}\ell(Q)\int_{Q}|\nabla u(x )|w(x)\,\mathrm{d}x.\]
_holds._
For the unweighted case we refer to the famous paper [3]. We also refer to the related interesting results in [10]. In any case, a result like the one in [2] is not known to us in settings other than the Euclidean one. It may happen that the right constant in terms of \(\delta\) is different from the one in the Euclidean case, namely \((1-\delta)^{-1}\).
## 3. Muckenhoupt weights, maximal operators and Riesz potentials
We review here some definitions and known results on the theory of Muckenhoupt weights, the Marcinkiewicz norm, maximal operators and fractional integrals. It will be useful here to denote by \(\mathcal{Q}\) the set of all cubes in \(\mathbb{R}^{n}\).
### Muckenhoupt weights
We start by recalling some definitions and known results about Muckenhoupt weights. A weight is a function \(w\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\) satisfying \(w(x)>0\) for almost every point \(x\in\mathbb{R}^{n}\). When \(1<p<\infty\), we say that a weight \(w\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\) is in the \(A_{p}\) class if its \(A_{p}\) constant,
\[[w]_{A_{p}}=\sup_{Q\in\mathcal{Q}}\left(\fint_{Q}w(x)\,\mathrm{d}x\right)\left(\fint_{Q}w(x)^{1-p^{\prime}}\,\mathrm{d}x\right)^{p-1},\]
is finite. In case \(p=1\), we say that \(w\in A_{1}\) if there is a constant \(C>0\) such that, for any cube \(Q\) in \(\mathbb{R}^{n}\),
\[\frac{1}{|Q|}\int_{Q}w(x)\,\mathrm{d}x\leq C\,\mathrm{ess}\,\inf_{x\in Q}w(x),\]
and the \(A_{1}\) constant \([w]_{A_{1}}\) is defined as the smallest of these constants \(C\).
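For later use, we record the standard pointwise reformulation of the \(A_{1}\) condition: since \(\operatorname{ess\,inf}_{Q}w\leq w(x)\) for almost every \(x\in Q\), taking the supremum over all cubes \(Q\) containing \(x\) shows that \(w\in A_{1}\) if and only if

\[Mw(x)\leq[w]_{A_{1}}\,w(x)\qquad\text{for almost every }x\in\mathbb{R}^{n},\]

where \(M\) is the non-centered Hardy-Littlewood maximal operator over cubes recalled in Section 3.3 below. This pointwise bound is used repeatedly in Section 5.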
Given \(1\leq p<\infty\), it turns out (see [11, p. 396]) that a weight \(w\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\) is in the \(A_{p}\) class if and only if there is a constant \(C>0\) such that, for every cube \(Q\) in \(\mathbb{R}^{n}\) and every nonnegative measurable function \(u\) on \(Q\), the inequality
\[\frac{1}{|Q|}\int_{Q}u(x)\,\,\mathrm{d}x\leq C\left(\frac{1}{w(Q)}\int_{Q}u(x)^ {p}w(x)\,\,\mathrm{d}x\right)^{\frac{1}{p}} \tag{3.1}\]
holds. Here, we denote the weighted measure of a measurable set \(E\) by \(w(E)=\int_{E}w(x)\,\mathrm{d}x\). Moreover, the smallest constant \(C\) satisfying the above inequality is precisely \([w]_{A_{p}}^{\frac{1}{p}}\). Given an \(A_{p}\) weight, a cube \(Q\in\mathcal{Q}\) and a measurable subset \(E\subset Q\), one can apply inequality (3.1) to the function \(u=\chi_{E}\) to get
\[\frac{|E|}{w(E)^{\frac{1}{p}}}\leq[w]_{A_{p}}^{\frac{1}{p}}\frac{|Q|}{w(Q)^{ \frac{1}{p}}}. \tag{3.2}\]
### Marcinkiewicz norms and Kolmogorov's inequality
Similarly, we will use the standard notation for the normalized Marcinkiewicz quasinorms: for any \(0<q<\infty\), any measurable set \(E\subset\mathbb{R}^{n}\) and a Borel measure \(\mu\), we define
\[\left\|u\right\|_{L^{q,\infty}\left(E,\frac{\mathrm{d}\mu}{\mu(E)}\right)}=\sup_{t>0}t\,\left(\frac{1}{\mu(E)}\mu(\{x\in E:|u(x)|>t\})\right)^{\frac{1}{q}}.\]
We shall use the following Kolmogorov inequality: given a Borel measure \(\mu\), we have that, for every \(0<q<r<\infty\) and every nonnegative measurable function \(u\) on a cube \(Q\),

\[\frac{1}{\mu(Q)}\int_{Q}u(x)^{q}\,\mathrm{d}\mu(x)\leq\frac{r}{r-q}\left\|u\right\|_{L^{r,\infty}\left(Q,\frac{\mathrm{d}\mu}{\mu(Q)}\right)}^{q} \tag{3.3}\]
See [11, p. 485], for instance.
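For completeness, here is the short layer-cake argument behind (3.3). Write \(N=\left\|u\right\|_{L^{r,\infty}\left(Q,\frac{\mathrm{d}\mu}{\mu(Q)}\right)}\), so that \(\mu(\{x\in Q:u(x)>t\})/\mu(Q)\leq\min\{1,(N/t)^{r}\}\) for every \(t>0\). Then

\[\frac{1}{\mu(Q)}\int_{Q}u(x)^{q}\,\mathrm{d}\mu(x)=\int_{0}^{\infty}qt^{q-1}\,\frac{\mu(\{x\in Q:u(x)>t\})}{\mu(Q)}\,\mathrm{d}t\leq\int_{0}^{N}qt^{q-1}\,\mathrm{d}t+N^{r}\int_{N}^{\infty}qt^{q-1-r}\,\mathrm{d}t=\frac{r}{r-q}\,N^{q}.\]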
### Maximal functions and Riesz potentials
Let \(\mu\) be a Borel measure in \(\mathbb{R}^{n}\). Let \(\alpha\geq 0\). We will denote by \(M^{c}_{\alpha}\mu\) the fractional centered Hardy-Littlewood maximal function of \(\mu\) on cubes, which is defined by
\[M^{c}_{\alpha}\mu(x):=\sup_{\ell>0}\ell^{\alpha}\,\frac{\mu(Q(x,\ell))}{|Q(x, \ell)|},\qquad x\in\mathbb{R}^{n}, \tag{3.4}\]
where \(Q(x,\ell)\) is the cube of side length \(\ell\) centered at \(x\) and the supremum is taken over all \(\ell>0\). The case \(\alpha=0\) corresponds to the usual centered Hardy-Littlewood maximal function, which we simply denote by \(M^{c}\mu\). We remove the superscript \(c\) from the notation when the supremum is taken over all cubes \(Q\) in \(\mathbb{R}^{n}\) satisfying \(x\in Q\). This allows us to define the fractional and classical non-centered Hardy-Littlewood maximal functions \(M_{\alpha}\mu\) and \(M\mu\), respectively, by
\[M_{\alpha}\mu(x):=\sup_{\mathcal{Q}\ni Q\ni x}\ell(Q)^{\alpha}\,\frac{\mu(Q)}{|Q|},\qquad x\in\mathbb{R}^{n}. \tag{3.5}\]
When a cube \(Q_{0}\) is given, we define \(M_{\alpha,Q_{0}}\mu\) by replacing the supremum in (3.5) by the supremum over all cubes \(Q\) in the class \(\mathcal{Q}(Q_{0})\) of all cubes \(Q\in\mathcal{Q}\) satisfying \(Q\subset Q_{0}\). When \(\mu\) is given by the integral of the absolute value of a function \(u\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\), that is, \(\mathrm{d}\mu=|u|\,\mathrm{d}x\), we replace \(\mu\) by \(u\) in the notation. We also define, for a given weight \(w\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\), the centered weighted maximal function \(M^{c}_{w}u\) of \(u\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\) by
\[M^{c}_{w}u(x):=\sup_{\ell>0}\frac{1}{w(Q(x,\ell))}\,\int_{Q(x,\ell)}|u(y)|\,\mathrm{d}w(y),\qquad x\in\mathbb{R}^{n}. \tag{3.6}\]
We will use the classical weighted Fefferman-Stein inequality [FS] for a Borel measure \(\mu\) and \(u\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\),
\[\left\|Mu\right\|_{L^{1,\infty}(d\mu)}\leq C(n)\,\int_{\mathbb{R}^{n}}|u(x)| \,M\mu(x)\,\mathrm{d}x. \tag{3.7}\]
The usual proof for weights, namely when \(\mathrm{d}\mu=w\,\mathrm{d}x\), goes through using, for instance, a covering lemma of Vitali type. From (3.7) we deduce, using Marcinkiewicz interpolation, that
\[\left\|Mu\right\|_{L^{p}(d\mu)}\leq C(n)\,p^{\prime}\left\|u\right\|_{L^{p}(M \mu)} \tag{3.8}\]
holds for every \(p\in(1,\infty)\).
Another family of operators which we shall be using and which are closely related to the fractional maximal functions are the fractional integral operators or Riesz potentials which, for \(0<\alpha<n\), are defined for a Borel measure \(\mu\) in \(\mathbb{R}^{n}\) by
\[I_{\alpha}\mu(x)=\int_{\mathbb{R}^{n}}\frac{\mathrm{d}\mu(y)}{|x-y|^{n-\alpha} }\,,\qquad x\in\mathbb{R}^{n}.\]
Whenever the measure \(\mu\) is given by a nonnegative function \(u\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\), that is, \(\mathrm{d}\mu(x)=u(x)\,\mathrm{d}x\), we get the usual fractional integral operator.
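To make these operators concrete, the following short script (a crude numerical illustration only, not used in any proof) approximates the centered fractional maximal function and the Riesz potential of a nonnegative function \(u\) on a one-dimensional grid, so \(n=1\) here purely for brevity; the grid size, the test function, and \(\alpha\) are arbitrary choices.

```python
import numpy as np

# Crude 1-d discretization (n = 1) of the centered fractional maximal function
# M^c_alpha u(x) = sup_{l > 0} l^alpha * (1/|Q(x,l)|) * int_{Q(x,l)} u(y) dy
# and of the Riesz potential I_alpha u(x) = int u(y) / |x - y|^{n - alpha} dy.
n_grid = 400
h = 1.0 / n_grid                                   # mesh width on [0, 1)
x = np.arange(n_grid) * h
u = np.maximum(0.0, 1.0 - 10.0 * np.abs(x - 0.5))  # a tent-shaped test function
alpha = 0.5

def maximal(u, h, alpha):
    out = np.zeros(len(u))
    csum = np.concatenate([[0.0], np.cumsum(u) * h])    # csum[j] ~ int_0^{x_j} u
    for i in range(len(u)):
        for r in range(1, len(u)):                      # half side-length in cells
            lo, hi = max(0, i - r), min(len(u), i + r + 1)
            avg = (csum[hi] - csum[lo]) / ((hi - lo) * h)
            out[i] = max(out[i], (2 * r * h) ** alpha * avg)
    return out

def riesz(u, h, alpha):
    out = np.zeros(len(u))
    idx = np.arange(len(u))
    for i in range(len(u)):
        dist = np.abs(idx - i) * h
        dist[i] = h / 2.0                               # tame the diagonal singularity
        out[i] = np.sum(u / dist ** (1.0 - alpha)) * h  # n - alpha = 1 - alpha here
    return out

print(maximal(u, h, alpha).max(), riesz(u, h, alpha).max())
```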
## 4. Estimates for operators and representation formulas
In this section we establish some basic results which will be used in the sequel. We first prove several estimates involving fractional maximal functions and Riesz potentials. Then we prove some representation formulas relating the oscillations of the functions under study with the aforementioned operators.
### Estimates for fractional maximal functions and Riesz potentials
We start with the following boundedness result for the local fractional maximal operator.
**Lemma 4.1**.: _There exists a constant \(C(n)\) such that, for any \(0<\alpha<n\), any \(w\in A_{1}\) and any nonnegative measurable function \(u\) on a cube \(Q_{0}\in\mathcal{Q}\),_
\[\int_{Q_{0}}M_{\alpha,Q_{0}}u(x)\,w(x)\,\mathrm{d}x\leq\,\frac{C(n)}{\alpha}\,[ w]^{1+\frac{\alpha}{n}}_{A_{1}}\ell(Q_{0})^{\alpha}\int_{Q_{0}}u(x)\,w(x)\, \mathrm{d}x.\]
Proof.: We first show that for every \(x\in Q_{0}\),
\[M_{\alpha,Q_{0}}u(x)\leq c_{n}\,[w]^{1+\frac{\alpha}{n}}_{A_{1}}\,\ell(Q_{0})^{ \alpha}\left(\frac{1}{w(Q_{0})}\int_{Q_{0}}u(y)\,w(y)\mathrm{d}y\right)^{\frac {\alpha}{n}}\,(M_{w}^{c}(\chi_{Q_{0}}u)(x))^{1-\frac{\alpha}{n}}\,. \tag{4.1}\]
Indeed, let \(\theta=\frac{\alpha}{n}\in(0,1)\). Let \(Q\subset Q_{0}\) be a cube with \(x\in Q\). By using inequality (3.1), we obtain
\[\frac{\ell(Q)^{\alpha}}{|Q|} \int_{Q}u(y)\,\mathrm{d}y=|Q|^{\frac{\alpha}{n}}\,\left(\frac{1}{ |Q|}\int_{Q}u(y)\,\mathrm{d}y\right)^{\theta}\left(\frac{1}{|Q|}\int_{Q}u(y) \,\mathrm{d}y\right)^{1-\theta}\] \[\leq\,|Q|^{\frac{\alpha}{n}}\,\left(\frac{[w]_{A_{1}}}{w(Q)}\int _{Q}u(y)\,w(y)\mathrm{d}y\right)^{\theta}\left(\frac{C(n)}{|Q(x,2\ell(Q))|} \int_{Q(x,2\ell(Q))}\chi_{Q_{0}}(y)u(y)\,\mathrm{d}y\right)^{1-\theta}\] \[\leq[w]^{\theta}_{A_{1}}\,|Q|^{\frac{\alpha}{n}}\,\left(\frac{1}{ w(Q)}\int_{Q}u(y)\,\mathrm{d}w(y)\right)^{\theta}\left(\frac{C(n)[w]_{A_{1}}}{w (Q(x,2\ell(Q)))}\int_{Q(x,2\ell(Q))}\chi_{Q_{0}}(y)u(y)\,\mathrm{d}w(y)\right) ^{1-\theta}\] \[\leq C(n)[w]_{A_{1}}\left(\frac{|Q|}{w(Q)}\right)^{\frac{\alpha}{ n}}\,\left(\int_{Q_{0}}u(y)\,\mathrm{d}w(y)\right)^{\frac{\alpha}{n}}\,(M_{w}^{c}( \chi_{Q_{0}}u)(x))^{1-\frac{\alpha}{n}}\,.\]
Now we can apply (3.2) to get
\[\frac{\ell(Q)^{\alpha}}{|Q|}\int_{Q}u(y)\,\mathrm{d}y \leq C(n)[w]_{A_{1}}\left(\frac{|Q|}{w(Q)}\right)^{\frac{\alpha}{ n}}\,\left(\int_{Q_{0}}u(y)\,\mathrm{d}w(y)\right)^{\frac{\alpha}{n}}\,(M_{w}^{c}( \chi_{Q_{0}}u)(x))^{1-\frac{\alpha}{n}}\] \[\leq C(n)[w]_{A_{1}}\left([w]_{A_{1}}\frac{|Q_{0}|}{w(Q_{0})} \right)^{\frac{\alpha}{n}}\,\left(\int_{Q_{0}}u(y)\,\mathrm{d}w(y)\right)^{ \frac{\alpha}{n}}\,(M_{w}^{c}(\chi_{Q_{0}}u)(x))^{1-\frac{\alpha}{n}}\] \[\leq C(n)[w]^{1+\frac{\alpha}{n}}_{A_{1}}\ell(Q_{0})^{\alpha}\, \left(\frac{1}{w(Q_{0})}\int_{Q_{0}}u(y)\,\mathrm{d}w(y)\right)^{\frac{\alpha }{n}}\,(M_{w}^{c}(\chi_{Q_{0}}u)(x))^{1-\frac{\alpha}{n}}\,.\]
By taking supremum over all cubes \(Q\in\mathcal{Q}(Q_{0})\) with \(x\in Q\), we get inequality (4.1).
Now we can apply inequality (4.1) to obtain
\[\frac{1}{w(Q_{0})}\int_{Q_{0}}M_{\alpha,Q_{0}}u(x)\,\mathrm{d}w(x)\] \[\leq c_{n}[w]^{1+\frac{\alpha}{n}}_{A_{1}}\,\ell(Q_{0})^{\alpha} \left(\frac{1}{w(Q_{0})}\int_{Q_{0}}u(x)\,\mathrm{d}w(x)\right)^{\frac{ \alpha}{n}}\,\frac{1}{w(Q_{0})}\int_{Q_{0}}\,(M_{w}^{c}(\chi_{Q_{0}}u)(x))^{1- \frac{\alpha}{n}}\,\,\mathrm{d}w(x).\]
Since \(M_{w}^{c}\) is of weak type \((1,1)\) with respect to \(\mathrm{d}w(x)\) and with norm depending just on \(n\) by the Besicovitch covering lemma, we can apply Kolmogorov's inequality (3.3) to bound the last integral from above as follows
\[\frac{1}{w(Q_{0})}\int_{Q_{0}}\,(M_{w}^{c}(\chi_{Q_{0}}u)(x))^{1- \frac{\alpha}{n}}\,\,w(x)\,\mathrm{d}x \leq\frac{n}{\alpha}\,\|M_{w}^{c}(\chi_{Q_{0}}u)\|^{1-\frac{ \alpha}{n}}_{L^{1,\infty}\big{(}Q_{0},\frac{\mathrm{d}w}{w(Q_{0})}\big{)}}\] \[\leq\frac{C(n)}{\alpha}\,\left(\frac{1}{w(Q_{0})}\int_{Q_{0}}u(x) \,w(x)\,\mathrm{d}x\right)^{1-\frac{\alpha}{n}},\]
By combining the above estimates, we get
\[\frac{1}{w(Q_{0})}\int_{Q_{0}}M_{\alpha,Q_{0}}u(x)\,w(x)\,\mathrm{d}x\leq\frac{C( n)}{\alpha}\left[w\right]_{A_{1}}^{1+\frac{\alpha}{n}}\frac{\ell(Q_{0})^{\alpha}}{w(Q_ {0})}\int_{Q_{0}}u(x)\,w(x)\,\mathrm{d}x\]
which is the result we wanted to prove.
The following lemma will be very useful. Inequality (4.2) is probably known but we provide a proof for convenience of the reader. This can also be found in [GLP]. Inequality (4.3) is well-known and follows from [He].
**Lemma 4.2**.: _Let \(Q_{0}\) be a cube in \(\mathbb{R}^{n}\), \(\mu\) be a Borel measure, and \(0<\alpha<n\). Then there is a constant \(C(n)>0\) such that the inequality_
\[I_{\alpha}(\chi_{Q_{0}}\mu)(x)\leq\frac{C(n)}{\alpha}\mu(Q_{0})^{\frac{\alpha} {n}}M^{c}(\chi_{Q_{0}}\mu)(x)^{\frac{n-\alpha}{n}} \tag{4.2}\]
_holds for every \(x\in\mathbb{R}^{n}\). Furthermore, the inequality_
\[I_{\alpha}(\chi_{Q_{0}}\mu)(x)\leq\frac{C(n)}{\alpha}\ell(Q_{0})^{\alpha}M( \chi_{Q_{0}}\mu)(x), \tag{4.3}\]
_holds for every \(x\in Q_{0}\). If \(w\in A_{1}\), then the inequality_
\[I_{\alpha}(\chi_{Q_{0}}w)(x)\leq\frac{C(n)}{\alpha}[w]_{A_{1}}^{\frac{n-\alpha}{n}}w(Q_{0})^{\frac{\alpha}{n}}w(x)^{\frac{n-\alpha}{n}} \tag{4.4}\]
_holds for almost every \(x\in\mathbb{R}^{n}\)._
Proof.: For \(x\in\mathbb{R}^{n}\) and \(t>0\) we denote
\[Q_{x,t}=Q\left(x,2t^{-\frac{1}{n-\alpha}}\right).\]
By the layer-cake formula, we obtain
\[\int_{Q_{0}}\frac{\mathrm{d}\mu(y)}{\left|x-y\right|^{n-\alpha}} =\int_{0}^{\infty}\mu\left(\left\{y\in Q_{0}:\frac{1}{\left|x-y \right|^{n-\alpha}}>t\right\}\right)\mathrm{d}t\] \[=\int_{0}^{\infty}\mu\left(\left\{y\in Q_{0}:\left|x-y\right|<t^{ -\frac{1}{n-\alpha}}\right\}\right)\mathrm{d}t\] \[\leq\int_{0}^{\infty}\min\left\{\mu(Q_{0}),\frac{\mu(Q_{0}\cap Q _{x,t})}{\left|Q_{x,t}\right|}\left|Q_{x,t}\right|\right\}\mathrm{d}t\] \[\leq\int_{0}^{\infty}\min\left\{\mu(Q_{0}),M^{c}(\chi_{Q_{0}}\mu) (x)\left(2t^{-\frac{1}{n-\alpha}}\right)^{n}\right\}\mathrm{d}t\] \[\leq\int_{0}^{\left(\frac{M^{c}(\chi_{Q_{0}}\mu)(x)}{\mu(Q_{0})} \right)^{\frac{n-\alpha}{n}}}\mu(Q_{0})\,\mathrm{d}t+C(n)\int_{\left(\frac{M^ {c}(\chi_{Q_{0}}\mu)(x)}{\mu(Q_{0})}\right)^{\frac{n-\alpha}{n}}}^{\infty}M^{ c}(\chi_{Q_{0}}\mu)(x)t^{-\frac{n}{n-\alpha}}\,\mathrm{d}t\] \[\leq\frac{C(n)}{\alpha}\mu(Q_{0})^{\frac{\alpha}{n}}M^{c}(\chi_{ Q_{0}}\mu)(x)^{\frac{n-\alpha}{n}},\]
which is inequality (4.2). Observe that inequality (4.4) is a direct consequence of (4.2).
To show (4.3), we use inequality (4.2) with the measure \(\chi_{Q_{0}}\mu\) to get
\[I_{\alpha}(\chi_{Q_{0}}\mu)(x) \leq\frac{C(n)}{\alpha}\mu(Q_{0})^{\frac{\alpha}{n}}M^{c}(\chi_{Q _{0}}\mu)(x)^{\frac{n-\alpha}{n}}\] \[=\frac{C(n)}{\alpha}|Q_{0}|^{\frac{\alpha}{n}}\left(\frac{\mu(Q_{ 0})}{|Q_{0}|}\right)^{\frac{\alpha}{n}}M^{c}(\chi_{Q_{0}}\mu)(x)^{\frac{n- \alpha}{n}}\] \[\leq\frac{C(n)}{\alpha}|Q_{0}|^{\frac{\alpha}{n}}M(\chi_{Q_{0}}\mu )(x)^{\frac{\alpha}{n}}M^{c}(\chi_{Q_{0}}\mu)(x)^{\frac{n-\alpha}{n}}\] \[=\frac{C(n)}{\alpha}\ell(Q_{0})^{\alpha}M(\chi_{Q_{0}}\mu)(x)\]
for every \(x\in Q_{0}\), since \(M^{c}(\chi_{Q_{0}}\mu)(x)\leq M(\chi_{Q_{0}}\mu)(x)\). This is the desired inequality (4.3).
### Representation formulas from Poincare-type inequalities
We present some local representation formulas which follow from general Poincare-type inequalities. Then we introduce some consequences of these representation formulas. The following lemma is key to our arguments. It essentially follows from [12] or [13], but it can also be obtained by following the proof of [14, Lemma 4.10], since cubes are examples of John domains. We are interested in tracking the constants involved in our estimates, and so we provide the proof here for the sake of clarity.
**Lemma 4.3**.: _Assume that \(0<\beta\leq\alpha<n\). Let \(Q_{0}\) be a cube in \(\mathbb{R}^{n}\). Suppose there exists a constant \(\kappa>0\), a function \(u\in L^{1}(Q_{0})\) and a nonnegative measurable function \(g\) such that_
\[\fint_{Q}|u(x)-u_{Q}|\,\mathrm{d}x\leq\kappa\,\ell(Q)^{\alpha}\fint_{Q}g(x)\, \mathrm{d}x\]
_for every cube \(Q\subset Q_{0}\). Then there exists a dimensional constant \(C(n)>0\) such that_
\[|u(x)-u(y)|\leq C(n)\,\frac{\kappa}{\beta}\,|x-y|^{\beta}\big{(}M_{\alpha- \beta,Q_{0}}(g\chi_{Q_{0}})(x)+M_{\alpha-\beta,Q_{0}}(g\chi_{Q_{0}})(y)\big{)}\]
_for every pair \(x,y\in Q_{0}\) of Lebesgue points of \(u\)._
Proof.: Pick two Lebesgue points \(x,y\in Q_{0}\) of \(u\) and let \(R_{0}\subset Q_{0}\) be a closed cube such that \(\ell(R_{0})\leq|x-y|\) and \(x,y\in R_{0}\). For every \(j\in\mathbb{N}\) there exists a cube \(R_{j}\subset R_{j-1}\) such that \(x\in R_{j}\) and \(\ell(R_{j})=2^{-1}\ell(R_{j-1})\). Since \(x\) is a Lebesgue point of \(u\), we can bound the oscillation of \(u\) over \(R_{0}\) by a telescoping sum of the distances between averages of \(u\) over the cubes \(R_{j}\), and then use the Poincare-type inequality as follows:
\[|u(x)-u_{R_{0}}| \leq\sum_{i=0}^{\infty}|u_{R_{i+1}}-u_{R_{i}}|\] \[\leq C(n)\,\kappa\,\sum_{i=0}^{\infty}(2^{-i}\ell(R_{0}))^{\beta} (2^{-i}\ell(R_{0}))^{\alpha-\beta}\fint_{R_{i}}g(z)\,\mathrm{d}z\] \[\leq C(n)\frac{\kappa}{1-2^{-\beta}}\ell(R_{0})^{\beta}M_{\alpha- \beta,Q_{0}}(g\chi_{Q_{0}})(x).\]
Repeating the argument with \(y\) yields the estimate
\[|u(x)-u(y)| \leq|u(x)-u_{R_{0}}|+|u(y)-u_{R_{0}}|\] \[\leq C(n)\frac{\kappa}{1-2^{-\beta}}|x-y|^{\beta}\big{(}M_{\alpha -\beta,Q_{0}}(g\chi_{Q_{0}})(x)+M_{\alpha-\beta,Q_{0}}(g\chi_{Q_{0}})(y)\big{)},\]
and this completes the proof, since \(1-2^{-\beta}\) is comparable to \(\beta\) as \(\beta\to 0\).
**Lemma 4.4**.: _Let \(Q_{0}\) be a cube in \(\mathbb{R}^{n}\). Assume that \(0<\alpha<n\) and consider \(0<\eta<n-\alpha\) and \(1\leq r<\infty\). Suppose that there exists a constant \(\kappa>0\), a function \(u\in L^{1}(Q_{0})\) and a nonnegative measurable function \(g\) such that_
\[\fint_{Q}\lvert u(x)-u_{Q}\rvert\,\mathrm{d}x\leq\kappa\,\ell(Q)^{\alpha}\left(\fint_{Q}g(x)^{r}\,\mathrm{d}x\right)^{\frac{1}{r}} \tag{4.5}\]
_for every cube \(Q\subset Q_{0}\). Then there exists a dimensional constant \(C(n)\) such that_
\[\lvert u(x)-u_{Q_{0}}\rvert\leq C(n)\,\frac{\kappa}{\alpha^{1/r^{\prime}}\eta^{1/r}}\,\ell(Q_{0})^{\alpha/r^{\prime}}\left(I_{\alpha}(g^{r}\chi_{Q_{0}})(x)\right)^{\frac{1}{r}} \tag{4.6}\]
_for every Lebesgue point \(x\in Q_{0}\) of \(u\)._
Proof.: This result is well known but we need to be precise with the main parameters involved. We adapt the main ideas from [10] in the case \(r=1\) and [10] when \(r>1\) and we also refer to [11]. Fix a Lebesgue point \(x\in Q_{0}\) of \(u\). Then there exists a chain \(\{Q_{k}\}_{k\in\mathbb{N}}\) of nested dyadic subcubes of \(Q_{0}\) such that \(Q_{1}=Q_{0}\), \(Q_{k+1}\subset Q_{k}\) with \(\lvert Q_{k}\rvert=2^{n}\lvert Q_{k+1}\rvert\) for all \(k\in\mathbb{N}\) and \(\{x\}=\bigcap_{k\in\mathbb{N}}Q_{k}\). Then, we can use an argument similar to the one in the proof of Lemma 4.3 to obtain that
\[\lvert u(x)-u_{Q_{0}}\rvert=\Bigl{\lvert}\lim_{k\to\infty}u_{Q_{k}}-u_{Q_{1}} \Bigr{\rvert}\leq\sum_{k\in\mathbb{N}}\left\lvert u_{Q_{k+1}}-u_{Q_{k}}\right\rvert.\]
Using the dyadic structure of the chain and the Poincare-type inequality (4.5), we obtain that
\[\sum_{k\in\mathbb{N}}\left\lvert u_{Q_{k+1}}-u_{Q_{k}}\right\rvert \leq\sum_{k\in\mathbb{N}}\frac{1}{\lvert Q_{k+1}\rvert}\int_{Q_{ k+1}}\lvert u(y)-u_{Q_{k}}\rvert\,\mathrm{d}y\] \[\leq 2^{n}\sum_{k\in\mathbb{N}}\frac{1}{\lvert Q_{k}\rvert}\int_{Q _{k}}\lvert u(y)-u_{Q_{k}}\rvert\,\mathrm{d}y\] \[\leq 2^{n}\kappa\sum_{k\in\mathbb{N}}\ell(Q_{k})^{\alpha}\left( \frac{1}{\lvert Q_{k}\rvert}\int_{Q_{k}}g(y)^{r}\,\mathrm{d}y\right)^{1/r}\] \[\leq 2^{n}\kappa\left(\sum_{k\in\mathbb{N}}\ell(Q_{k})^{\alpha} \right)^{1/r^{\prime}}\left(\sum_{k\in\mathbb{N}}\frac{\ell(Q_{k})^{\alpha}}{ \lvert Q_{k}\rvert}\int_{Q_{k}}g(y)^{r}\,\mathrm{d}y\right)^{1/r}\] \[\leq\frac{2^{n}}{(1-2^{-\alpha})^{1/r^{\prime}}}\kappa\ell(Q_{0 })^{\alpha/r^{\prime}}\left(\int_{Q_{0}}g(y)^{r}\sum_{k\in\mathbb{N}}\ell(Q_{k })^{\alpha-n}\chi_{Q_{k}}(y)\,\mathrm{d}y\right)^{1/r}.\]
Note that the immediate estimate \(\lvert x-y\rvert\leq\sqrt{n}\ell(Q_{k})\) produces an extra unwanted log factor when summing the series. We instead proceed as follows. Fix \(y\in Q_{0}\setminus\{x\}\) and pick \(0<\eta<n-\alpha\). Write \(k_{0}(y)=\max\{j\in\mathbb{N}:2^{j-1}\leq\sqrt{n}\frac{\ell(Q_{0})}{\lvert x-y \rvert}\}\). Then
\[\sum_{k\in\mathbb{N}}\ell(Q_{k})^{\alpha-n}\chi_{Q_{k}}(y) \leq\frac{C(n)}{\lvert x-y\rvert^{n-\alpha-\eta}}\sum_{k=1}^{k_{ 0}(y)}\ell(Q_{k})^{-\eta}\chi_{Q_{k}}(y)\] \[\leq\frac{C(n)}{\lvert x-y\rvert^{n-\alpha-\eta}\ell(Q_{0})^{ \eta}}\sum_{k=1}^{k_{0}(y)}2^{(k-1)\eta}\] \[\leq\frac{C(n)\,2^{\eta k_{0}(y)}}{\lvert x-y\rvert^{n-\alpha- \eta}\ell(Q_{0})^{\eta}(1-2^{-\eta})}\] \[\leq\frac{C(n)}{\lvert x-y\rvert^{n-\alpha}(1-2^{-\eta})}.\]
We conclude that the desired inequality
\[|u(x)-u_{Q_{0}}| \leq\frac{2^{n}C(n)^{1/r}\kappa}{(1-2^{-\alpha})^{1/r^{\prime}}(1-2^ {-\eta})^{1/r}}\ell(Q_{0})^{\alpha/r^{\prime}}\left(\int_{Q_{0}}\frac{g(y)^{r}}{ |x-y|^{n-\alpha}}\,\mathrm{d}y\right)^{\frac{1}{r}}\] \[\leq\frac{C(n)\kappa}{\alpha^{1/r^{\prime}}\eta^{1/r}}\ell(Q_{0}) ^{\alpha/r^{\prime}}(I_{\alpha}(g^{r}\chi_{Q_{0}})(x))^{1/r}\]
holds for Lebesgue points \(x\in Q_{0}\) of \(u\).
For the case \(\alpha=1\) we get a sharper result.
**Lemma 4.5**.: _Let \(Q_{0}\) be a cube in \(\mathbb{R}^{n}\) with \(n\geq 2\). Suppose there exists a constant \(\kappa>0\), a function \(u\in L^{1}(Q_{0})\) and a nonnegative measurable function \(g\) such that_
\[\fint_{Q}|u(x)-u_{Q}|\,\mathrm{d}x\leq\kappa\,\ell(Q)\fint_{Q}g(x)\,\mathrm{d}x\]
_for every cube \(Q\subset Q_{0}\). Then there exists a dimensional constant \(C(n)\) such that_
\[|u(x)-u(y)|\leq C(n)\,\kappa\,|x-y|\min_{z\in\{x,y\}}M(g\chi_{Q_{0}})(z)^{ \frac{1}{n}}\max_{z\in\{x,y\}}M(g\chi_{Q_{0}})(z)^{\frac{1}{n^{\prime}}} \tag{4.7}\]
_for every pair \(x,y\in Q_{0}\) of Lebesgue points of \(u\)._
Proof.: Pick two Lebesgue points \(x,y\in Q_{0}\) of \(u\), and let \(R_{0}\subset Q_{0}\) be a closed cube such that \(\ell(R_{0})\leq|x-y|\) and \(x,y\in R_{0}\). The representation formula (4.6) in Lemma 4.4, with \(\alpha=r=1\), yields
\[|u(x)-u_{R_{0}}|\leq C(n)\,\kappa\,I_{1}(g\chi_{R_{0}})(x).\]
Next we use inequality (4.2) in Lemma 4.2 to derive inequality
\[|u(x)-u_{R_{0}}|\leq C(n)\,\kappa\,\left(\int_{R_{0}}g(z)\,\mathrm{d}z\right) ^{\frac{1}{n}}M^{c}(\chi_{R_{0}}g)(x)^{\frac{1}{n^{\prime}}}.\]
Similar estimates hold for \(y\).
Since \(x,y\in R_{0}\), we get
\[|u(x)-u(y)| \leq|u(x)-u_{R_{0}}|+|u(y)-u_{R_{0}}|\] \[\leq C(n)\,\kappa\,\left(\left(\int_{R_{0}}g(z)\,\mathrm{d}z \right)^{\frac{1}{n}}M^{c}(\chi_{R_{0}}g)(x)^{\frac{1}{n^{\prime}}}+\left( \int_{R_{0}}g(z)\,\mathrm{d}z\right)^{\frac{1}{n}}M^{c}(\chi_{R_{0}}g)(y)^{ \frac{1}{n^{\prime}}}\right)\] \[=C(n)\,\kappa\,\ell(R_{0})\left(\fint_{R_{0}}g(z)\chi_{Q_{0}}\, \mathrm{d}z\right)^{\frac{1}{n}}\left(M^{c}(\chi_{Q_{0}}g)(x)^{\frac{1}{n^{ \prime}}}+M^{c}(\chi_{Q_{0}}g)(y)^{\frac{1}{n^{\prime}}}\right)\] \[\leq 2C(n)\,\kappa|x-y|\min_{z\in\{x,y\}}M(g\chi_{Q_{0}})(z)^{ \frac{1}{n}}\max_{z\in\{x,y\}}M(g\chi_{Q_{0}})(z)^{\frac{1}{n^{\prime}}}.\]
This is the desired inequality (4.7).
## 5. Proofs of theorems 2.1, 2.2, and 2.3
In this section we give the proofs of the main theorems of this paper. Each proof is given in a separate subsection.
### Proof of Theorem 2.1
We first let \(\mu\) be a Borel measure in \(\mathbb{R}^{n}\); at the end of the proof, we will apply the obtained estimates in the case \(\mathrm{d}\mu=w\,\mathrm{d}x\). Fix a cube \(Q\) in \(\mathbb{R}^{n}\), \(n\geq 2\), and a function \(u\in C^{1}(Q)\). By the \((1,1)\)-Poincare inequality (1.1), there exists a dimensional constant \(C(n)>0\) such that
\[\fint_{R}|u(x)-u_{R}|\,\mathrm{d}x\leq C(n)\,\ell(R)\fint_{R}|\nabla u(x)|\,\mathrm{d}x\]
for every cube \(R\subset Q\). We use Lemma 4.3 with parameters \(\beta=\alpha=1\), \(g=|\nabla u|\) and \(\kappa=C(n)\). Hence, for any pair of Lebesgue points \(x,y\in Q\) of \(u\), we have
\[|u(x)-u(y)|\leq C(n)\,|x-y|(M_{Q}(g\chi_{Q})(x)+M_{Q}(g\chi_{Q})(y)).\]
Since \(u\) is continuous, we know that every point of \(Q\) is a Lebesgue point of \(u\). By applying the triangle inequality, Tonelli's theorem and Lemma 4.2 twice,
\[\left(\int_{Q}\int_{Q}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+\delta p}} \,\mathrm{d}y\,\,\mathrm{d}\mu(x)\right)^{\frac{1}{p}} \leq C(n)\left(\int_{Q}\int_{Q}\frac{M_{Q}(g\chi_{Q})(x)^{p}}{|x- y|^{n-p(1-\delta)}}\,\mathrm{d}y\,\,\mathrm{d}\mu(x)\right)^{\frac{1}{p}}\] \[\leq C(n)\frac{\ell(Q)^{1-\delta}}{p^{\frac{1}{p}}(1-\delta)^{ \frac{1}{p}}}\left(\int_{Q}M(g\chi_{Q})(x)^{p}\,\,\mathrm{d}\mu(x)\right)^{ \frac{1}{p}}\] \[\quad+C(n)\left(\int_{Q}M(g\chi_{Q})(y)^{p}\int_{Q}\frac{\mathrm{ d}\mu(x)}{|x-y|^{n-p(1-\delta)}}\,\mathrm{d}y\right)^{\frac{1}{p}}\] \[\leq C(n)\frac{\ell(Q)^{1-\delta}}{p^{\frac{1}{p}}(1-\delta)^{ \frac{1}{p}}}\left(\int_{Q}M(g\chi_{Q})(x)^{p}\,\mathrm{d}\mu(x)\right)^{\frac {1}{p}}\] \[\quad+C(n)\frac{\ell(Q)^{1-\delta}}{p^{\frac{1}{p}}(1-\delta)^{ \frac{1}{p}}}\left(\int_{Q}M(g\chi_{Q})(y)^{p}\,M(\chi_{Q}\mu)(y)\,\mathrm{d}y \,\right)^{\frac{1}{p}}.\]
Since \(1<p<\infty\), we can apply the Fefferman-Stein inequality (3.8) to get
\[\left(\int_{Q}\int_{Q}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+\delta p}} \,\mathrm{d}y\,\mathrm{d}\mu(x)\right)^{\frac{1}{p}}\\ \leq C(n)p^{\prime}\frac{\ell(Q)^{1-\delta}}{(1-\delta)^{\frac{1 }{p}}}\left\{\ \left(\int_{Q}g(x)^{p}\,M(\mu\chi_{Q})(x)\mathrm{d}x\right)^{\frac{1}{p}}+ \left(\int_{Q}g(y)^{p}\,M^{2}(\mu\chi_{Q})(y)\,\mathrm{d}y\right)^{\frac{1}{p} }\right\}.\]
Since \(M(\mu\chi_{Q})\leq M^{2}(\mu\chi_{Q})\) almost everywhere, this finishes the proof of the inequality
\[\ell(Q)^{\delta}\Bigg{(}\int_{Q}\int_{Q} \frac{|u(x)-u(y)|^{p}}{|x-y|^{n+\delta p}}\,\mathrm{d}y\,\mathrm{d }\mu(x)\Bigg{)}^{\frac{1}{p}} \tag{5.1}\] \[\leq\frac{C(n)\,p^{\prime}}{(1-\delta)^{\frac{1}{p}}}\ell(Q) \left(\int_{Q}|\nabla u(x)|^{p}\,M^{2}(\chi_{Q}\mu)(x)\,\mathrm{d}x\right)^{ \frac{1}{p}}\]
for Borel measures \(\mu\) in \(\mathbb{R}^{n}\). If \(\mathrm{d}\mu(x)=w(x)\,\mathrm{d}x\) for some \(w\in A_{1}\), then \(M^{2}(w\chi_{Q})\leq[w]_{A_{1}}^{2}w\) almost everywhere, and this implies the inequality in the theorem.
### Proof of Theorem 2.2
We first let \(\mu\) be a Borel measure in \(\mathbb{R}^{n}\); at the end of the proof, we will apply the obtained estimate in the case \(\mathrm{d}\mu=w\,\mathrm{d}x\). Fix a cube \(Q\) in \(\mathbb{R}^{n}\) and a function \(u\in C^{1}(Q)\). By the \((1,1)\)-Poincare inequality (1.1), there exists a dimensional constant \(C(n)>0\) such that
\[\fint_{R}|u(x)-u_{R}|\,\mathrm{d}x\leq C(n)\,\ell(R)\fint_{R}|\nabla u(x)|\, \mathrm{d}x\]
for every cube \(R\subset Q\). We use Lemma 4.5 with parameters \(g=|\nabla u|\) and \(\kappa=C(n)\). Hence, there exists a dimensional constant \(C(n)\) such that, for every pair of Lebesgue points \(x,y\in Q\) of \(u\)
\[|u(x)-u(y)|\leq C(n)\,|x-y|\min_{z\in\{x,y\}}M(g\chi_{Q})(z)^{\frac{1}{n}}\max_{z\in\{x,y\}}M(g\chi_{Q})(z)^{\frac{1}{n^{\prime}}}.\]
Let us define the set
\[A=\{(x,y)\in Q\times Q:M(g\chi_{Q})(x)\leq M(g\chi_{Q})(y)\}.\]
Since every point of \(Q\) is a Lebesgue point of \(u\in C^{1}(Q)\), we get
\[\int_{Q}\int_{Q} \frac{|u(x)-u(y)|}{|x-y|^{n+\delta}}\,\mathrm{d}y\,\mathrm{d}\mu(x)\] \[\leq\int_{Q}\int_{Q}\frac{\min_{z\in\{x,y\}}M(g\chi_{Q_{0}})(z)^{ \frac{1}{n}}\max_{z\in\{x,y\}}M(g\chi_{Q_{0}})(z)^{\frac{1}{n^{\prime}}}}{|x- y|^{n-(1-\delta)}}\,\mathrm{d}y\,\mathrm{d}\mu(x)\] \[=\int_{Q}\int_{Q}\chi_{A}(x,y)\frac{M(g\chi_{Q})(x)^{\frac{1}{n}} M(g\chi_{Q})(y)^{\frac{1}{n^{\prime}}}}{|x-y|^{n-(1-\delta)}}\,\mathrm{d}y\, \mathrm{d}\mu(x)\] \[\qquad\qquad+\int_{Q}\int_{Q}\chi_{(Q\times Q)\setminus A}(x,y) \frac{M(g\chi_{Q})(y)^{\frac{1}{n}}M(g\chi_{Q})(x)^{\frac{1}{n^{\prime}}}}{|x -y|^{n-(1-\delta)}}\,\mathrm{d}y\,\mathrm{d}\mu(x)\] \[=I+II.\]
We work first with \(I\). By Tonelli's theorem,
\[I \leq\int_{Q}M(g\chi_{Q})(x)^{\frac{1}{n}}\int_{Q}\frac{M(g\chi_{Q })(y)^{\frac{1}{n^{\prime}}}}{|x-y|^{n-(1-\delta)}}\,\mathrm{d}y\,\mathrm{d}\mu (x)\] \[=\int_{Q}M(g\chi_{Q})(x)^{\frac{1}{n}}\,I_{1-\delta}(\chi_{Q}M(g \chi_{Q})^{\frac{1}{n^{\prime}}})(x)\,\mathrm{d}\mu(x).\]
By the Coifman-Rochberg lemma [1, pp. 158-159] we know that \(M(g\chi_{Q})(y)^{\frac{1}{n^{\prime}}}\in A_{1}\) with a constant depending on the dimension. In particular, we get
\[M^{c}(\chi_{Q}M(g\chi_{Q})^{\frac{1}{n^{\prime}}})\leq C(n)M(g\chi_{Q})^{ \frac{1}{n^{\prime}}}\]
almost everywhere in \(Q\). We use this estimate together with inequality (4.2) in Lemma 4.2 to get
\[I \leq\frac{C(n)}{1-\delta}\left(\int_{Q}M(g\chi_{Q})(x)^{\frac{1}{ n^{\prime}}}\,\mathrm{d}x\right)^{\frac{1-\delta}{n}}\int_{Q}M(g\chi_{Q})(x)^{ \frac{1}{n}}\,M(g\chi_{Q})(x)^{\frac{n-(1-\delta)}{nn^{\prime}}}\,\mathrm{d} \mu(x)\] \[=\frac{C(n)}{1-\delta}\ell(Q)^{1-\delta}\,\left(\fint_{Q}M(g\chi_ {Q})(x)^{\frac{1}{n^{\prime}}}\,\mathrm{d}x\right)^{\frac{1-\delta}{n^{\prime} }}\int_{Q}M(g\chi_{Q})(x)^{1-\frac{1-\delta}{nn^{\prime}}}\,\mathrm{d}\mu(x).\]
To estimate the last two integrals we use Kolmogorov's inequality (3.3). By using also the weak type \((1,1)\) estimate for the maximal operator, we estimate the first integral
\[\fint_{Q}M(g\chi_{Q})(x)^{\frac{1}{n^{\prime}}}\,\mathrm{d}x\leq C(n)\|M(g\chi _{Q})\|_{L^{1,\infty}\left(Q,\frac{\mathrm{d}x}{|Q|}\right)}^{\frac{1}{n^{ \prime}}}\leq C(n)\left(\fint_{Q}g\chi_{Q}(x)\,\mathrm{d}x\right)^{\frac{1}{n^{ \prime}}}.\]
We estimate the second integral using Kolmogorov's inequality (3.3) and the Fefferman-Stein inequality (3.7), thus getting
\[\frac{1}{\mu(Q)}\int_{Q}\,M(g\chi_{Q})(x)^{1-\frac{1-\delta}{nn^{ \prime}}}\,\,\mathrm{d}\mu(x) \leq\frac{C(n)}{1-\delta}\left\|M(g\chi_{Q})\right\|_{L^{1,\infty }\left(Q,\frac{\mathrm{d}\mu}{\mu(Q)}\right)}^{1-\frac{1-\delta}{nn^{\prime}}}\] \[\leq\frac{C(n)}{1-\delta}\,\left(\sup_{t>0}\frac{t}{\mu(Q)}\mu \{x\in\mathbb{R}^{n}:M(g\chi_{Q})(x)>t\}\right)^{1-\frac{1-\delta}{nn^{\prime}}}\] \[\leq\frac{C(n)}{1-\delta}\,\left(\frac{1}{\mu(Q)}\int_{Q}g(x)\,M \mu(x)\,\mathrm{d}x\right)^{1-\frac{1-\delta}{nn^{\prime}}}.\]
We have shown that
\[I\leq\frac{C(n)}{(1-\delta)^{2}}\ell(Q)^{1-\delta}\,\left(\fint_{Q}g\chi_{Q}( x)\,\mathrm{d}x\right)^{\frac{1-\delta}{nn^{\prime}}}\left(\frac{1}{\mu(Q)} \int_{Q}g(x)\,M\mu(x)\,\mathrm{d}x\right)^{1-\frac{1-\delta}{nn^{\prime}}}\mu (Q)\,.\]
for Borel measures \(\mu\) in \(\mathbb{R}^{n}\). If we choose the measure \(\mu\) to be defined through an \(A_{1}\) weight \(w\), we get
\[I \leq\frac{C(n)}{(1-\delta)^{2}}\ell(Q)^{1-\delta}\,\left(\fint_{Q} g\chi_{Q}(x)\,\mathrm{d}x\right)^{\frac{1-\delta}{nn^{\prime}}}\left(\frac{1}{w(Q)} \int_{Q}g(x)\,Mw(x)\,\mathrm{d}x\right)^{1-\frac{1-\delta}{nn^{\prime}}}w(Q)\] \[\leq\frac{C(n)}{(1-\delta)^{2}}\ell(Q)^{1-\delta}\,\left(\frac{[ w]_{A_{1}}}{w(Q)}\int_{Q}g(x)\,w(x)\,\mathrm{d}x\right)^{\frac{1-\delta}{nn^{ \prime}}}\left(\frac{[w]_{A_{1}}}{w(Q)}\int_{Q}g(x)\,w(x)\,\mathrm{d}x\right)^ {1-\frac{1-\delta}{nn^{\prime}}}w(Q)\] \[\leq\frac{C(n)\,[w]_{A_{1}}}{(1-\delta)^{2}}\,\ell(Q)^{1-\delta} \,\int_{Q}g(x)\,w(x)\,\mathrm{d}x.\]
Since the term \(II\) can be treated in the same way, we have finished the proof of the theorem.
### Proof of Theorem 2.3
We adapt the non-weighted proof from [1].
We have to prove that
\[\left\|\frac{u(x)-u(y)}{|x-y|^{n+\delta}}\right\|_{L^{1,\infty}\left(Q\times Q,\mathrm{d}w(x)\times\mathrm{d}y\right)}\leq C(n)\,\frac{[w]_{A_{1}}^{2+\frac{ 1-\delta}{n}}}{\delta(1-\delta)}\ell(Q)^{1-\delta}\int_{Q}|\nabla u(x)|w(x)\, \mathrm{d}x.\]
By the \((1,1)\)-Poincare inequality (1.1), there exists a dimensional constant \(C(n)>0\) such that
\[\fint_{R}|u(x)-u_{R}|\,\mathrm{d}x\leq c_{n}\,\ell(R)\fint_{R}|\nabla u(x)|\, \mathrm{d}x\]
for every cube \(R\subset Q\). We use Lemma 4.3 with \(g=|\nabla u|,\ \kappa=C(n)\) and \(0<\beta=\delta<1=\alpha\). Hence, for any pair of Lebesgue points \(x,y\in Q\) of \(u\), the inequality
\[|u(x)-u(y)|\leq\,\frac{C(n)}{\delta}\,|x-y|^{\delta}(M_{1-\delta,Q}(g\chi_{Q} )(x)+M_{1-\delta,Q}(g\chi_{Q})(y))\]
holds.
Thus, we can estimate
\[\left\|\frac{u(x)-u(y)}{|x-y|^{n+\delta}}\right\|_{L^{1,\infty} \left(Q\times Q,\mathrm{d}w(x)\times\mathrm{d}y\right)}\] \[\leq\frac{C(n)}{\delta}\left\|\frac{M_{1-\delta,Q}g(x)}{|x-y|^{n} }\right\|_{L^{1,\infty}\left(Q\times Q,\mathrm{d}w(x)\times\mathrm{d}y \right)}+\frac{C(n)}{\delta}\left\|\frac{M_{1-\delta,Q}g(y)}{|x-y|^{n}}\right\| _{L^{1,\infty}\left(Q\times Q,\mathrm{d}w(x)\times\mathrm{d}y\right)}\] \[=\frac{C(n)}{\delta}\,(I+II).\]
In order to estimate \(I\), we write
\[E_{t}=\left\{(x,y):\frac{M_{1-\delta,Q}g(x)}{|x-y|^{n}}>t\right\}\]
for every \(t>0\). Then, by definition of the weak quasinorm and Tonelli's theorem,
\[I=\sup_{t>0}t\int_{Q}\int_{Q}\chi_{E_{t}}(x,y)w(x)\,\mathrm{d}x\,\mathrm{d}y= \sup_{t>0}t\int_{Q}\int_{Q}\chi_{E_{t}}(x,y)\mathrm{d}y\,w(x)\,\mathrm{d}x\,.\]
For a fixed \(x\), we easily see that \(\chi_{E_{t}}(x,\cdot)\) is the characteristic function of the ball centered at \(x\) with radius
\[\left(\frac{M_{1-\delta,Q}g(x)}{t}\right)^{\frac{1}{n}},\]
and hence
\[I\leq C(n)\sup_{t>0}\,t\int_{Q}\frac{M_{1-\delta,Q}g(x)}{t}w(x)\,\mathrm{d}x= C(n)\int_{Q}M_{1-\delta,Q}g(x)w(x)\,\mathrm{d}x. \tag{5.2}\]
Using Lemma 4.1, we finally get
\[I\leq C(n)\frac{[w]_{A_{1}}^{1+\frac{1-\delta}{n}}}{1-\delta}\ell(Q)^{1- \delta}\int_{Q}g(x)\,w(x)\,\mathrm{d}x,\]
and this yields the estimate for \(I\).
It remains to estimate the second term
\[II=\sup_{t>0}\,t\int_{Q}\int_{Q}\chi_{F_{t}}(x,y)w(x)\,\mathrm{d}x\,\mathrm{d}y,\]
where
\[F_{t}=\left\{(x,y):\frac{M_{1-\delta,Q}g(y)}{|x-y|^{n}}>t\right\}\,.\]
For a fixed \(y\), we see that \(\chi_{F_{t}}(\cdot,y)\) is the characteristic function of the ball centered at \(y\) and with radius
\[r_{y,t}=\left(\frac{M_{1-\delta,Q}g(y)}{t}\right)^{\frac{1}{n}}.\]
Hence,
\[II \leq\sup_{t>0}t\int_{Q}\,w(B(y,r_{y,t}))\,\mathrm{d}y\] \[\leq\sup_{t>0}t\int_{Q}\,\frac{w(Q(y,r_{y,t}))}{|Q(y,r_{y,t})|} \,|Q(y,r_{y,t})|\,\mathrm{d}y\] \[\leq c_{n}\int_{Q}M^{c}w(y)\,M_{1-\delta,Q}g(y)\,\mathrm{d}y\leq c _{n}[w]_{A_{1}}\int_{Q}M_{1-\delta,Q}g(y)\,w(y)\mathrm{d}y\,.\]
Using Lemma 4.1,
\[II\leq c_{n}\frac{[w]_{A_{1}}^{2+\frac{1-\delta}{n}}}{1-\delta}\ell(Q)^{1- \delta}\int_{Q}g(x)\,w(x)\,\mathrm{d}x,\]
and this yields the estimate for \(II\). This concludes the proof of the theorem.
## 6. Appendix
In this appendix we prove inequality (1.4) which appears already in [1, Theorem 1]. Our proof is based on very elementary calculations. The case \(p=1\) of this approach appears in [13].
**Lemma 6.1**.: _Let \(Q\) be a cube in \(\mathbb{R}^{n}\) and let \(u\in C^{1}(Q)\). Let \(0<\delta<1\) and \(1\leq p<\infty\). There is a dimensional constant \(C(n)>0\) such that_
\[\left(\fint_{Q}\int_{Q}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+\delta p}}\,\mathrm{d}y \,\mathrm{d}x\right)^{\frac{1}{p}}\leq\frac{C(n)}{\alpha(\delta,p)}\ell(Q)^{1 -\delta}\left(\fint_{Q}|\nabla u(x)|^{p}\,\mathrm{d}x\right)^{\frac{1}{p}}. \tag{6.1}\]
_Furthermore,_
\[\alpha(\delta,p)=\left\{\begin{array}{lll}((1-\delta)p)^{\frac{1}{p}}\,(1-(1-\delta)p)^{\frac{1}{p}}&\text{if}&(1-\delta)p<1,\\ ((1-\delta)p-1)^{\frac{1}{p}}&\text{if}&(1-\delta)p>1,\\ 1&\text{if}&(1-\delta)p=1,\end{array}\right.\]
_and hence \(\alpha(\delta,p)\approx(1-\delta)^{\frac{1}{p}}\) as \(\delta\to 1\)._
Proof.: Let us first prove (6.1). By the Fundamental Theorem of Calculus one can write, for every \(x,y\in Q\),
\[u(y)-u(x)=\int_{0}^{1}\nabla u(x+t(y-x))\cdot(y-x)\,\mathrm{d}t,\]
where \(\cdot\) represents the usual scalar product in \(\mathbb{R}^{n}\). Thus, using this equality, Holder's inequality and Tonelli's theorem,
\[\begin{split}&\left(\fint_{Q}\int_{Q}\frac{|u(x)-u(y)|^{p}}{|x- y|^{n+\delta p}}\,\mathrm{d}y\,\mathrm{d}x\right)^{\frac{1}{p}}\\ &\qquad\leq\left(\fint_{Q}\int_{Q}\int_{0}^{1}\frac{|\nabla u(x+t (y-x))|^{p}}{|x-y|^{n+\delta p-p}}\mathrm{d}t\,\mathrm{d}y\,\mathrm{d}x\right) ^{\frac{1}{p}}\\ &\qquad=\left(\fint_{Q}\int_{0}^{1}\int_{Q\cap B(x,\sqrt{n}\ell(Q ))}\frac{|\nabla u(x+t(y-x))|^{p}}{|x-y|^{n-(1-\delta)p}}\,\mathrm{d}y\, \mathrm{d}t\,\mathrm{d}x\right)^{\frac{1}{p}},\end{split} \tag{6.2}\]
since \(Q\subset B(x,\sqrt{n}\ell(Q))\) for any \(x\in Q\). By change of variables \(z=x+t(y-x)=(1-t)x+ty\), one has
1. By convexity, \(x,y\in Q\) implies \(z\in Q\), so \(\chi_{Q}(y)=\chi_{Q}(z)\).
2. \(|x-y|=|z-x|/t\).
Thus, we continue with
\[\begin{split}&=\left(\fint_{Q}\int_{0}^{1}\int_{((1-t)x+tQ)\cap B (x,\sqrt{n}\ell(Q))}\frac{|\nabla u(z)|^{p}}{|z-x|^{n-(1-\delta)p}}\frac{t^{n -(1-\delta)p}}{t^{n}}\,\mathrm{d}z\,\mathrm{d}t\,\mathrm{d}x\right)^{\frac{1} {p}}\\ &\leq\left(\fint_{Q}\int_{0}^{1}\int_{Q\cap B(x,\sqrt{n}t\ell(Q))} \frac{|\nabla u(z)|^{p}}{|z-x|^{n-(1-\delta)p}}t^{-(1-\delta)p}\,\mathrm{d}z \,\mathrm{d}t\,\mathrm{d}x\right)^{\frac{1}{p}}\\ &=\left(\fint_{Q}\int_{Q}\int_{\frac{|z-x|}{\sqrt{n}\ell(Q)}} \frac{\mathrm{d}t}{t^{(1-\delta)p}}\frac{|\nabla u(z)|^{p}}{|z-x|^{n-(1- \delta)p}}\,\mathrm{d}z\,\mathrm{d}x\right)^{\frac{1}{p}},\end{split} \tag{6.3}\]
where we used Tonelli's theorem again in the last equality.
There are three possibilities depending on the value of \((1-\delta)p\).
**Case 1.** If \((1-\delta)p<1\), then we can extend the integral to zero, and we get the following upper bound
\[\begin{split}&\frac{1}{(1-(1-\delta)p)^{\frac{1}{p}}}\,\left( \fint_{Q}\int_{Q}\frac{|\nabla u(z)|^{p}}{|z-x|^{n-(1-\delta)p}}\,\mathrm{d}z \,\mathrm{d}x\right)^{\frac{1}{p}}\\ &\qquad=\frac{1}{(1-(1-\delta)p)^{\frac{1}{p}}}\,\left(\fint_{Q} |\nabla u(z)|^{p}\int_{Q}\frac{\mathrm{d}x}{|z-x|^{n-(1-\delta)p}}\,\mathrm{d} z\right)^{\frac{1}{p}}\\ &\qquad\leq\frac{1}{(1-(1-\delta)p)^{\frac{1}{p}}}\,\left(\fint_{ Q}|\nabla u(z)|^{p}\frac{C(n)\ell(Q)^{(1-\delta)p}}{v_{n}^{(1-\delta)p/n}(1- \delta)p}\mathrm{d}z\right)^{\frac{1}{p}}\\ &\qquad\leq\frac{C(n)\ell(Q)^{1-\delta}}{v_{n}^{\frac{1-\delta}{ n}}((1-\delta)p)^{\frac{1}{p}}(1-(1-\delta)p)^{\frac{1}{p}}}\left(\fint_{Q}| \nabla u(x)|^{p}\,\mathrm{d}x\right)^{\frac{1}{p}},\end{split} \tag{6.4}\]
where we used Tonelli's theorem and the fact that, for a Lebesgue measurable set \(E\) and \(0<\alpha<n\),
\[\int_{E}\frac{\mathrm{d}z}{|x-z|^{n-\alpha}}\leq C(n)v_{n}^{-\frac{\alpha}{n} }\alpha^{-1}|E|^{\frac{\alpha}{n}},\qquad\text{for all }x\in\mathbb{R}^{n}, \tag{6.5}\]
where \(v_{n}\) is the volume of the unit ball of \(\mathbb{R}^{n}\). See Lemma 4.2.
**Case 2.** If \((1-\delta)p>1\), then we can extend the upper bound of the integral in \(t\) up to infinity and then compute it. This way we obtain the following bound
\[\begin{split}&\frac{1}{((1-\delta)p-1)^{\frac{1}{p}}}\left(\fint_{Q}\int_{Q}\left(\frac{\sqrt{n}\,\ell(Q)}{|z-x|}\right)^{(1-\delta)p-1}\frac{|\nabla u(z)|^{p}}{|z-x|^{n-(1-\delta)p}}\,\mathrm{d}z\,\mathrm{d}x\right)^{\frac{1}{p}}\\ &\qquad=\frac{(\sqrt{n}\,\ell(Q))^{(1-\delta)-\frac{1}{p}}}{((1-\delta)p-1)^{\frac{1}{p}}}\left(\fint_{Q}|\nabla u(z)|^{p}\int_{Q}\frac{\mathrm{d}x}{|z-x|^{n-1}}\,\mathrm{d}z\right)^{\frac{1}{p}}\\ &\qquad\leq\frac{C(n)\,\ell(Q)^{1-\delta}}{((1-\delta)p-1)^{\frac{1}{p}}}\left(\fint_{Q}|\nabla u(x)|^{p}\,\mathrm{d}x\right)^{\frac{1}{p}},\end{split} \tag{6.6}\]
where again we used Tonelli's theorem and inequality (6.5).
**Case 3.** If \((1-\delta)p=1\), then we compute the integral in \(t\) and use the elementary inequality
\[\log s\leq\frac{s^{q}-1}{q}\leq\frac{s^{q}}{q}\]
which holds whenever \(s>1\) and \(q>0\). Applying this, say, with \(q=\frac{1}{2}\) we get the upper bound
\[\begin{split}&\left(\fint_{Q}\int_{Q}\log\left(\frac{\sqrt{n}\ell(Q )}{|z-x|}\right)\frac{|\nabla u(z)|^{p}}{|z-x|^{n-1}}\mathrm{d}z\mathrm{d}x \right)^{\frac{1}{p}}\\ &\qquad\leq C(n)\ell(Q)^{\frac{q}{p}}\left(\fint_{Q}|\nabla u(z) |^{p}\int_{Q}\frac{\mathrm{d}x}{|z-x|^{n-(1-q)}}\mathrm{d}z\right)^{\frac{1}{p}} \\ &\qquad\leq C(n)\,\ell(Q)^{\frac{q}{p}+\frac{(1-q)}{p}}\left( \fint_{Q}|\nabla u(z)|^{p}\mathrm{d}z\right)^{\frac{1}{p}}\\ &\qquad=C(n)\,\ell(Q)^{1-\delta}\left(\fint_{Q}|\nabla u(z)|^{p} \mathrm{d}z\right)^{\frac{1}{p}},\end{split} \tag{6.7}\]
where Tonelli's theorem and inequality (6.5) have been used one more time.
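As a quick numerical illustration of Lemma 6.1 (a discretized sanity check, not a proof), one can compare both sides of (6.1) on the unit square for a smooth test function; the function, the grid resolution, and the parameters \(\delta\) and \(p\) below are arbitrary choices.

```python
import numpy as np

# Numerical sanity check of Lemma 6.1 on Q = [0,1]^2 (so n = 2, l(Q) = 1, |Q| = 1).
m, delta, p = 24, 0.5, 2.0
h = 1.0 / m
xs = (np.arange(m) + 0.5) * h
X, Y = np.meshgrid(xs, xs, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
ux = 2 * np.pi * np.cos(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # du/dx
uy = -2 * np.pi * np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)  # du/dy

pts = np.stack([X.ravel(), Y.ravel()], axis=1)
vals = u.ravel()
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)  # the diagonal is integrable; drop it in the Riemann sum

lhs = ((np.abs(vals[:, None] - vals[None, :]) ** p / D ** (2 + delta * p)).sum()
       * h ** 4) ** (1 / p)                    # Gagliardo seminorm (fint = int here)
rhs = (((ux ** 2 + uy ** 2) ** (p / 2)).sum() * h ** 2) ** (1 / p)  # Sobolev seminorm
print(lhs, rhs, lhs / rhs)  # the ratio should stay of order C(n) / alpha(delta, p)
```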
**Remark 6.2**.: _Let \(Q\) be a cube in \(\mathbb{R}^{n}\) and let \(u\in L^{1}(Q)\). Let \(0<\beta\leq\delta<1\) and \(1\leq p<\infty\). Then it is straightforward to show that_
\[\ell(Q)^{\beta}\left(\mathchoice{{\vbox{\hbox{$-$}}\kern-13.499949pt}}{{ \vbox{\hbox{$-$}}\kern-13.499949pt}}{{\vbox{\hbox{$-$}} \kern-13.499949pt}}{{\vbox{\hbox{$-$}}\kern-13.499949pt}}\!\int_{Q} \frac{|u(x)-u(y)|^{p}}{|x-y|^{n+\beta p}}\,\mathrm{d}y\,\mathrm{d}x\right)^{ \frac{1}{p}}\leq C(n)\ell(Q)^{\delta}\left(\mathchoice{{\vbox{\hbox{$-$}} \kern-13.499949pt}}{{\vbox{\hbox{$-$}}\kern-13.499949pt}}{{\vbox{ \hbox{$-$}}\kern-13.499949pt}}\!\int_{Q}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+ \beta p}}\,\mathrm{d}y\,\mathrm{d}x\right)^{\frac{1}{p}}\,.\]
|
2309.01030 | Online Adaptive Mahalanobis Distance Estimation | Mahalanobis metrics are widely used in machine learning in conjunction with
methods like $k$-nearest neighbors, $k$-means clustering, and $k$-medians
clustering. Despite their importance, there has not been any prior work on
applying sketching techniques to speed up algorithms for Mahalanobis metrics.
In this paper, we initiate the study of dimension reduction for Mahalanobis
metrics. In particular, we provide efficient data structures for solving the
Approximate Distance Estimation (ADE) problem for Mahalanobis distances. We
first provide a randomized Monte Carlo data structure. Then, we show how we can
adapt it to provide our main data structure which can handle sequences of
\textit{adaptive} queries and also online updates to both the Mahalanobis
metric matrix and the data points, making it amenable to be used in conjunction
with prior algorithms for online learning of Mahalanobis metrics. | Lianke Qin, Aravind Reddy, Zhao Song | 2023-09-02T22:19:24Z | http://arxiv.org/abs/2309.01030v2 | # Online Adaptive Mahalanobis Distance Estimation +
###### Abstract
Mahalanobis metrics are widely used in machine learning in conjunction with methods like \(k\)-nearest neighbors, \(k\)-means clustering, and \(k\)-medians clustering. Despite their importance, there has not been any prior work on applying sketching techniques to speed up algorithms for Mahalanobis metrics. In this paper, we initiate the study of dimension reduction for Mahalanobis metrics. In particular, we provide efficient data structures for solving the Approximate Distance Estimation (ADE) problem for Mahalanobis distances. We first provide a randomized Monte Carlo data structure. Then, we show how we can adapt it to provide our main data structure which can handle sequences of _adaptive_ queries and also online updates to both the Mahalanobis metric matrix and the data points, making it amenable to be used in conjunction with prior algorithms for online learning of Mahalanobis metrics.
###### Contents
* 1 Introduction
* 2 Related Work
* 2.1 Approximate Adaptive Distance Estimation
* 2.2 Online Metric Learning
* 3 Preliminaries
* 3.1 Notation
* 3.2 Background
* 3.3 Applications of our Data-Structure
* 4 Technique Overview
* 5 JL sketch for approximate Mahalanobis distance estimation
* 6 Online Adaptive Mahalanobis Distance Maintenance
* 7 Evaluation
* 8 Conclusion
* A Proofs for Online Adaptive Mahalanobis Distance Maintenance
* B Sampling
* C More Experiments
## 1 Introduction
The choice of metric is critical for the success of a wide range of machine learning tasks, such as \(k\)-nearest neighbors, \(k\)-means clustering, and \(k\)-medians clustering. In many applications, the metric is often not provided explicitly and needs to be learned from data [2, 1, 13]. The most common class of such learned metrics is the set of Mahalanobis metrics, which have been shown to have good generalization performance. For a set of points \(\mathcal{X}=\{x_{1},x_{2},\ldots,x_{n}\}\in\mathbb{R}^{d}\), a Mahalanobis metric \(d\) on \(\mathcal{X}\) is characterized by a positive semidefinite matrix \(A\in\mathbb{R}^{d\times d}\) such that \(d(x_{i},x_{j})=\sqrt{(x_{i}-x_{j})^{\top}A(x_{i}-x_{j})}\). Mahalanobis distances are generalizations of traditional Euclidean distances, and they allow for arbitrary linear scaling and rotations of the feature space.
In parallel to the large body of work on learning Mahalanobis metrics, the use of sketching techniques has also become increasingly popular in dealing with real-world data that is often high-dimensional data and also extremely large in terms of the number of data points. In particular, a large body of influential work has focused on sketching for Euclidean distances [1, 1, 10]. Despite the importance of Mahalanobis metrics in practical machine learning applications, there has not been any prior work focusing on dimension reduction for Mahalanobis distances.
In this paper, we initiate the study of dimension reduction for Mahalanobis distances. In particular, we focus on the Approximate Distance Estimation (ADE) problem [14] in which the goal is to build an efficient data structure that can estimate the distance from a given query point to a set of private data points, even when the queries are provided adaptively by an adversary. ADE has many machine learning applications where the input data can be easily manipulated by users and the model accuracy is critical, such as network intrusion detection [1, 13], strategic classification [12], and autonomous driving [11, 1, 12]. We formulate the ADE problem with a Mahalanobis distance as:
**Definition 1** (Approximate Mahalanobis Distance Estimation).: _For a Mahalanobis distance characterized by a positive semi-definite matrix \(A\in\mathbb{R}^{d\times d}\), given a set of data points \(\mathcal{X}=\{x_{1},x_{2},\ldots,x_{n}\}\subset\mathbb{R}^{d}\) and an accuracy parameter \(\varepsilon\in(0,1)\), we need to output a data structure \(\mathcal{D}\) such that: Given only \(\mathcal{D}\) stored in memory with no direct access to \(\mathcal{X}\), our query algorithm must respond to queries of the form \(q\in\mathbb{R}^{d}\) by reporting distance estimates \(\widetilde{d}_{1},\ldots,\widetilde{d}_{n}\) satisfying_
\[\forall i\in[n],(1-\varepsilon)\|q-x_{i}\|_{A}\leq\widetilde{d}_{i}\leq(1+ \varepsilon)\|q-x_{i}\|_{A}.\]
We also consider an online version of the problem, where the underlying Mahalanobis distance changes over iterations. This is an important problem, because the distance metric \(A\) is learned from data and can change over time in many applications of Mahalanobis distances. For example, for an Internet image search application, the continuous collection of input images changes the distance metric \(A\). We formulate our online version of the ADE problem with Mahalanobis distances as follows:
**Definition 2** (Online Adaptive Mahalanobis Distance Estimation).: _We need to design a data structure that efficiently supports any sequence of the following operations:_
* Initialize\((U\in\mathbb{R}^{k\times d},\{x_{1},x_{2},\cdots,x_{n}\}\subset\mathbb{R}^{d}, \varepsilon\in(0,1),\delta\in(0,1))\)_. The data structure takes_ \(n\) _data points_ \(\{x_{1},x_{2},\ldots,x_{n}\}\subset\mathbb{R}^{d}\)_, a_ \(k\times d\) _matrix_ \(U\) _(which defines the metric with_ \(A=U^{\top}U\)_), an accuracy parameter_ \(\varepsilon\) _and a failure probability_ \(\delta\) _as input._
* Update\(U(u\in\mathbb{R}^{d},a\in[k])\)_. Updates the_ \(a\)_-th row of_ \(k\times d\) _matrix_ \(U\)_._
* UpdateX\((z\in\mathbb{R}^{d},i\in[n])\)_. Updates the data structure by replacing the \(i\)-th data point with \(z\)._
* \(\textsc{QueryPair}(i,j\in[n])\)_. Outputs a number \(p\) such that \((1-\varepsilon)\|x_{i}-x_{j}\|_{A}\leq p\leq(1+\varepsilon)\cdot\|x_{i}-x_{j}\|_{A}\) with probability at least \(1-\delta\)._
* \(\textsc{QueryAll}(q\in\mathbb{R}^{d})\)_. Outputs a set of distance estimates_ \(\{\widetilde{d}_{1},\cdots,\widetilde{d}_{n}\}\subset\mathbb{R}\) _such that_ \(\forall i\in[n],(1-\varepsilon)\|q-x_{i}\|_{A}\leq\widetilde{d}_{i}\leq(1+ \varepsilon)\|q-x_{i}\|_{A}\)_. with probability at least_ \(1-\delta\)_._
* \(\textsc{SampleExact}(q\in\mathbb{R}^{d})\)_. Samples an index \(i\in[n]\) with probability \(d_{i}/\sum_{j=1}^{n}d_{j}\), where \(d_{i}=\|q-x_{i}\|_{A}\)._
Our randomized data structure (Algorithms 2, 3 and 5) supports approximate Mahalanobis distance estimation with online updates to both the distance metric matrix and the data points, even for a sequence of adaptively chosen queries. It also supports sampling an index \(i\in[n]\) with probability \(d_{i}/\sum_{j=1}^{n}d_{j}\).
Roadmap. In Section 2, we discuss related works in online metric learning, sketching, and adaptive distance estimation. In Section 3, we provide some notation used throughout the paper and also provide some background. In Section 5, we demonstrate how we can use the well-known JL sketch [11] to design a data-structure which can support Mahalanobis distance estimation. In Section 6, we present our adaptive Mahalanobis distance maintenance data structure and prove its time and correctness guarantees. We provide an experimental evaluation of our data structure in Section 7. We end with our conclusion in Section 8.
## 2 Related Work
Sketching. Sketching is a well-known technique to improve performance or memory complexity [10]. It has wide applications in linear algebra, such as linear regression and low-rank approximation [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29], training over-parameterized neural networks [21, 22, 23, 24, 25], empirical risk minimization [20, 21], linear programming [20, 22, 23, 24, 25, 26, 27, 28, 29], distributed problems [21, 22, 24, 25], clustering [26], generative adversarial networks [27], kernel density estimation [28, 29], tensor decomposition [20], trace estimation [21], projected gradient descent [20, 21], matrix sensing [22, 23, 24, 25], softmax regression [20, 21, 23, 25, 26, 28, 27, 29], semi-definite programming [20], kernel methods [1, 23, 24], adversarial training [21], cutting plane methods [20], discrepancy [15], federated learning [22, 23], Kronecker projection maintenance [21], reinforcement learning [1, 22, 24], relational databases [23] and attention computation [24].
### Approximate Adaptive Distance Estimation
There has been increasing interest in understanding threats associated with the deployment of algorithms in potentially adversarial settings [2, 21, 22, 23]. Additionally, the problem of preserving statistical validity when conducting exploratory data analysis has been extensively studied [1, 1, 14, 15, 22, 24, 25, 26, 27], where the goal is to maintain coherence with an unknown distribution from which data samples are acquired. In the application of approximate nearest neighbor search, the works of [19, 18, 17] present non-trivial data structures for distance estimation which support adaptive queries. In the context of streaming algorithms, the work of [1] designs streaming algorithms that support both adaptive queries and updates.
### Online Metric Learning
A number of recent techniques consider the metric learning problem [11, 12, 13]. Most works handle offline learning of Mahalanobis distances, which often results in expensive optimization algorithms. POLA [10] is an algorithm for online Mahalanobis metric learning that optimizes a large-margin objective and establishes provable regret bounds. However, it requires an eigenvector computation at each iteration to ensure positive definiteness, which can be time-consuming in practice. The information-theoretic metric learning approach of [10] presents an online algorithm that eliminates eigenvector decomposition operations. LEGO [11] requires no additional work for enforcing positive definiteness and can be implemented efficiently in practice. [11] leverages random projection in distance metric learning for high-dimensional data.
## 3 Preliminaries
### Notation
For any natural number \(n\), we use \([n]\) to denote the set \(\{1,2,\ldots,n\}\). We use \(A^{\top}\) to denote the transpose of matrix \(A\). For any vector \(v\in\mathbb{R}^{d}\) and positive semi-definite matrix \(A\in\mathbb{R}^{d\times d}\), we use \(\|v\|_{A}\) to denote \(\sqrt{v^{\top}Av}\). We use \(\mathcal{N}(0,I)\) to denote the standard Gaussian distribution. For a probabilistic event \(f(x)\), we define \(\mathbf{1}\{f(x)\}\) such that \(\mathbf{1}\{f(x)\}=1\) if \(f(x)\) holds and \(\mathbf{1}\{f(x)\}=0\) otherwise. For a vector \(v\in\mathbb{R}^{d}\) and a real-valued random variable \(Y\), we will abuse notation and use \(\operatorname{median}(v)\) and \(\operatorname{median}(Y)\) to denote the median of the entries of \(v\) and of the distribution of \(Y\), respectively. We use \(\Pr[\cdot]\) to denote probability, and \(\mathbb{E}[\cdot]\) to denote expectation when it exists. We use \(\mathbf{Var}[\cdot]\) to denote the variance of a random variable. We use \(\widetilde{O}(f)\) to denote \(O(f\,\mathrm{poly}(\log f))\).
### Background
In this section, we provide several definitions and probability results.
**Definition 3** (Low-Rank Mahalanobis Pseudo-Metric).: _For any set of points \(\mathcal{X}\subset\mathbb{R}^{d}\), a pseudo-metric is a function \(d:\mathcal{X}\times\mathcal{X}\mapsto\mathbb{R}_{\geq 0}\) such that \(\forall x,y,z\in\mathcal{X}\), it satisfies: \(d(x,x)=0,d(x,y)=d(y,x),d(x,y)\leq d(x,z)+d(z,y)\)._
_As in some prior work on pseudo-metric learning [10], we only consider pseudo-metrics which can be written in the form \(d_{A}(x,y)=(x-y)^{\top}A(x-y)\) where \(A\) is a positive semi-definite matrix. We define the rank of a pseudo-metric \(d_{A}\) to be the rank of \(A\)._
Note that the distinction between pseudo-metrics and metrics is not very crucial for our results. In addition to the above properties, a metric satisfies the property that for any \(x,y\in\mathcal{X},d(x,y)=0\) if and only if \(x=y\).
We will use the following property from [14] for our sketching matrices \(\{\Pi_{j}\in\mathbb{R}^{m\times k}\}_{j=1}^{L}\) where \(L\) denotes the number of sketching matrices.
**Definition 4** (\((\varepsilon,\beta)\)-representative, Definition B.2 in [14]).: _A set of matrices \(\{\Pi_{j}\in\mathbb{R}^{m\times k}\}_{j=1}^{L}\) is said to be \((\varepsilon,\beta)\)-representative for any \(\varepsilon,\beta\in(0,1/2)\) if_
\[\forall\ \|v\|_{2}=1,\ \sum_{j=1}^{L}\mathbf{1}\{(1-\varepsilon)\leq\|\Pi_{j}v \|_{2}\leq(1+\varepsilon)\}\geq(1-\beta)L.\]
The above definition implies that if a set of matrices \(\{\Pi_{j}\in\mathbb{R}^{m\times k}\}_{j=1}^{L}\) satisfies this property, then for any unit vector \(v\), most of the projections \(\Pi_{j}v\) approximately preserve its length.
We will make use of Hoeffding's Inequality:
**Lemma 5** (Hoeffding's Inequality [10]).: _Let \(X_{1},\ldots,X_{n}\) be independent random variables such that \(X_{i}\in[a_{i},b_{i}]\) almost surely for \(i\in[n]\) and let \(S=\sum_{i=1}^{n}X_{i}-\mathbb{E}[X_{i}]\). Then, for every \(t>0\) :_
\[\Pr[S\geq t]\leq\exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}\left(b_{i}-a_{i} \right)^{2}}\right).\]
We will use the following guarantee (restated in our notation) from [10],
**Lemma 6**.: _Let \(\beta\in[0.1,0.2]\). For any accuracy parameter \(\varepsilon\in(0,1)\) and failure probability \(\delta\in(0,1)\), if \(m=\Omega(\frac{1}{\varepsilon^{2}}),L=\Omega((d+\log(1/\delta))\log(d/ \varepsilon))\), the set of sketching matrices \(\{\Pi_{j}\in\mathbb{R}^{m\times k}\}_{j=1}^{L}\) where every entry of each matrix is drawn i.i.d. from \(\mathcal{N}(0,1/m)\) satisfies:_
\[\forall\|v\|_{2}=1,\ \sum_{j=1}^{L}\mathbf{1}\{(1-\varepsilon)\leq\|\Pi_{j}v\|_ {2}\leq(1+\varepsilon)\}\geq(1-\beta)L\]
_with probability at least \(1-\delta\)._
The above lemma implies that, with high probability, the set of sketching matrices \(\{\Pi_{j}\}_{j=1}^{L}\) is \((\varepsilon,\beta)\)-representative.
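As an empirical illustration of this property (with much smaller, arbitrarily chosen parameters than the lemma requires), one can count how many i.i.d. Gaussian sketches preserve the norm of a random unit vector:

```python
import numpy as np

# Empirical check of the (eps, beta)-representative property (Definition 4) for
# i.i.d. N(0, 1/m) sketching matrices; all parameters here are illustrative only.
rng = np.random.default_rng(0)
k, m, L, eps = 50, 200, 100, 0.2
sketches = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, k)) for _ in range(L)]

for _ in range(5):
    v = rng.normal(size=k)
    v /= np.linalg.norm(v)                       # a random unit vector
    good = sum(1 for Pi in sketches
               if 1 - eps <= np.linalg.norm(Pi @ v) <= 1 + eps)
    print(f"{good}/{L} sketches preserve this vector's norm up to a 1±{eps} factor")
```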
### Applications of our Data-Structure
Developing efficient algorithms for Nearest Neighbor Search (NNS) has been a focus of intense research, driven by a variety of real-world applications ranging from computer vision and information retrieval to database query [1, 1, 2]. A line of work, starting with the foundational results of [12, 1], has obtained query time sub-linear in \(n\) for the approximate variant, since fast algorithms for exact NNS [11, 12] consume an impractical amount of space. The Locality Sensitive Hashing (LSH) approach of [12] gives a Monte Carlo randomized approach with low memory and query time, but it does not support adaptive queries. Although the algorithms of [1, 10] support adaptive queries and have sublinear query time, they use large space, are only designed for finding a single approximate nearest neighbor, and do not provide distance estimates to all data points in the database per query.
For nonparametric models including SVMs, distance estimates to potentially every point in the dataset may be required [1, 1, 1]. For simpler tasks like \(k\)-nearest neighbor classification or database search, modifying previous approaches to return \(k\) nearest neighbors instead of \(1\) results in a factor \(k\) increase in overall query time. Therefore, developing efficient variants that can be deployed in adversarial settings is an important endeavor, and an adaptive approximate distance estimation procedure is a useful primitive for this purpose.
## 4 Technique Overview
For a set of points \(\mathcal{X}=\{x_{1},x_{2},\ldots,x_{n}\}\in\mathbb{R}^{d}\), a Mahalanobis distance metric \(d\) on \(\mathcal{X}\) is characterized by a positive semidefinite matrix \(A\in\mathbb{R}^{d\times d}\) such that
\[d(x_{i},x_{j})=\sqrt{(x_{i}-x_{j})^{\top}A(x_{i}-x_{j})}.\]
First, we provide a randomized sketching data structure \(\mathcal{D}\) which can answer approximate non-adaptive Mahalanobis distance estimation queries \(q\in\mathbb{R}^{d}\) via \(\widetilde{d_{i}}=\|\Pi Uq-\widetilde{x}_{i}\|_{2}\) in Theorem 7, where \(\Pi\) is a Johnson-Lindenstrauss [1] sketching matrix and \(U^{\top}U=A\).
To make our data structure resistant to adaptively chosen queries, we will use the \((\varepsilon,\beta)\)-representative property of Definition 4: a set of matrices \(\{\Pi_{j}\in\mathbb{R}^{m\times k}\}_{j=1}^{L}\) is \((\varepsilon,\beta)\)-representative for \(\varepsilon,\beta\in(0,1/2)\) if at least \((1-\beta)L\) of the sketching matrices achieve \((1\pm\varepsilon)\)-approximate distance estimation. With
\[L=O((d+\log\frac{1}{\delta})\log(d/\varepsilon))\]
independent instantiations of the randomized sketching data structure \(\mathcal{D}\) satisfying the \((\varepsilon,0.1)\)-representative property, we know that for any given query vector \(q\in\mathbb{R}^{d}\), most of the sketching matrices approximately preserve its length. Then we design a set of unit vectors \(\{v_{i}=\frac{U(q-x_{i})}{\|U(q-x_{i})\|_{2}}\}_{i=1}^{n}\) for query \(q\in\mathbb{R}^{d}\) and data points \(\{x_{1},x_{2},\ldots,x_{n}\}\subset\mathbb{R}^{d}\) and show that most of the projections \(\Pi_{j}\) satisfy \((1\pm\varepsilon)\)-approximate Mahalanobis distance estimation. We define the set \(\mathcal{J}\coloneqq\{j:(1-\varepsilon)\leq\|\Pi_{j}Uv\|\leq(1+\varepsilon)\}\), which has size at least \(0.9L\). Querying with a single uniformly random sketching matrix \(\Pi_{j}\) thus gives an \(\varepsilon\)-approximation with constant success probability \(0.9\). To boost the success probability from constant to high probability, we sample \(R=O(\log(n/\delta))\) indexes \(\{j_{r}\}_{r=1}^{R}\) from \([L]\), and obtain \(R\) estimates of the Mahalanobis distance between \(q\) and \(x_{i}\). By Hoeffding's Inequality (Lemma 5), the median of the sampled \(R\) distance estimates provides a \((1\pm\varepsilon)\)-approximate answer with high probability.
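To make this pipeline concrete, here is a minimal NumPy sketch of the estimate-and-take-median scheme. The class name `MahalanobisSketch`, the array layout, and the parameter defaults are our own illustrative choices, not from the paper's algorithms; the segment tree and the update operations of the full data structure are omitted.

```python
import numpy as np

class MahalanobisSketch:
    """Minimal sketch of the estimate-and-median scheme (illustrative only)."""

    def __init__(self, U, X, m=64, L=32, R=9, seed=0):
        rng = np.random.default_rng(seed)
        self.U, self.X = U, X                      # U in R^{k x d}, X in R^{n x d}
        # L independent JL sketches, entries iid N(0, 1/m)
        self.Pis = rng.normal(0.0, 1.0 / np.sqrt(m), size=(L, m, U.shape[0]))
        # Precompute sketched points: x~_{i,j} = Pi_j U x_i
        self.sketched = np.einsum('lmk,kd,nd->lnm', self.Pis, U, X)
        self.L, self.R = L, R
        self.rng = rng

    def query_all(self, q):
        """Return estimates d~_i of ||q - x_i||_A for all i."""
        js = self.rng.integers(0, self.L, size=self.R)  # sample R sketch indices
        Uq = self.U @ q
        ests = np.stack([np.linalg.norm(self.Pis[j] @ Uq - self.sketched[j], axis=1)
                         for j in js])                  # shape (R, n)
        return np.median(ests, axis=0)                  # median boosts success prob.
```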
To support sparse updates of the Mahalanobis metric matrix \(U\), we update \(\widetilde{x}_{i,j}\) with the sparse update matrix \(B\) for all \(n\) data points and \(L\) sketching matrices in \(O((m+d)nL)\) time. To update the \(i\)-th data point with a new vector \(z\in\mathbb{R}^{d}\), we recompute \(\widetilde{x}_{i,j}=\Pi_{j}Uz\) for the \(L\) sketching matrices in \(O((m+d)kL)\) time.
To support sampling an index \(i\) with probability proportional to the distance between \(q\) and \(x_{i}\), we first leverage a segment tree to obtain \(\sum_{\ell=i}^{j}x_{\ell}\) in logarithmic time, from which we compute \(\sum_{\ell=i}^{j}\|q-x_{\ell}\|_{A}\). In each iteration, we decide whether to proceed to the left or the right half by sampling a value uniformly from \([0,1]\) and comparing it with \(\frac{\sum_{\ell=i}^{\mathrm{mid}}\|q-x_{\ell}\|_{A}}{\sum_{\ell=i}^{j}\|q-x_{\ell}\|_{A}}\), where \(\mathrm{mid}\) is the midpoint of the current range \([i,j]\). After \(T=O(\log n)\) iterations, we obtain the final sampled index in \(O(\log^{2}n+kd\log n)\) time.
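As a minimal illustration of this top-down sampling, the sketch below replaces the segment tree with a plain prefix-sum array and uses exact distances; the binary search over cumulative sums plays the role of the left/right descent. The function name and interface are ours, for illustration only.

```python
import numpy as np

def sample_proportional(q, X, U, rng=None):
    """Sample index i with probability ||q - x_i||_A / sum_j ||q - x_j||_A
    (exact distances and a prefix-sum array stand in for the segment tree)."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm((q - X) @ U.T, axis=1)   # ||q - x_i||_A = ||U(q - x_i)||_2
    cdf = np.cumsum(d)                          # prefix sums over the leaf order
    r = rng.uniform(0.0, cdf[-1])               # uniform mass in [0, total)
    return int(np.searchsorted(cdf, r, side='right'))  # = the left/right descent
```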
## 5 JL sketch for approximate Mahalanobis distance estimation
In this section, we provide a data structure which uses the well-studied Johnson-Lindenstrauss sketch [10] to solve the approximate Mahalanobis distance estimation problem. We remark that other standard sketches like the AMS sketch [1] or CountSketch [13] can also be used to obtain similar data structures which support Mahalanobis distance estimation with Monte Carlo guarantees.
**Theorem 7** (Guarantees for JL sketch for Mahalanobis Metrics).: _There is a data structure (Algorithm 1) for Approximate Mahalanobis Distance Estimation with the following procedures:_
* \(\textsc{Initialize}(U\in\mathbb{R}^{k\times d},\{x_{1},x_{2},\ldots,x_{n}\} \subset\mathbb{R}^{d},\varepsilon\in(0,1),\delta\in(0,1))\)_: Given a matrix_ \(U\in\mathbb{R}^{k\times d}\)_, a list of vectors_ \(\{x_{1},x_{2},\cdots,x_{n}\}\subset\mathbb{R}^{d}\)_, accuracy parameter_ \(\varepsilon\in(0,1)\)_, and failure probability_ \(\delta\in(0,1)\)_, the data-structure pre-processes in time_ \(\widetilde{O}((d+m)k)\)_._
* \(\textsc{Query}(q\in\mathbb{R}^{d})\)_: Given a query vector_ \(q\in\mathbb{R}^{d}\)_, Query outputs approximate Mahalanobis distance estimates_ \(\{d_{i}\}_{i=1}^{n}\) _in time_ \(O((d+m)k)\) _such that_ \[\Pr\Big{[}(1-\varepsilon)\|q-x_{i}\|_{A}\leq d_{i}\leq(1+ \varepsilon)\|q-x_{i}\|_{A},\forall i\Big{]}\] \[\geq 1-\delta.\]
_where \(A=U^{\top}U\in\mathbb{R}^{d\times d}\) is a positive semidefinite matrix which characterizes the Mahalanobis distances. Note that the above \(\forall i\) indicates for all \(i\in[n]\)._
Proof.: The Initialize pre-processing time complexity is dominated by line 16 which takes \(O((d+m)k)\). The Query time complexity is dominated by line 22 which takes \(O((d+m)k)\).
**Proof of correctness for Query:** We will use the following lemma which is a standard result used in proofs of the Johnson-Lindenstrauss lemma.
**Lemma 8** (Distributional JL (DJL) lemma).: _For any integer \(k\geq 1\) and any \(0<\varepsilon,\delta<1/2\), there exists a distribution \(D_{\varepsilon,\delta}\) over \(m\times k\) real matrices for some \(m\leq c\varepsilon^{-2}\log(1/\delta)\) for some constant \(c>0\) such that:_
\[\forall\ \|u\|_{2}=1,\Pr_{\Pi\sim D_{\varepsilon,\delta}}[(1-\varepsilon)\leq\| \Pi u\|_{2}\leq(1+\varepsilon)]\geq 1-\delta.\]
We will show that our choice of \(\Pi\in\mathbb{R}^{m\times k}\) with entries drawn iid from \(\mathcal{N}(0,1/m)\) satisfies the above guarantee.
Let \(u=(u_{1},u_{2},\ldots,u_{k})\). Since \(\Pi_{i,j}\sim\mathcal{N}(0,1/m)\) for all \(i\in[m],j\in[k]\), each coordinate \((\Pi u)_{i}=\sum_{j=1}^{k}\Pi_{i,j}u_{j}\), \(i\in[m]\), is the sum of \(k\) independent normally distributed random variables. Therefore, \(\{(\Pi u)_{i}\}_{i=1}^{m}\) are all normally distributed real random variables, with mean and variance equal to the sums of the individual means and variances. Note that \(\mathbb{E}[(\Pi u)_{i}]=\sum_{j=1}^{k}u_{j}\mathbb{E}[\Pi_{i,j}]=0\) and \(\mathbf{Var}[(\Pi u)_{i}]=\sum_{j=1}^{k}u_{j}^{2}\mathbf{Var}[\Pi_{i,j}]=(1/m)\cdot\|u\|_{2}^{2}=1/m\). Therefore, \((\Pi u)_{i}\sim\mathcal{N}(0,1/m)\). A random vector in \(\mathbb{R}^{m}\) with all its coordinates distributed as \(\mathcal{N}(0,\sigma^{2})\) can be viewed as being drawn from an \(m\)-dimensional spherical Gaussian \(\mathcal{N}(\mathbf{0},\sigma^{2}I_{m\times m})\). Therefore, we have \((\Pi u)\sim\mathcal{N}(\mathbf{0},\frac{1}{m}I_{m\times m})\). We will now use the Gaussian Annulus Theorem [1, Theorem 2.9] to finish the proof.
**Lemma 9** (Gaussian Annulus Theorem).: _For any \(x\sim\mathcal{N}(\mathbf{0},I_{m\times m}),\beta\leq\sqrt{m}\), for some constant \(c>0\), we have that:_
\[\Pr\Big{[}(\sqrt{m}-\beta)\leq\|x\|_{2}\leq(\sqrt{m}+\beta)\Big{]}\geq 1-3e^{-c\beta^{2}}.\]
We have that \((\Pi u)\sim\mathcal{N}(\mathbf{0},\frac{1}{m}I_{m\times m})\). Therefore, we have \((\Pi u)=\frac{1}{\sqrt{m}}x\) for \(x\sim\mathcal{N}(\mathbf{0},I_{m\times m})\). The condition that \((1-\varepsilon)\leq\|\Pi u\|_{2}\leq(1+\varepsilon)\) is equivalent to \((\sqrt{m}-\varepsilon\sqrt{m})\leq\|x\|_{2}\leq(\sqrt{m}+\varepsilon\sqrt{m})\). Therefore, we have that, for some constant \(c>0\),
\[\Pr_{\Pi\sim D_{\varepsilon,\delta}}\Big{[}(1-\varepsilon)\leq\|\Pi u\|_{2}\leq(1+\varepsilon)\Big{]}\geq 1-3e^{-cm\varepsilon^{2}}.\]
To prove the correctness of our Query operation, we apply a union bound over the choices of \(\{u_{i}=\frac{U(q-x_{i})}{\|U(q-x_{i})\|_{2}}\}_{i=1}^{n}\) in the above lemma. The failure probability is upper bounded by \(n\cdot 3e^{-cm\varepsilon^{2}}\). To make this at most \(\delta\), we need \(m\geq\frac{c^{\prime}}{\varepsilon^{2}}\log{(n/\delta)}\) for some constant \(c^{\prime}\). In particular, if we want a with-high-probability guarantee, taking \(m=\Omega(\frac{\log n}{\varepsilon^{2}})\) suffices.
Thus, we complete the proof.
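A quick numerical sanity check of this norm-preservation bound is easy to run; the sizes below are arbitrary and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m, eps, trials = 500, 400, 0.2, 1000
ok = 0
for _ in range(trials):
    u = rng.normal(size=k); u /= np.linalg.norm(u)       # random unit vector
    Pi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, k))  # entries iid N(0, 1/m)
    ok += (1 - eps) <= np.linalg.norm(Pi @ u) <= (1 + eps)
print(f"norm preserved in {ok}/{trials} trials")         # typically all trials
```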
## 6 Online Adaptive Mahalanobis Distance Maintenance
Now, we move to the data structure and the algorithm to solve the online version of the problem and the corresponding proofs. Our main result in this section is the following:
**Theorem 10** (Main result).: _There is a data structure (Algorithm 2 and 3) for the Online Approximate Adaptive Mahalanobis Distance Estimation Problem (Definition 2) with the following procedures:_
* \(\textsc{Initialize}(U\in\mathbb{R}^{k\times d},\{x_{1},x_{2},\ldots,x_{n}\} \subset\mathbb{R}^{d},\varepsilon\in(0,1),\delta\in(0,1))\)_: Given a matrix_ \(U\in\mathbb{R}^{k\times d}\)_, data points_ \(\{x_{1},x_{2},\ldots,x_{n}\}\subset\mathbb{R}^{d}\)_, an accuracy parameter_ \(\varepsilon\) _and a failure probability_ \(\delta\) _as input, the data structure preprocesses in time_ \(O((m+d)knL)\)_._
* \(\textsc{Update}U(u\in\mathbb{R}^{d},a\in[k])\)_: Given an update vector_ \(u\in\mathbb{R}^{d}\) _and row index_ \(a\in[k]\)_, the data structure takes_ \(u\) _and_ \(a\) _as input and updates the_ \(a\)_-th row of_ \(U\) _by adding_ \(u\) _to it, i.e._ \(U_{a,:}\gets U_{a,:}+u\)_, in time_ \(O((m+d)nL)\)_._
* \(\textsc{Update}\textsc{X}(z\in\mathbb{R}^{d},i\in[n])\)_: Given an update vector_ \(z\in\mathbb{R}^{d}\) _and index_ \(i\in[n]\)_, the_ Update__X _takes_ \(z\) _and_ \(i\) _as input and updates the data structure with the new_ \(i\)_-th data point in_ \(O((m+d)kL+\log n)\) _time._
* \(\textsc{QueryPair}(i,j\in[n])\)_. Given two indexes_ \(i,j\in[n]\)_, the_ QueryPair _takes_ \(i,j\) _as input and approximately estimates the Mahalanobis distance from_ \(x_{i}\) _to_ \(x_{j}\) _in time_ \(O(mR)\) _and output a number_ \(p\) _such that:_ \((1-\varepsilon)\|x_{i}-x_{j}\|_{A}\leq p\leq(1+\varepsilon)\cdot\|x_{i}-x_{j}\| _{A}\) _with probability at least_ \(1-\delta\)_._
* \(\textsc{QueryAll}(q\in\mathbb{R}^{d})\)_: Given a query point_ \(q\in\mathbb{R}^{d}\)_, the_ QueryAll _operation takes_ \(q\) _as input and approximately estimates the Mahalanobis distances from_ \(q\) _to all the data points_ \(\{x_{1},x_{2},\ldots,x_{n}\}\subset\mathbb{R}^{d}\) _in time_ \(O((m+d)knR)\) _i.e. it provides a set of estimates_ \(\{\widetilde{d}_{i}\}_{i=1}^{n}\) _such that:_ \[\forall i\in[n],(1-\varepsilon)\|q-x_{i}\|_{A}\leq\widetilde{d}_{i}\leq(1+ \varepsilon)\|q-x_{i}\|_{A}\] _with probability at least_ \(1-\delta\)_, even for a sequence of adaptively chosen queries._
* SampleExact(\(q\in\mathbb{R}^{d}\)). _Given a query point_ \(q\in\mathbb{R}^{d}\) _as input,_ SampleExact _samples an index_ \(i\in[n]\) _with probability_ \(d_{i}/\sum_{j=1}^{n}d_{j}\) _in_ \(O(\log^{2}n+kd\log n)\) _time._
Proof.: We complete the proof for Theorem 10 using the following Lemma 11, Lemma 12, Lemma 13, Lemma 14, Lemma 15 and Lemma 17.
**Lemma 11** (Initialization Time).: _Given a matrix \(U\in\mathbb{R}^{k\times d}\), data points \(\{x_{1},x_{2},\ldots,x_{n}\}\subset\mathbb{R}^{d}\), an accuracy parameter \(\varepsilon\) and a failure probability \(\delta\) as input, the time complexity of Initialize function is \(O((m+d)knL)\)._
Proof.: The initialization part has four steps:
* Line 15 in Algorithm 8 takes \(O(nd)\) time to assign \(x_{i}\in\mathbb{R}^{d}\) for \(n\) times.
* Line 18 in Algorithm 8 takes \(O(mk)\) time to iid sample from \(\mathcal{N}(0,1/m)\).
* Line 21 in Algorithm 8 takes \(O((m+d)knL)\) time to execute \(n\times L\) matrix multiplications.
* The initialization of the segment tree needs \(O(nd)\) time.
Therefore, the total time for initialization is
\[O(nd)+O(mk)+O((m+d)knL)+O(nd)\] \[= O((m+d)knL).\]
Thus, we complete the proof.
**Lemma 12** (UpdateU Time).: _Given an update vector \(u\in\mathbb{R}^{d}\) and row index \(a\in[k]\), the UpdateU takes \(u\) and \(a\) as input and updates the \(a\)-th row of \(U\) by adding \(u\) to it, i.e. \(U_{a,:}\gets U_{a,:}+u\), in \(O((m+d)nL)\) time._
Proof.: The update part has three steps:
* Line 29 in Algorithm 8 takes \(O(d)\) time to assign \(u^{\top}\) to \(B_{a}\).
* Line 30 in Algorithm 8 takes \(O(d)\) time to update \(U\) because \(B\) is sparse and we can ignore \(k-1\) rows containing all zero elements.
* Line 33 in Algorithm 8 takes \(O((m+d)nL)\) time to compute sparse matrix multiplications for \(n\times L\) pairs; each sparse matrix multiplication takes \(O(m+d)\) time because \(B\) is sparse and we can ignore the \(k-1\) rows containing all zero elements.
Therefore, the total time for update is
\[O(d)+O(d)+O((m+d)nL)\] \[= O((m+d)nL).\]
Thus, we complete the proof.
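The rank-one structure behind this bound can be made explicit: the update matrix \(B=e_{a}u^{\top}\) satisfies \(\Pi_{j}Bx_{i}=(u\cdot x_{i})\,\Pi_{j}e_{a}\), so each sketched point needs only an \(O(m+d)\) correction. A minimal NumPy sketch (our own illustration, reusing the array layout from the earlier snippet):

```python
import numpy as np

def update_U_row(Pis, U, sketched, X, u, a):
    """Add row vector u to row a of U and patch all sketched points.

    Pis: (L, m, k) sketches; sketched[l, i] = Pi_l @ U @ x_i; X: (n, d)."""
    U[a, :] += u
    # B = e_a u^T is rank one: Pi_l @ B @ x_i = (u . x_i) * Pi_l[:, a]
    coeffs = X @ u                                      # (n,): u . x_i, O(nd) once
    for l in range(Pis.shape[0]):
        sketched[l] += np.outer(coeffs, Pis[l][:, a])   # O(nm) per sketch
    return U, sketched
```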
**Lemma 13** (UpdateX Time).: _Given a new data point \(z\in\mathbb{R}^{d}\) and index \(i\in[n]\), the UpdateX takes \(z\) and \(i\) as input and updates the data structure with the new \(i\)th data point in \(O((m+d)kL+\log n)\) time._
Proof.: The UpdateX operation has two steps:
* Line 4 in Algorithm 8 takes \(O((m+d)kL)\) time to compute \(\Pi_{j}\cdot(U\cdot z)\) for each of the \(L\) sketching matrices.
* The Tree.Update operation takes \(O(\log n)\) time to complete.
Therefore, the total time for UpdateX is \(O((m+d)kL+\log n)\). Thus, we complete the proof.
**Lemma 14** (QueryPair).: _Given two indexes \(i,j\in[n]\), the QueryPair takes \(i,j\) as input and approximately estimates the Mahalanobis distance from \(x_{i}\) to \(x_{j}\) in time \(O(mR)\), and outputs a number \(p\) such that \((1-\varepsilon)\|x_{i}-x_{j}\|_{A}\leq p\leq(1+\varepsilon)\|x_{i}-x_{j}\|_{A}\) with probability at least \(1-\delta\)._
Proof.: **Proof of Running Time.**
We can view the QueryPair operation as having the following two steps:
* The for-loop in line 4 takes \(O(mR)\) time because computing \(\|\widetilde{x}_{i,r}-\widetilde{x}_{j,r}\|_{2}\) takes \(O(m)\) time and it is executed \(R\) times.
* Line 7 takes \(O(R)\) time to find the median of \(\{p_{r}\}_{r=1}^{R}\).
Thus, the total running time of QueryPair is
\[O(R)+O(mR)=\ O(mR).\]
**Proof of Correctness.**
Instead of choosing \(v=\frac{U(q-x_{i})}{\|q-x_{i}\|_{A}}\) as in the proof of correctness for Query (Lemma 15), we choose \(v=\frac{U(x_{i}-x_{j})}{\|x_{i}-x_{j}\|_{A}}\) for QueryPair\((i,j)\).
Accordingly, we also consider
\[z_{i,j}\coloneqq\underset{r\in[R]}{\text{median}}\{\widetilde{y}_{i,j,r}\}\]
and
\[\widetilde{y}_{i,j,r}:=\|\Pi_{j_{r}}Uv\|.\]
Since \(p=\|x_{i}-x_{j}\|_{A}\,z_{i,j}\), the same Hoeffding argument as in Lemma 15 gives:
\[(1-\varepsilon)\|x_{i}-x_{j}\|_{A}\leq p\leq(1+\varepsilon)\|x_{i}-x_{j}\|_{A}.\]
with probability at least \(1-\frac{\delta}{n}\).
This concludes the proof of correctness for the output of QueryPair.
**Lemma 15** (QueryAll).: _Given a query point \(q\in\mathbb{R}^{d}\), the QueryAll operation takes \(q\) as input and approximately estimates the Mahalanobis distances from \(q\) to all the data points \(\{x_{1},x_{2},\ldots,x_{n}\}\)\(\subset\mathbb{R}^{d}\) in time \(O((m+d)knR)\) i.e. it provides a set of estimates \(\{\widetilde{d}_{i}\}_{i=1}^{n}\) such that:_
\[\forall i\in[n],(1-\varepsilon)\|q-x_{i}\|_{A}\leq\widetilde{d}_{i}\leq(1+ \varepsilon)\|q-x_{i}\|_{A}.\]
_with probability at least \(1-\delta\)._
Proof.: **Proof of running time.**
We can view the QueryAll operation as having the following three steps:
* Line 13 takes \(O(R)\) time to sample \(\{j_{r}\}_{r=1}^{R}\) from \([L]\).
* Line 16 takes \(O((m+d)knR)\) time because computing \(\|\Pi_{j_{r}}Uq-\widetilde{x}_{i,r}\|_{2}\) takes \(O((m+d)k)\) time and it is executed \(nR\) times.
* Line 20 takes \(O(nR)\) time to find the median of \(\{d_{i,r}\}_{r=1}^{R}\) for each of the \(n\) points.
Thus, the total running time of QueryAll is
\[O(R)+O((m+d)knR)+O(nR)\] \[= O((m+d)knR).\]
**Proof of Correctness.**
We will use the \((\varepsilon,\beta=0.1)\)-representative property (see Definition 4) of our sketching matrices \(\{\Pi_{j}\in\mathbb{R}^{m\times k}\}_{j=1}^{L}\). This property implies that for any given vector, most of the matrices approximately preserve its length. In particular, we will consider the set of unit vectors \(\{v_{i}=\frac{U(q-x_{i})}{\|U(q-x_{i})\|_{2}}\}_{i=1}^{n}\) for query \(q\in\mathbb{R}^{d}\) and data points \(\{x_{1},x_{2},\ldots,x_{n}\}\subset\mathbb{R}^{d}\), i.e., for any point \(x_{i}\), most of the projections \(\Pi_{j}\) satisfy
\[(1-\varepsilon)\|q-x_{i}\|_{A}\] \[\leq \|\Pi_{j}(U(q-x_{i}))\|_{2}\] \[\leq (1+\varepsilon)\|q-x_{i}\|_{A}.\]
For query \(q\in\mathbb{R}^{d}\) and \(i\in[n]\), we will show that \(\widetilde{d}_{i}\) is a good estimate of \(\|q-x_{i}\|_{A}\) with high probability. From the definition of \(\widetilde{d}_{i}\), we have \(\widetilde{d}_{i}=0=\|q-x_{i}\|_{A}\) when \(q=x_{i}\). Therefore, we only need to consider the case \(q\neq x_{i}\). Let \(v=\frac{U(q-x_{i})}{\|q-x_{i}\|_{A}}\).
From Lemma 6, we have that \(\{\Pi_{j}\}_{j=1}^{L}\) is \((\varepsilon,0.1)\)-representative. So the set \(\mathcal{J}\) defined as:
\[\mathcal{J}\coloneqq\{j:(1-\varepsilon)\leq\|\Pi_{j}Uv\|\leq(1+\varepsilon)\}\]
has size at least \(0.9L\). We now define the random variables
\[\widetilde{y}_{i,r}:=\|\Pi_{j_{r}}Uv\|\quad\text{and}\quad\widetilde{z}_{i}:= \underset{r\in[R]}{\text{median}}\{\widetilde{y}_{i,r}\}\]
with \(R,\{j_{r}\}_{r=1}^{R}\) defined in Query in Algorithm 3. We know that \(\widetilde{d}_{i}=\|q-x_{i}\|_{A}\widetilde{z}_{i}\) from the definition of \(\widetilde{d}_{i}\). Therefore, it is necessary and sufficient to bound the probability that \(\widetilde{z}_{i}\in[1-\varepsilon,1+\varepsilon]\). To do this, let \(W_{r}=\mathbf{1}\{j_{r}\in\mathcal{J}\}\) and \(W=\sum_{r=1}^{R}W_{r}\). We have \(\mathbb{E}[W]\geq 0.9R\), and since \(W_{r}\in\{0,1\}\), Hoeffding's Inequality (Lemma 5) gives
\[\Pr[W\leq 0.6R]\leq\exp(-\frac{2(0.3R)^{2}}{R})\leq\frac{\delta}{n}\]
from our definition of \(R\). Furthermore, for all \(r\) such that \(j_{r}\in\mathcal{J}\), we have:

\[1-\varepsilon\leq\widetilde{y}_{i,r}\leq 1+\varepsilon.\]
Therefore, in the event that \(W\geq 0.6R\), we have \((1-\varepsilon)\leq\widetilde{z}_{i}\leq(1+\varepsilon)\). Hence, we get:
\[(1-\varepsilon)\|q-x_{i}\|_{A}\leq\widetilde{d}_{i}\leq(1+\varepsilon)\|q-x_{i }\|_{A}.\]
with probability at least \(1-\frac{\delta}{n}\).
Taking a union bound over all \(i\in[n]\), we get:
\[\forall i\in[n],(1-\varepsilon)\|q-x_{i}\|_{A}\leq\widetilde{d}_{i}\leq(1+ \varepsilon)\|q-x_{i}\|_{A}.\]
with probability at least \(1-\delta\).
This concludes the proof of correctness for the output of QueryAll.
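The boosting step is also easy to simulate numerically: with per-index success probability \(0.9\), the failure event \(W\leq 0.6R\) is already rare for moderate \(R\). The numbers below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
R, trials = 40, 100_000
W = rng.binomial(R, 0.9, size=trials)  # W = sum of indicators 1{j_r in J}, each Pr = 0.9
print((W <= 0.6 * R).mean())           # typically 0.0; Hoeffding bounds it by
                                       # exp(-2*(0.3)**2*R) ~ 7.5e-4 for R = 40
```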
## 7 Evaluation
Experiment setup. We evaluate our algorithm on the gene expression cancer RNA-Seq data set from the UCI machine learning repository [1], where \(n=800\), and we use the first \(d=5120\) columns. We run our experiments on a machine with an Intel i7-13700K, 64GB memory, Python 3.6.9 and Numpy 1.22.2. We set the number of sketches \(L=10\) and sample \(R=5\) sketches during QueryAll. We run QueryAll 10 times to obtain the average time consumption and the accuracy of QueryAll. We run QueryPair 10000 times to obtain the average time. The red error bars in the figures represent the standard deviation. We want to answer the following questions:
Figure 1: Benchmark results for (a) QueryAll accuracy under different \(m\). (b) QueryAll time under different \(m\).
Figure 2: Benchmark results for (a) QueryPair time under different \(m\). (b) Data structure memory usage under different \(m\).
* How is the execution time affected by the sketch size \(m\)?
* How is the query accuracy affected by the sketch size \(m\)?
* How is the memory consumption of our data structure affected by the sketch size \(m\)?
Query Accuracy. From part (a) of Figure 1, the QueryAll accuracy increases from \(83.4\%\) to \(90.4\%\) as the sketch size increases from \(20\) to \(1280\). We reach around \(90\%\) distance estimation accuracy once the sketch size reaches \(320\).
Execution time. From part (b) of Figure 1, QueryAll time grows from \(0.15\) seconds to \(10.49\) seconds as the sketch size increases. From part (a) of Figure 2, QueryPair time increases from \(0.015\) milliseconds to \(\sim 0.02\) milliseconds as the sketch size increases. The growth in query time reflects the higher computational overhead of \(\Pi_{j}Uq\) for larger sketch sizes. Compared with the baseline whose sketch size is \(m=1280\), when the sketch size is \(320\), QueryAll is \(8.5\times\) faster and our data structure still achieves \(\sim 90\%\) estimation accuracy in QueryAll.
Memory Consumption. From part (b) of Figure 2, we find that the memory consumption increases from \(164\)MB to \(2228\)MB as the sketch size increases from \(20\) to \(1280\). A larger sketch size means that more space is required to store the precomputed \(\Pi_{j}U\) and \(\Pi_{j}Ux_{i}\) matrices. Compared with the baseline sketch size \(m=1280\), when the sketch size is \(320\), the memory consumption is \(3.06\times\) smaller.
## 8 Conclusion
Mahalanobis metrics have become increasingly used in machine learning algorithms like nearest neighbor search and clustering. Surprisingly, to date, there has not been any work on applying sketching techniques to algorithms that use Mahalanobis metrics. We initiate this direction of study by looking at one important application, Approximate Distance Estimation (ADE) for Mahalanobis metrics. We use sketching techniques to design a data structure which can handle sequences of adaptive and adversarial queries. Furthermore, our data structure can also handle an online version of the ADE problem, where the underlying Mahalanobis distance changes over time. Our results are a first step towards using sketching techniques for Mahalanobis metrics. We leave the study of using our data structure in conjunction with online Mahalanobis metric learning algorithms like [10] to future work. Given that our work is theoretical, we do not foresee any negative societal impacts.
---

arXiv:2310.14238 | Joji Benny, Supriyo Jana, Soumen Sarkar | 2023-10-22 | http://arxiv.org/abs/2310.14238v1

# Invariant circles and phase portraits of cubic vector fields on the sphere
###### Abstract.
In this paper, we characterize and study dynamical properties of cubic vector fields on the sphere \(\mathbb{S}^{2}=\{(x,y,z)\in\mathbb{R}^{3}\mid x^{2}+y^{2}+z^{2}=1\}\). We start by classifying all degree three polynomial vector fields on \(\mathbb{S}^{2}\) and determine which of them form Kolmogorov systems. Then, we show that there exist completely integrable cubic vector fields on \(\mathbb{S}^{2}\) and also study the maximum number of various types of invariant circles for homogeneous cubic vector fields on \(\mathbb{S}^{2}\). We find a tight bound in each case. Further, we also discuss phase portraits of certain cubic Kolmogorov vector fields on \(\mathbb{S}^{2}\).
Key words and phrases: polynomial vector fields, Kolmogorov system, periodic orbit, invariant circle, invariant great circle, first integral, phase portrait. 2020 Mathematics Subject Classification: 34A34, 34C14, 34C40, 34C45, 58J90.
## 1. Introduction
Let \(P,Q,R\) be polynomials in \(\mathbb{R}[x,y,z]\). Then, the following system of differential equations
\[\frac{dx}{dt}=P(x,y,z),\frac{dy}{dt}=Q(x,y,z),\frac{dz}{dt}=R(x,y,z) \tag{1.1}\]
is called a polynomial differential system in \(\mathbb{R}^{3}\). The differential operator
\[\mathcal{X}=P\frac{\partial}{\partial x}+Q\frac{\partial}{\partial y}+R\frac{ \partial}{\partial z} \tag{1.2}\]
is called the vector field associated with the system (1.1). The degree of the polynomial vector field in (1.2) is defined to be \(\max\{\deg(P),\deg(Q),\deg(R)\}\).
The system (1.1) is called a polynomial Kolmogorov system in \(\mathbb{R}^{3}\) when \(P=xP^{\prime}\), \(Q=yQ^{\prime}\) and \(R=zR^{\prime}\) for some \(P^{\prime},Q^{\prime},R^{\prime}\in\mathbb{R}[x,y,z]\). The associated vector field is called a polynomial Kolmogorov vector field.
An _invariant algebraic set_ for (1.2) is a subset \(A\subset\mathbb{R}^{3}\) such that \(A\) is the zero set of some \(f(x,y,z)\in\mathbb{R}[x,y,z]\) and \(\mathcal{X}f=Kf\) for some \(K\in\mathbb{R}[x,y,z]\). Here, the polynomial \(K\) is called the _cofactor_ of \(f\). Moreover, \(\mathcal{X}\) is also called a vector field on \(A\).
The Darboux theory of integrability and its generalizations [9] posits that if a system in \(\mathbb{R}^{2}\) has sufficiently many invariant algebraic sets, then it has a rational first integral. This theory can be generalized in \(\mathbb{R}^{n}\) also, see [3]. Determining invariant algebraic curves for vector fields on surfaces has become a problem of its own; see [8] and the references therein. Perhaps the simplest invariant algebraic set that a vector field on a sphere can have is an invariant circle since a circle on \(\mathbb{S}^{2}\) is always an intersection of a plane with \(\mathbb{S}^{2}\). An interesting problem involving invariant circles is determining when an invariant circle is a limit cycle or at least when it is a periodic orbit. Note that invariant circles for polynomial vector fields on \(\mathbb{S}^{2}\) are discussed in [11] and [16] for quadratic vector fields.
On the other hand, the Kolmogorov system in \(\mathbb{R}^{4}_{+}\) has been studied in [14]. Then, the integrability of a class of Kolmogorov systems in \(\mathbb{R}^{n}\) has been explored in [12]. Importantly, Kolmogorov systems [6] have found applications in Plasma Physics [7], Economics [1] and other areas, including Ecology. In this paper, we characterize cubic polynomial vector fields and study
the dynamics of cubic Kolmogorov vector fields on \(\mathbb{S}^{2}\). We remark that the article [4] discusses phase portraits of certain degree 3 polynomial vector fields on the Poincare sphere.
This paper is organized in the following way. In Section 2, we discuss the transformation of the vector fields under the stereographic projection formulae for projection from the 'South pole'. Then we recall the definition of extactic polynomial, first integral and complete integrability. We show that if \(\gamma\) is an orbit of \(\mathcal{X}\) on \(S^{2}\), then the cone on \(\gamma\) is also invariant for \(\mathcal{X}\). We recall the equation of such a cone when \(\gamma\) is a circle.
In Section 3, we give a necessary and sufficient condition for a cubic vector field in \(\mathbb{R}^{3}\) to be a vector field on \(\mathbb{S}^{2}\) (see Theorem 3.2), and further determine when an invariant great circle is a periodic orbit. We characterize cubic Kolmogorov vector fields on \(\mathbb{S}^{2}\) in Corollary 3.3. We discuss the existence of an invariant great circle for a cubic vector field on \(\mathbb{S}^{2}\) in Theorem 3.6. We prove that there exists a large class of completely integrable cubic vector fields on \(\mathbb{S}^{2}\), see Theorem 3.4. We prove that there exist homogeneous and non-homogeneous cubic vector fields on \(\mathbb{S}^{2}\) having an invariant great circle that is a periodic orbit, see Proposition 3.7.
In Section 4, the focus of our discussion centers around homogeneous cubic vector fields on \(\mathbb{S}^{2}\). In Theorem 4.1, we find tight bounds on the number of invariant great circles that a cubic homogeneous vector field on \(\mathbb{S}^{2}\) can have. We also exhibit a condition in Proposition 4.2 when a cubic homogeneous vector field on \(\mathbb{S}^{2}\) has an invariant circle which is not a great circle.
In Section 5, we study cubic Kolmogorov systems on \(\mathbb{S}^{2}\) in detail. In particular, we draw phase portraits for various possible cubic Kolmogorov systems and show that a broad class of such systems does not admit a periodic orbit in Theorem 5.2. We give a sufficient condition when a singular point in \(\mathbb{S}^{2}\backslash\{z=0\}\) of a cubic Kolmogorov vector field on \(\mathbb{S}^{2}\) is either center or focus, see Theorem 5.3.
## 2. Preliminaries
In this section, we discuss the transformation of the vector fields under the stereographic projection formulae for projection from the 'South pole'. Note that these have been obtained in [16]. Then we recall the concept of extactic polynomial which helps to find invariant hypersurfaces. We recall the definition of first integral and complete integrability. We show that if \(\gamma\) is an orbit of \(\mathcal{X}\) on \(S^{2}\), then the cone on \(\gamma\) is also invariant for \(\mathcal{X}\). We recall the equation of such a cone when \(\gamma\) is a circle. Recall that the map
\[\Phi\colon\mathbb{S}^{2}-\{(0,0,-1)\}\longrightarrow\mathbb{R}^{2}\quad\text{ defined by }(x,y,z)\mapsto(u,v)\]
where,
\[u=\frac{x}{1+z},\quad v=\frac{y}{1+z}\]
is called the stereographic projection from the South pole. For a vector field \(\mathcal{X}=(P,Q,R)\) given by
\[P=\sum_{i+j+k\leqslant n}p_{ijk}x^{i}y^{j}z^{k},\quad Q=\sum_{i+j+k\leqslant n }q_{ijk}x^{i}y^{j}z^{k},\quad R=\sum_{i+j+k\leqslant n}r_{ijk}x^{i}y^{j}z^{k},\]
the induced map \(\Phi_{*}(P,Q,R)=(\mathcal{P},\mathcal{Q})\) is determined by
\[\dot{u}=\mathcal{P}=\tilde{P}(u,v)-u\tilde{R}(u,v),\] \[\dot{v}=\mathcal{Q}=\tilde{Q}(u,v)-v\tilde{R}(u,v),\]
where,
\[\tilde{P}(u,v) =\sum_{i+j+k\leqslant n}p_{ijk}(2u)^{i}(2v)^{j}(1-u^{2}-v^{2})^{k}(u ^{2}+v^{2}+1)^{n-i-j-k},\] \[\tilde{Q}(u,v) =\sum_{i+j+k\leqslant n}q_{ijk}(2u)^{i}(2v)^{j}(1-u^{2}-v^{2})^{k}( u^{2}+v^{2}+1)^{n-i-j-k},\] \[\tilde{R}(u,v) =\sum_{i+j+k\leqslant n}r_{ijk}(2u)^{i}(2v)^{j}(1-u^{2}-v^{2})^{k} (u^{2}+v^{2}+1)^{n-i-j-k}.\]
One of the best tools for finding invariant algebraic hypersurfaces is the following. Let \(W\) be a vector subspace of \(\mathbb{R}[x_{1},...,x_{n}]\) generated by independent polynomials \(v_{1},...,v_{\ell}\), i.e., \(W=\langle v_{1},...,v_{\ell}\rangle\). The extactic polynomial of \(\mathcal{X}\) associated with \(W\) is the polynomial
\[\mathcal{E}_{W}(\mathcal{X})=\begin{vmatrix}v_{1}&\dots&v_{\ell}\\ \mathcal{X}(v_{1})&\dots&\mathcal{X}(v_{\ell})\\ \vdots&\vdots&\vdots\\ \mathcal{X}^{\ell-1}(v_{1})&\dots&\mathcal{X}^{\ell-1}(v_{\ell})\end{vmatrix}\]
where \(\mathcal{X}^{j}(v_{i})=\mathcal{X}^{j-1}(\mathcal{X}(v_{i}))\) for all \(i,j\). From the properties of determinants, it follows that the definition of the extactic polynomial is independent of the chosen basis of \(W\).
**Proposition 2.1**.: _[_10_, Proposition 1]_ _Let \(\mathcal{X}\) be a polynomial vector field in \(\mathbb{R}^{n}\) and \(W\) a finite dimensional vector subspace of \(\mathbb{R}[x_{1},x_{2},\dots,x_{n}]\) with \(\dim(W)>1\). If \(f=0\) is an invariant algebraic hypersurface for the vector field \(\mathcal{X}\) with \(f\in W\), then \(f\) is a factor of \(\mathcal{E}_{W}(\mathcal{X})\)._
The multiplicity of an invariant algebraic hypersurface \(f=0\) with \(f\in W\), is the largest positive integer \(k\) such that \(f^{k}\) divides the extactic polynomial \(\mathcal{E}_{W}(\mathcal{X})\) when \(\mathcal{E}_{W}(\mathcal{X})\neq 0\), otherwise the multiplicity is infinite. For more details on this multiplicity, see [2, 15].
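For small \(W\) this determinant is straightforward to compute symbolically. A short SymPy sketch (our own code) for \(W=\langle x,y\rangle\), tried on the field \(P=Q=(x+y)z^{2}\), \(R=-(x+y)^{2}z\), which is tangent to \(\mathbb{S}^{2}\) since \(Px+Qy+Rz=0\):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def extactic(fields, basis, var=(x, y, z)):
    """Extactic polynomial of X = (P, Q, R) associated with W = span(basis)."""
    X = lambda f: sp.expand(sum(F * sp.diff(f, w) for F, w in zip(fields, var)))
    rows, cur = [list(basis)], list(basis)
    for _ in range(len(basis) - 1):      # rows: basis, X(basis), ..., X^{l-1}(basis)
        cur = [X(f) for f in cur]
        rows.append(cur)
    return sp.factor(sp.Matrix(rows).det())

P, Q, R = (x + y)*z**2, (x + y)*z**2, -(x + y)**2*z
print(extactic((P, Q, R), (x, y)))       # -> z**2*(x - y)*(x + y)
```

Consistent with Proposition 2.1, the linear factors \(x+y\) and \(x-y\) of the output are exactly the candidate invariant planes through the \(z\)-axis.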
**Definition 2.2**.: Let \(U\) be an open subset of \(\mathbb{R}^{3}\). A non-constant analytic map \(H\colon U\to\mathbb{R}\) is called a first integral of the vector field (1.2) on \(U\) if \(H\) is constant on all solution curves of the system (1.1) contained in \(U\); i.e., \(H(x(t),y(t),z(t))=\) constant for all values of \(t\) for which the solution \((x(t),y(t),z(t))\) is defined and contained in \(U\).
Note that \(H\) is a first integral of the vector field (1.2) on \(U\) if and only if \(\mathcal{X}H=0\) on \(U\).
**Definition 2.3**.: The system (1.1) is called completely integrable in an open set \(U\) if it has 2 independent first integrals.
If the system (1.1) is completely integrable with 2 independent first integrals \(H_{1}\) and \(H_{2}\) then the orbits of the system are contained in \(\{H_{1}=c_{1}\}\cap\{H_{2}=c_{2}\}\) for some \(c_{1},c_{2}\in\mathbb{R}\). In [13], the complete integrability of vector fields in \(\mathbb{R}^{n}\) is discussed.
Recall that the intersection of a plane \(ax+by+cz+d=0\) with \(\mathbb{S}^{2}\) is a circle. We can choose \(a,b,c,d\) such that \(a^{2}+b^{2}+c^{2}=1\) and \(|d|<1.\) Any circle on \(\mathbb{S}^{2}\) can be obtained in this way. If the plane passes through the origin, the intersection is called a great circle. Note that the class of polynomial vector fields on \(\mathbb{S}^{2}\) is invariant under the rotation group \(SO(3)\)1. Hence, if a polynomial vector field on \(\mathbb{S}^{2}\) has an invariant circle \(\{ax+by+cz+d=0\}\cap\mathbb{S}^{2}\), then we can assume the circle to be \(\{z+d=0\}\cap\mathbb{S}^{2}\) with \(|d|<1\).
Footnote 1: \(SO(3)\) is the group of all rotations about the origin in \(\mathbb{R}^{3}\).
We now state a result for homogeneous polynomial vector fields on \(\mathbb{S}^{2}\), which is proved for degree two in [11], and an entirely similar proof can be given for homogeneous vector fields of any degree.
**Proposition 2.4**.: Let \(\gamma=\{\phi(t)|t\in\mathbb{R}\}\subset\mathbb{S}^{2}\) be an orbit of \(\mathcal{X}.\) If \(\mathcal{X}\) is a homogeneous polynomial vector field on \(\mathbb{S}^{2}\), then \(\mathcal{X}\) is tangent to the surface \(S(\gamma)=\{sp\ |\ s\in\mathbb{R},p\in\gamma\}\).
We want to look at the circles on \(\mathbb{S}^{2}\) which are invariant with respect to the flow of the vector field \(\mathcal{X}\) on \(\mathbb{S}^{2}\). In this case, Proposition 2.4 implies that the entire cone on a circle is invariant with respect to the flow of \(\mathcal{X}\) whenever the circle itself is. In [11], the authors describe the equation of the cone of such a circle as
\[(a^{2}-d^{2})x^{2}+(b^{2}-d^{2})y^{2}+(c^{2}-d^{2})z^{2}+2abxy+2acxz+2bcyz=0 \tag{2.1}\]
if \(d\neq 0.\) If \(d=0,\) then the cone itself becomes the plane given by \(ax+by+cz=0.\) When \(d=0,\) the intersection of \(\{ax+by+cz=0\}\) with \(\mathbb{S}^{2}\) is a great circle.
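Equation (2.1) is simply the homogenization of the plane-sphere intersection: a point \(sp\) with \(p\) on the circle satisfies \((ax+by+cz)^{2}=d^{2}(x^{2}+y^{2}+z^{2})\). A one-line SymPy check (our own) that this identity expands to (2.1):

```python
import sympy as sp

x, y, z, a, b, c, d = sp.symbols('x y z a b c d')
cone = ((a**2 - d**2)*x**2 + (b**2 - d**2)*y**2 + (c**2 - d**2)*z**2
        + 2*a*b*x*y + 2*a*c*x*z + 2*b*c*y*z)
print(sp.expand((a*x + b*y + c*z)**2 - d**2*(x**2 + y**2 + z**2) - cone))  # 0
```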
## 3. Cubic vector fields on \(\mathbb{S}^{2}\)
In this section, we characterize cubic vector fields on the standard sphere in \(\mathbb{R}^{3}\). Then we classify cubic Kolmogorov vector fields on \(\mathbb{S}^{2}\). We also study those cubic vector fields which have an invariant great circle.
**Lemma 3.1**.: _[_5_, Lemma 4.1]_ _Let \(n\in\mathbb{N}\) and \(Q_{1},Q_{2},Q_{3}\) are polynomials in \(\mathbb{R}[x,y,z]\) such that the polynomial \(Q_{1}x^{n}+Q_{2}y^{n}+Q_{3}z^{n}\) is zero. Then \(Q_{1}=Ay^{n}+Bz^{n}\), \(Q_{2}=-Ax^{n}+Cz^{n}\) and \(Q_{3}=-Bx^{n}-Cy^{n}\) for some polynomials \(A,B,C\)._
**Theorem 3.2**.: _Let \(\mathcal{X}=(P,Q,R)\) be a cubic polynomial vector field in \(\mathbb{R}^{3}\). Then \(\mathcal{X}\) is cubic vector field on \(\mathbb{S}^{2}\) if and only if there exist \(f,g,h,A,B,C\in\mathbb{R}[x,y,z]\) such that_
\[\begin{split} P&=(1-x^{2}-y^{2}-z^{2})f+Ay+Bz,\\ Q&=(1-x^{2}-y^{2}-z^{2})g-Ax+Cz,\quad\text{and}\\ R&=(1-x^{2}-y^{2}-z^{2})h-Bx-Cy\end{split} \tag{3.1}\]
_where \(f,g,h\) are linear polynomials and \(A,B,C\) are quadratic polynomials without any constant term. Moreover, this vector field has cofactor \(-2(fx+gy+hz)\) for \(\mathbb{S}^{2}\)._
Proof.: We write \(P=P^{(3)}+P^{(2)}+P^{(1)}+P^{(0)}\) where \(P^{(j)}\) is the degree \(j\) homogeneous part of \(P\). Similarly, we write \(Q=Q^{(3)}+Q^{(2)}+Q^{(1)}+Q^{(0)}\) and \(R=R^{(3)}+R^{(2)}+R^{(1)}+R^{(0)}\) in this fashion.
Suppose \(\mathcal{X}=(P,Q,R)\) is a vector field on \(\mathbb{S}^{2}\). Then, it must satisfy
\[2(Px+Qy+Rz)=K(x^{2}+y^{2}+z^{2}-1). \tag{3.2}\]
for some \(K\in\mathbb{R}[x,y,z]\) with \(\deg(K)\leq 2\). Let \(K^{(j)}\) denote the degree \(j\) homogeneous part of \(K\). Then \(K^{(0)}=0\) since there is no constant term on the left side of (3.2). Now, comparing the degree \(4\), degree \(3\), degree \(2\) and degree \(1\) terms in (3.2), we get
\[\begin{split} 2(P^{(3)}x+Q^{(3)}y+R^{(3)}z)&=K^{(2)}(x^{2}+ y^{2}+z^{2}),\\ 2(P^{(2)}x+Q^{(2)}y+R^{(2)}z)&=K^{(1)}(x^{2}+y^{2 }+z^{2}),\\ 2(P^{(1)}x+Q^{(1)}y+R^{(1)}z)&=-K^{(2)},\\ 2(P^{(0)}x+Q^{(0)}y+R^{(0)}z)&=-K^{(1)},\end{split}\]
respectively. Hence
\[P^{(3)}x+Q^{(3)}y+R^{(3)}z=-(P^{(1)}x+Q^{(1)}y+R^{(1)}z)(x^{2}+y^{2}+z^{2}), \quad\text{and}\]
\[P^{(2)}x+Q^{(2)}y+R^{(2)}z=-(P^{(0)}x+Q^{(0)}y+R^{(0)}z)(x^{2}+y^{2}+z^{2}).\]
By Lemma 3.1,
\[\begin{split} P^{(3)}&=-P^{(1)}(x^{2}+y^{2}+z^{2})+ A_{1}y+B_{1}z,\\ Q^{(3)}&=-Q^{(1)}(x^{2}+y^{2}+z^{2})-A_{1}x+C_{1}z, \\ R^{(3)}&=-R^{(1)}(x^{2}+y^{2}+z^{2})-B_{1}x-C_{1}y \end{split}\]
where each of \(A_{1},B_{1},C_{1}\) is either \(0\) or a quadratic homogeneous polynomial in \(\mathbb{R}[x,y,z]\), and
\[P^{(2)} =-P^{(0)}(x^{2}+y^{2}+z^{2})+A_{2}y+B_{2}z\] \[Q^{(2)} =-Q^{(0)}(x^{2}+y^{2}+z^{2})-A_{2}x+C_{2}z\] \[R^{(2)} =-R^{(0)}(x^{2}+y^{2}+z^{2})-B_{2}x-C_{2}y.\]
where each of \(A_{2},B_{2},C_{2}\) is either \(0\) or a linear homogeneous polynomial in \(\mathbb{R}[x,y,z]\). Hence
\[P=P^{(3)}+P^{(2)}+P^{(1)}+P^{(0)}=(1-x^{2}-y^{2}-z^{2})(P^{(1)}+P^{(0)})+(A_{1} +A_{2})y+(B_{1}+B_{2})z.\]
Denoting \(f:=P^{(1)}+P^{(0)}\), \(A:=A_{1}+A_{2}\) and \(B:=B_{1}+B_{2}\); \(P\) becomes \((1-x^{2}-y^{2}-z^{2})f+Ay+Bz\). Similarly, we get \(Q\) and \(R\) as in the form in (3.1).
Note that if \(P,Q,R\) are given by (3.1), then they satisfy (3.2) with \(K=-2(fx+gy+hz)\). Hence, \(\mathcal{X}=(P,Q,R)\) is a vector field on \(\mathbb{S}^{2}\). Thus, the converse part is true.
**Corollary 3.3**.: _Suppose \(\mathcal{X}=(P,Q,R)\) is the cubic Kolmogorov vector field in \(\mathbb{R}^{3}\). Then \(\mathcal{X}\) is a vector field on \(\mathbb{S}^{2}\) if and only if there exist \(\alpha,\beta,\gamma,a,b,c\in\mathbb{R}\) such that_
\[P =x(\alpha(1-x^{2}-y^{2}-z^{2})+ay^{2}+bz^{2}),\] \[Q =y(\beta(1-x^{2}-y^{2}-z^{2})-ax^{2}+cz^{2}),\quad\text{and}\] \[R =z(\gamma(1-x^{2}-y^{2}-z^{2})-bx^{2}-cy^{2}). \tag{3.3}\]
Proof.: Suppose \(\mathcal{X}=(P,Q,R)\) is a vector field on \(\mathbb{S}^{2}\). Then by Theorem 3.2, \(P,Q,R\) are given by (3.1). Since \(\mathcal{X}\) is Kolmogorov, \(x,y,z\) divide \(P,Q,R\), respectively. Suppose
\[P^{\prime}x=P=(1-x^{2}-y^{2}-z^{2})f+Ay+Bz.\]
Then, comparing the linear parts on both sides (note that \(A\) and \(B\) have no constant term), we get \(P^{\prime}(0,0,0)x=f\). Setting \(\alpha:=P^{\prime}(0,0,0)\), we get \(f=\alpha x\). Similarly, for \(Q^{\prime}y=Q\) and \(R^{\prime}z=R\), one gets \(g=\beta y\) and \(h=\gamma z\) for some \(\beta,\gamma\in\mathbb{R}\). Therefore, \(x\) divides \(Ay+Bz\), \(y\) divides \(-Ax+Cz\), and \(z\) divides \(-Bx-Cy\). Suppose
\[P_{1}x=Ay+Bz,\quad Q_{1}y=-Ax+Cz\quad\text{and}\quad R_{1}z=-Bx-Cy\]
for some polynomials \(P_{1},Q_{1}\), and \(R_{1}\) with \(\deg(P_{1}),\deg(Q_{1}),\deg(R_{1})\leqslant 2\). Then, we get that \(P_{1}x^{2}+Q_{1}y^{2}+R_{1}z^{2}=x(Ay+Bz)+y(-Ax+Cz)+z(-Bx-Cy)=0\). So, by Lemma 3.1, \(P_{1}=ay^{2}+bz^{2},Q_{1}=-ax^{2}+cz^{2}\) and \(R_{1}=-bx^{2}-cy^{2}\) for some \(a,b,c\in\mathbb{R}\). Thus, we get \(P,Q,R\) as in (3.3).
Suppose \(P,Q,R\) are given by (3.3). Then, the associated differential system of the vector field \(\mathcal{X}=(P,Q,R)\) is a cubic Kolmogorov vector field. Also, by Theorem 3.2, \(\mathcal{X}\) is a vector field on \(\mathbb{S}^{2}\). Thus, the converse part is true.
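The tangency of the field (3.3) to \(\mathbb{S}^{2}\), together with its cofactor from Theorem 3.2, can be verified symbolically; a minimal SymPy check (our own code, with symbols matching (3.3)):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
alpha, beta, gamma, a, b, c = sp.symbols('alpha beta gamma a b c')

w = 1 - x**2 - y**2 - z**2                 # w = 0 on the sphere
P = x*(alpha*w + a*y**2 + b*z**2)
Q = y*(beta*w - a*x**2 + c*z**2)
R = z*(gamma*w - b*x**2 - c*y**2)

Xf = sp.expand(2*(P*x + Q*y + R*z))        # X(x^2 + y^2 + z^2 - 1)
K = sp.cancel(Xf / (-w))                   # cofactor; note x^2+y^2+z^2-1 = -w
print(sp.expand(K))                        # -2*(alpha*x**2 + beta*y**2 + gamma*z**2)
print(sp.simplify(Xf - K*(-w)))            # 0: the sphere is invariant
```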
**Theorem 3.4**.: _There exist completely integrable cubic vector fields on \(\mathbb{S}^{2}\)._
Proof.: By Lemma 3.1, \(f^{\prime}x+g^{\prime}y+h^{\prime}z=0\) for some \(f^{\prime},g^{\prime},h^{\prime}\in\mathbb{R}[x,y,z]\) if and only if \(f^{\prime}=\alpha y+\beta z\), \(g^{\prime}=-\alpha x+\gamma z\) and \(h^{\prime}=-\beta x-\gamma y\) for some \(\alpha,\beta,\gamma\in\mathbb{R}\). Suppose \(\gamma\neq 0\) and \(a\in\mathbb{R}\backslash\{0\}\). Then there exist \(b,c\in\mathbb{R}\) such that \(\alpha=\frac{c\gamma}{a}\) and \(\beta=-\frac{b\gamma}{a}\). Define \(f:=\frac{c\gamma}{a}y-\frac{b\gamma}{a}z\), \(g:=-\frac{c\gamma}{a}x+\gamma z\) and \(h:=\frac{b\gamma}{a}x-\gamma y\). Then
\[af+bg+ch=0\quad\text{and}\quad fx+gy+hz=0.\]
Now, we choose \(A=\frac{c}{a}C\) and \(B=-\frac{b}{a}C\) for any quadratic polynomial \(C\) with \(C(0,0,0)=0\). Consider the vector field \(\mathcal{X}=(P,Q,R)\) given by
\[P =(1-x^{2}-y^{2}-z^{2})f+Ay+Bz,\] \[Q =(1-x^{2}-y^{2}-z^{2})g-Ax+Cz,\quad\text{and}\] \[R =(1-x^{2}-y^{2}-z^{2})h-Bx-Cy.\]
By Theorem 3.2, \(\mathcal{X}\) is a vector field on \(\mathbb{S}^{2}\) with cofactor \(-2(fx+gy+hz)\). Hence, \(x^{2}+y^{2}+z^{2}-1\) is a first integral of \(\mathcal{X}\). Again,
\[\mathcal{X}(ax+by+cz)= aP+bQ+cR\] \[= (1-x^{2}-y^{2}-z^{2})(af+bg+ch)+a(Ay+Bz)+b(-Ax+Cz)+c(-Bx-Cy)\] \[= (-bA-cB)x+(aA-cC)y+(aB+bC)z=0.\]
Hence, \(\mathcal{X}\) has two independent first integrals \(x^{2}+y^{2}+z^{2}-1\) and \(ax+by+cz\), which makes \(\mathcal{X}\) completely integrable in \(\mathbb{R}^{3}\).
**Corollary 3.5**.: _If one of \(a,b,c\) is non-zero, then there exists a cubic vector field on \(\mathbb{S}^{2}\) which has a first integral \(ax+by+cz\)._
The rest of this section characterizes vector fields on \(\mathbb{S}^{2}\) which have invariant great circles. Suppose \(P,Q,R\) are given by (3.1), where
\[A=\sum_{1\leqslant i+j+k\leqslant 2}a_{ijk}x^{i}y^{j}z^{k},\ B=\sum_{1 \leqslant i+j+k\leqslant 2}b_{ijk}x^{i}y^{j}z^{k},\ \text{and}\ C=\sum_{1\leqslant i+j+k\leqslant 2}c_{ijk}x^{ i}y^{j}z^{k}. \tag{3.4}\]
Then, under the stereographic projection, the system given by (3.1) becomes
\[\dot{u}=\tilde{P}(u,v)-u\tilde{R}(u,v)=\mathcal{P}(u,v),\] \[\dot{v}=\tilde{Q}(u,v)-v\tilde{R}(u,v)=\mathcal{Q}(u,v),\ \text{where}\] \[\tilde{P}= ((u^{2}+v^{2}+1)^{2}-(2u)^{2}-(2v)^{2}-(1-u^{2}-v^{2})^{2})\tilde {f}+\tilde{A}(2v)+\tilde{B}(1-u^{2}-v^{2})\] \[= 2\tilde{A}v+\tilde{B}(1-u^{2}-v^{2}).\]
Similarly, \(\tilde{Q}=-2\tilde{A}u+\tilde{C}(1-u^{2}-v^{2})\) and \(\tilde{R}=-2\tilde{B}u-2\tilde{C}v\) where
\[\tilde{A}=\sum_{1\leqslant i+j+k\leqslant 2}a_{ijk}(2u)^{i}(2v)^{j }(1-u^{2}-v^{2})^{k}(u^{2}+v^{2}+1)^{2-i-j-k},\] \[\tilde{B}=\sum_{1\leqslant i+j+k\leqslant 2}b_{ijk}(2u)^{i}(2v)^{j }(1-u^{2}-v^{2})^{k}(u^{2}+v^{2}+1)^{2-i-j-k},\] \[\tilde{C}=\sum_{1\leqslant i+j+k\leqslant 2}c_{ijk}(2u)^{i}(2v)^{j }(1-u^{2}-v^{2})^{k}(u^{2}+v^{2}+1)^{2-i-j-k}.\]
**Theorem 3.6**.: _Let \(\mathcal{X}=(P,Q,R)\) be a cubic vector field on \(\mathbb{S}^{2}\). Assume \(\mathcal{X}\) has an invariant great circle. Without loss of generality, we can assume that it is \(\mathbb{S}^{1}=\{z=0\}\cap\mathbb{S}^{2}\). Then, the following hold._
1. _The vector field_ \(\mathcal{X}\) _can be written as (_3.1_) with_ \[B=b_{020}y^{2}+b_{110}xy+b_{010}y+B^{{}^{\prime}}z,C=-b_{110}x^{2}-b_{020}xy-b _{010}x+C^{{}^{\prime}}z\] _where_ \(B^{{}^{\prime}},C^{{}^{\prime}}\) _are linear polynomials in_ \(\mathbb{R}[x,y,z]\)_._
2. _The great circle_ \(\mathbb{S}^{1}\) _is a periodic orbit of_ \(\mathcal{X}\) _if and only if the hypersurfaces_ \[\left\{\sum\limits_{1\leqslant i+j\leqslant 2}a_{ij0}u^{i}v^{j}=0\right\}\ \text{and}\ \{u^{2}+v^{2}=1\}\ \text{do not intersect in}\ \mathbb{R}^{2}.\]
Proof.: (a) If \(\mathcal{X}=(P,Q,R)\) given by (3.1) with \(A,B,C\) as in (3.4) is a vector field on \(\mathbb{S}^{2}\) then
\[xP(x,y,z)+yQ(x,y,z)+zR(x,y,z)=0\ \text{for all}\ (x,y,z)\in\mathbb{S}^{2}.\]
Hence, after the stereographic projection, we get
\[2u\tilde{P}(u,v)+2v\tilde{Q}(u,v)+(1-u^{2}-v^{2})\tilde{R}(u,v)=0\ \text{for all}\ (u,v)\in\mathbb{R}^{2}.\]
So, by the above,
\[u\dot{u}+v\dot{v}=u\tilde{P}(u,v)+v\tilde{Q}(u,v)-(u^{2}+v^{2})\tilde{R}(u,v)= -\frac{1}{2}(u^{2}+v^{2}+1)\tilde{R}(u,v). \tag{3.5}\]
Under the stereographic projection, \(\{z=0\}\cap\mathbb{S}^{2}\) reduces to the unit circle \(u^{2}+v^{2}=1\) in \(\mathbb{R}^{2}\). Due to (3.5), one can check that \(u^{2}+v^{2}-1=0\) is invariant for the vector field \((\mathcal{P},\mathcal{Q})\) if and only if \(u^{2}+v^{2}-1\) divides \(\tilde{R}=-2(\tilde{B}u+\tilde{C}v)\). Again, \(u^{2}+v^{2}-1\) divides \(\tilde{R}\) if and only if \(\tilde{R}|_{u^{2}+v^{2}=1}=0\). Note that
\[\tilde{R}|_{u^{2}+v^{2}=1}= -2(u\tilde{B}|_{u^{2}+v^{2}=1}+v\tilde{C}|_{u^{2}+v^{2}=1})\] \[= -8\left(\sum_{1\leqslant i+j\leqslant 2}b_{ij0}u^{i+1}v^{j}+c_{ ij0}u^{i}v^{j+1}\right).\]
Hence, \(\tilde{R}|_{u^{2}+v^{2}=1}=0\) implies that
\[b_{100}=b_{200}=c_{010}=c_{020}=0,c_{100}=-b_{010},c_{110}=-b_{020},c_{200}=-b_ {110}.\]
Hence, the first claim follows.
(b) It is well known that the invariant great circle \(\mathbb{S}^{1}\subset\mathbb{S}^{2}\) is a periodic orbit of \(\mathcal{X}\) on \(\mathbb{S}^{2}\) if and only if there is no singular point on it. Observe that, if \(\mathbb{S}^{1}\) is invariant then \(\tilde{R}|_{u^{2}+v^{2}=1}=0\). Hence,
\[\mathcal{P}|_{u^{2}+v^{2}=1}=2v\tilde{A}|_{u^{2}+v^{2}=1}\text{ and }\mathcal{Q}|_{u^{2}+v^{2}=1}=-2u\tilde{A}|_{u^{2}+v^{2}=1}.\]
Note that \(\tilde{A}|_{u^{2}+v^{2}=1}=4\sum\limits_{1\leqslant i+j\leqslant 2}a_{ij0}u^{i}v^ {j}\). So, \(\mathbb{S}^{1}\) is a periodic orbit if and only if the following two equations do not have a common solution.
\[\sum_{1\leqslant i+j\leqslant 2}a_{ij0}u^{i}v^{j}=0,\ u^{2}+v^{2}=1.\]
Llibre [16, Theorem 2] showed that a quadratic homogeneous vector field on \(\mathbb{S}^{2}\) has no great circle which is a periodic orbit. This statement does not hold true for cubic homogeneous vector fields on \(\mathbb{S}^{2}\).
**Proposition 3.7**.:
1. There exists a cubic homogeneous vector field on \(\mathbb{S}^{2}\) having an invariant great circle that is a periodic orbit.
2. There exists a non-homogeneous cubic vector field on \(\mathbb{S}^{2}\) having an invariant great circle that is a periodic orbit.
Proof.: (a) Consider the vector field \(\mathcal{X}\) on \(\mathbb{S}^{2}\) given by (3.1) with
\[A=x^{2}+y^{2},B=y^{2}+xy,C=-x^{2}-xy\quad\text{and}\quad f=g=h=0.\]
By (a) of Theorem 3.6, the intersection \(\{z=0\}\cap\mathbb{S}^{2}\) is an invariant great circle for \(\mathcal{X}\). Also, by (b) of Theorem 3.6, the great circle is periodic since \(u^{2}+v^{2}=0\) and \(u^{2}+v^{2}=1\) do not have a common solution.
(b) Consider the vector field \(\mathcal{X}\) on \(\mathbb{S}^{2}\) given by (3.1) with
\[A=x^{2}+y^{2},\quad B=y^{2}+xy,\quad C=-x^{2}-xy\]
and at least one of \(f,g,h\) is non-zero. By (a) of Theorem 3.6, \(\{z=0\}\cap\mathbb{S}^{2}\) is an invariant great circle for \(\mathcal{X}\). Also, by (b) of Theorem 3.6, the great circle is periodic since \(u^{2}+v^{2}=0\) and \(u^{2}+v^{2}=1\) do not have a common solution.
## 4. Cubic homogeneous vector fields on \(\mathbb{S}^{2}\)
In this section, we study some properties of the vector fields
\[\mathcal{X}:=P\frac{\partial}{\partial x}+Q\frac{\partial}{\partial y}+R\frac{ \partial}{\partial z},\]
defined on \(\mathbb{S}^{2}=\{(x,y,z)\in\mathbb{R}\ |\ f=x^{2}+y^{2}+z^{2}-1=0\}\) such that \(P,Q,R\) are homogeneous cubic polynomials in \(\mathbb{R}[x,y,z]\). Hence
\[\mathcal{X}(f)=K_{f}(f) \tag{4.1}\]
where \(K_{f}\) is the cofactor. We see that
\[\mathcal{X}(f)=2Px+2Qy+2Rz.\]
Since \(\mathcal{X}\) is homogeneous, \(\mathcal{X}(f)\) is also homogeneous, but the right-hand side of (4.1) is not homogeneous unless \(K_{f}=0.\) Therefore, for a homogeneous vector field \(\mathcal{X}=(P,Q,R)\) on \(\mathbb{S}^{2}\), we have
\[Px+Qy+Rz=0.\]
Hence by Lemma 3.1, the associated differential system of a homogeneous vector field \(\mathcal{X}=(P,Q,R)\) on \(\mathbb{S}^{2}\) will be of the following form.
\[P=Ay+Bz,\quad Q=-Ax+Cz,\quad\text{and}\quad R=-Bx-Cy \tag{4.2}\]
for some polynomials \(A,B,C\) in \(\mathbb{R}[x,y,z]\).
Next, we discuss invariant circles on \(\mathbb{S}^{2}\) for homogeneous cubic vector fields. We see from Proposition 2.4 that a circle on \(\mathbb{S}^{2}\) is invariant with respect to the flow of \(\mathcal{X}\) if and only if the cone of the circle is invariant with respect to the flow of \(\mathcal{X}.\) This implies that the circle given by the intersection of \(\{ax+by+cz+d=0\}\) with \(\mathbb{S}^{2}\) is invariant if and only if
\[\begin{split}\mathcal{X}\left((a^{2}-d^{2})x^{2}+(b^{2}-d^{2})y^ {2}+(c^{2}-d^{2})z^{2}+2abxy+2acxz+2bcyz\right)=\\ K\left((a^{2}-d^{2})x^{2}+(b^{2}-d^{2})y^{2}+(c^{2}-d^{2})z^{2}+2 abxy+2acxz+2bcyz\right)\end{split} \tag{4.3}\]
for some polynomial \(K\in\mathbb{R}[x,y,z]\) where \(a^{2}+b^{2}+c^{2}=1\) and \(|d|<1\). We remark that the cones of the invariant circles \(\{ax+by+cz+d=0\}\cap\mathbb{S}^{2}\) and \(\{ax+by+cz-d=0\}\cap\mathbb{S}^{2}\) are the same. Hence, invariant circles which are not great circles always occur in pairs.
In the case of a great circle \(\{ax+by+cz=0\}\cap\mathbb{S}^{2}\), the cone is the plane \(\{ax+by+cz=0\}\) itself, and then the great circle is invariant with respect to the flow of \(\mathcal{X}\) if and only if
\[\mathcal{X}(ax+by+cz)=K^{\prime}(ax+by+cz) \tag{4.4}\]
for some polynomial \(K^{\prime}\in\mathbb{R}[x,y,z]\).
**Theorem 4.1**.: _Let \(\mathcal{X}=(P,Q,R)\) be a cubic homogeneous vector field on \(\mathbb{S}^{2}\). Assume \(\mathcal{X}\) has an invariant great circle. Without loss of generality, we can assume that it is \(\mathbb{S}^{1}=\{z=0\}\cap\mathbb{S}^{2}\). If \(\mathcal{X}\) has finitely many invariant great circles, then the maximum number of invariant great circles of the form \(\{ax+by+cz=0\}\cap\mathbb{S}^{2}\) is_
1. _3 if_ \(a=0\) _or_ \(b=0\)_,_
2. _2 if_ \(c=0\)_,_
3. _2 if_ \(a,b,c\) _are non-zero._
_Moreover, in each of the above cases, the bound can be reached._
Proof.: The vector field \(\mathcal{X}\) is given by (4.2). Since \(z=0\) is invariant, \(z\) divides \(R=-Bx-Cy\). Hence, by Lemma 3.1, \(B=Ly+B^{\prime}z\) and \(C=-Lx+C^{\prime}z\) for some \(L,B^{\prime},C^{\prime}\in\mathbb{R}[x,y,z]\) of degree less than or equal to \(1\). Then
\[P =Ay+(Ly+B^{\prime}z)z=(A+Lz)y+B^{\prime}z^{2}=A^{\prime}y+B^{ \prime}z^{2},\] \[Q =-Ax+(-Lx+C^{\prime}z)z=-(A+Lz)x+C^{\prime}z^{2}=-A^{\prime}x+C^{ \prime}z^{2},\] \[R =-(Ly+B^{\prime}z)x-(-Lx+C^{\prime}z)y=-z(B^{\prime}x+C^{\prime}y).\]
where \(A^{\prime}:=A+Lz\) is a homogeneous polynomial of degree less than or equal to \(2\). We write
\[A^{\prime} =a_{1}x^{2}+a_{2}y^{2}+a_{3}z^{2}+a_{4}xy+a_{5}xz+a_{6}yz,\] \[B^{\prime} =b_{1}x+b_{2}y+b_{3}z,\text{ and}\] \[C^{\prime} =c_{1}x+c_{2}y+c_{3}z.\]
(1) The extactic polynomial of \(\mathcal{X}\) associated with \(\langle y,z\rangle\) is
\[\mathcal{E}_{\langle y,z\rangle}(\mathcal{X})=\left|\begin{matrix}y&z\\ Q&R\end{matrix}\right|=z(-C^{\prime}(y^{2}+z^{2})+x(A^{\prime}-B^{\prime}y)).\]
Suppose \(\mathcal{E}_{\langle y,z\rangle}(\mathcal{X})\) has four factors of the form \(by+cz\), say \(b_{i}y+c_{i}z\) for \(i=1,2,3,4\). Then
\[z(-C^{\prime}(y^{2}+z^{2})+x(A^{\prime}-B^{\prime}y))=\prod_{i=1}^{4}(b_{i}y+c _{i}z).\]
For \(x=0\), the above equation becomes \(-zC^{\prime}(0,y,z)(y^{2}+z^{2})=\prod\limits_{i=1}^{4}(b_{i}y+c_{i}z)\). But this is not possible since \(y^{2}+z^{2}\) has no linear factor over \(\mathbb{R}\). So, \(\mathcal{E}_{\langle y,z\rangle}(\mathcal{X})\) has at most \(3\) factors of the form \(by+cz\). Hence, for \(a=0\), by Proposition 2.1, the maximum number of invariant great circles of the form \(\{by+cz=0\}\cap\mathbb{S}^{2}\) is \(3\).
To show that the bound can be reached, consider \(A^{\prime}=\prod\limits_{i=1}^{2}(b_{i}y+c_{i}z)+B^{\prime}y\), \(C^{\prime}=0\) for any linear homogeneous polynomial \(B^{\prime}\). Then the vector field
\[\mathcal{X}=(P,Q,R)=\left(y\prod\limits_{i=1}^{2}(b_{i}y+c_{i}z)+B^{\prime}(y ^{2}+z^{2}),-x\prod\limits_{i=1}^{2}(b_{i}y+c_{i}z)-B^{\prime}xy,-B^{\prime} xz\right)\]
has invariant hyperplanes \(z=0\), \(b_{1}y+c_{1}z=0\), and \(b_{2}y+c_{2}z=0\).
The proof for \(b=0\) is similar to the proof of \(a=0\). To show that the bound can be reached, consider \(A^{\prime}=\prod\limits_{i=1}^{2}(a_{i}x+c_{i}z)-C^{\prime}y\), \(B^{\prime}=0\) for any linear homogeneous polynomial \(C^{\prime}\). Then the vector field
\[\mathcal{X}=(P,Q,R)=\left(y\prod\limits_{i=1}^{2}(a_{i}x+c_{i}z)-C^{\prime}y^ {2},-x\prod\limits_{i=1}^{2}(a_{i}x+c_{i}z)+C^{\prime}xy+C^{\prime}z^{2},-C^{ \prime}yz\right)\]
has invariant hyperplanes \(z=0\), \(a_{1}x+c_{1}z=0\), and \(a_{2}x+c_{2}z=0\).
(2) The extactic polynomial of \(\mathcal{X}\) associated with \(\langle x,y\rangle\) is
\[\mathcal{E}_{\langle x,y\rangle}(\mathcal{X})=\left|\begin{matrix}x&y\\ P&Q\end{matrix}\right|=-A^{\prime}(x^{2}+y^{2})+z^{2}(C^{\prime}x-B^{\prime}y).\]
Suppose \(\mathcal{E}_{\langle x,y\rangle}(\mathcal{X})\) has \(\ell\) factors of the form \(ax+by\), say \(a_{i}x+b_{i}y\) for \(i=1,2,\ldots,\ell\) with \(\ell\leqslant 4\). Then
\[-A^{\prime}(x^{2}+y^{2})+z^{2}(C^{\prime}x-B^{\prime}y)=p\prod\limits_{i=1}^{ \ell}(a_{i}x+b_{i}y) \tag{4.5}\]
for some polynomial \(p\in\mathbb{R}[x,y,z]\). In particular, for \(z=0\), the last equation gives
\[-A^{\prime}(x,y,0)(x^{2}+y^{2})=p(x,y,0)\prod\limits_{i=1}^{\ell}(a_{i}x+b_{i} y).\]
If \(A^{\prime}(x,y,0)\neq 0\) then \(\ell\leqslant 2\). Otherwise, \(A^{\prime}=zA^{\prime\prime}\) and \(p=zp^{\prime}\) for some polynomials \(A^{\prime\prime},p^{\prime}\in\mathbb{R}[x,y,z]\). Then (4.5) becomes
\[-A^{\prime\prime}(x^{2}+y^{2})+z(C^{\prime}x-B^{\prime}y)=p^{\prime}\prod \limits_{i=1}^{\ell}(a_{i}x+b_{i}y).\]
Again, for \(z=0\), this equation gives \(-A^{\prime\prime}(x,y,0)(x^{2}+y^{2})=p^{\prime}(x,y,0)\prod\limits_{i=1}^{\ell}(a_{i}x+b_{i}y)\). If \(A^{\prime\prime}(x,y,0)\neq 0\) then \(\ell\leqslant 1\). Otherwise, \(A^{\prime\prime}=\alpha z\) for some \(\alpha\in\mathbb{R}\). In that case, we obtain \(\mathcal{E}_{\langle x,y\rangle}(\mathcal{X})=z^{2}(-\alpha(x^{2}+y^{2})+(C^{\prime}x-B^{\prime}y))\). So, in this case also, \(\mathcal{E}_{\langle x,y\rangle}(\mathcal{X})\) has at most two factors of the form \(ax+by\). Combining all the cases, \(\mathcal{X}\) has at most two invariant great circles of the form \(\{ax+by=0\}\cap\mathbb{S}^{2}\).
To prove that the bound can be reached, consider \(A^{\prime}=0,B^{\prime}=C^{\prime}=ax+by\). Then the vector field
\[\mathcal{X}=(P,Q,R)=((ax+by)z^{2},(ax+by)z^{2},-(ax+by)(x+y)z)\]
has invariant hyperplanes \(ax+by=0\) and \(x-y=0\).
(3) In (4.4), if \(\mathcal{X}\) is a homogeneous vector field of degree three, then \(K^{\prime}\) is homogeneous of degree two. Let
\[K^{\prime}=k_{1}x^{2}+k_{2}y^{2}+k_{3}z^{2}+k_{4}xy+k_{5}xz+k_{6}yz.\]
Then, from (4.4), we get the following by equating the coefficients on its both sides of each monomial.
\[x^{3}: ba_{1}=-ak_{1}\] \[y^{3}: aa_{2}=bk_{2}\] \[z^{3}: ab_{3}+bc_{3}=ck_{3}\] \[x^{2}y: aa_{1}-ba_{4}=bk_{1}+ak_{4}\] \[x^{2}z: ba_{5}+cb_{1}=-ck_{1}-ak_{5}\] \[y^{2}z: aa_{6}-cc_{2}=ck_{2}+bk_{6}\] \[xy^{2}: aa_{4}-ba_{2}=ak_{2}+bk_{4}\] \[xz^{2}: -ba_{3}+ab_{1}-cb_{3}+bc_{1}=ak_{3}+ck_{5}\] \[yz^{2}: aa_{3}+ab_{2}+bc_{2}-cc_{3}=bk_{3}+ck_{6}\] \[xyz: aa_{5}-ba_{6}-cb_{2}-cc_{1}=ck_{4}+bk_{5}+ak_{6}.\]
Then, from the first six equations, one can compute the \(k_{i}\)'s. For example, \(k_{1}=-\frac{ba_{1}}{a}\), \(k_{2}=\frac{aa_{2}}{b}\), \(k_{3}=\frac{ab_{3}+bc_{3}}{c}\), and similarly for the rest. Substituting these \(k_{i}\)'s in the remaining four equations, we get the following four equations, respectively.
\[E_{1}: b^{2}a_{1}+a^{2}a_{2}-aba_{4}=0,\] \[E_{2}: bc^{3}a_{1}+a^{2}bca_{3}-abc^{2}a_{5}-ac(a^{2}+c^{2})b_{1}+a^{2}(a ^{2}+c^{2})b_{3}-a^{2}bcc_{1}+a^{3}bc_{3}=0,\] \[E_{3}: ac^{3}a_{2}+ab^{2}ca_{3}-abc^{2}a_{6}+ab^{2}cb_{2}-ab^{3}b_{3}+bc (b^{2}+c^{2})c_{2}-b^{2}(b^{2}+c^{2})c_{3}=0,\] \[E_{4}: b^{2}c(a^{2}+2b^{2})a_{1}-a^{4}ca_{2}-ab^{3}ca_{4}-ab^{2}(a^{2}+b ^{2})a_{5}+a^{2}b(a^{2}+b^{2})a_{6}-ab^{3}cb_{1}\] \[+a^{2}b^{2}cb_{2}+a^{2}b^{2}cc_{1}-a^{3}bcc_{2}=0. \tag{4.6}\]
If the vector field \(\mathcal{X}\) has \(\{ax+by+cz=0\}\cap\mathbb{S}^{2}\) as an invariant great circle, then (4.6) must be satisfied. In order for there to be additional great circles \(\{px+qy+rz=0\}\cap\mathbb{S}^{2}\), and \(\{sx+ty+uz=0\}\cap\mathbb{S}^{2}\), the vector field \(\mathcal{X}\) must satisfy (4.6) with \((p,q,r)\) and \((s,t,u)\) in place of \((a,b,c)\) in addition to (4.6) itself. Thus we have a system of twelve equations, and solving them (with the help of SAGEMATH2), we get
Footnote 2: Sagemath official website: [https://www.sagemath.org/](https://www.sagemath.org/)
\[a_{1}=a_{2}=a_{4}=a_{5}=a_{6}=b_{1}=b_{3}=c_{2}=c_{3}=0,\quad\text{and}\quad a_{3}=-b_{2}=c_{1}.\]
For these coefficients, the vector field becomes \(\mathcal{X}=(0,0,0)\). Hence, a nonzero cubic homogeneous vector field on \(\mathbb{S}^{2}\) can have at most two invariant great circles.
To prove that the bound can be reached, consider the following example. Suppose we have the polynomials \(A^{\prime}=x^{2}+y^{2}+2xy+xz+yz,B^{\prime}=-y+z\), \(C^{\prime}=x-z\). Then
\[\mathcal{X}= (P,Q,R)\] \[= (A^{\prime}y+B^{\prime}z^{2},-A^{\prime}x+C^{\prime}z^{2},-B^{ \prime}xz-C^{\prime}xz)\] \[= (x^{2}y+y^{3}+2xy^{2}+xyz+y^{2}z-yz^{2}+z^{3},-x^{3}-xy^{2}-2x^{2 }y-x^{2}z-xyz+xz^{2}-z^{3},-xz^{2}+yz^{2})\]
is a vector field on \(\mathbb{S}^{2}\). One can check that \(x+y+z=0\) and \(x+y-z=0\) are invariant hyperplanes for \(\mathcal{X}\) with corresponding cofactors \(-x^{2}+y^{2}\) and \(-x^{2}+y^{2}-2xz+2yz\) respectively. Hence, \(\{x+y+z=0\}\cap\mathbb{S}^{2}\) and \(\{x+y-z=0\}\cap\mathbb{S}^{2}\) are invariant great circles of \(\mathcal{X}\).
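This invariance claim is easy to verify with a computer algebra system; a minimal sympy sketch (illustrative only):

```python
# Verify that {x+y+z=0} and {x+y-z=0} are invariant for the cubic field
# above, i.e. X(g) = K*g for the stated cofactors K.
import sympy as sp

x, y, z = sp.symbols('x y z')
Ap = x**2 + y**2 + 2*x*y + x*z + y*z        # A'
Bp, Cp = -y + z, x - z                      # B', C'
P = Ap*y + Bp*z**2
Q = -Ap*x + Cp*z**2
R = -Bp*x*z - Cp*y*z

for g, K in [(x + y + z, -x**2 + y**2),
             (x + y - z, -x**2 + y**2 - 2*x*z + 2*y*z)]:
    Xg = P*g.diff(x) + Q*g.diff(y) + R*g.diff(z)
    assert sp.expand(Xg - K*g) == 0         # X(g) = K g on the hyperplane
print("both invariant hyperplanes verified")
```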
Now, we shall look at invariant circles, which are not great circles.
**Proposition 4.2**.: Assume that a cubic homogeneous vector field \(\mathcal{X}=(P,Q,R)\) on \(\mathbb{S}^{2}\) has an invariant circle which is not a great circle. Without loss of generality, assume the invariant circle to be \(\{z+d=0\}\cap\mathbb{S}^{2}\) with \(0<d<1\). Then \(R=(px+qy)(-d^{2}(x^{2}+y^{2}+z^{2})+z^{2})\) for some \(p,q\in\mathbb{R}\).
Proof.: Recall that \(\mathcal{X}=(P,Q,R)\) is a cubic homogeneous vector field on \(\mathbb{S}^{2}\) if and only if \(\mathcal{X}\) is given by (4.2), i.e., \(P=Ay+Bz,Q=-Ax+Cz,R=-Bx-Cy\) for some polynomials \(A,B,C\in\mathbb{R}[x,y,z]\).
The equation of the cone corresponding to the circle \(\mathcal{C}:=\{z+d=0\}\cap\mathbb{S}^{2}\) is
\[g_{\mathcal{C}}:=-d^{2}(x^{2}+y^{2}+z^{2})+z^{2}.\]
Therefore, the condition for the circle \(\mathcal{C}\) to be invariant is
\[\mathcal{X}g_{\mathcal{C}}=K_{\mathcal{C}}g_{\mathcal{C}}, \tag{4.7}\]
where the polynomial \(K_{\mathcal{C}}\) is the cofactor. Note that \(K_{\mathcal{C}}\) is a homogeneous polynomial. Now,
\[\mathcal{X}g_{\mathcal{C}}=K_{\mathcal{C}}g_{\mathcal{C}}\] \[\implies -2d^{2}(Px+Qy+Rz)+2Rz=K_{\mathcal{C}}g_{\mathcal{C}}\] \[\implies 2Rz=K_{\mathcal{C}}g_{\mathcal{C}}\quad\text{since }Px+Qy+Rz=0.\]
Since \(d\neq 0\), \(z\) does not divide \(g_{\mathcal{C}}\). So, the last implication tells us that \(z\) divides \(K_{\mathcal{C}}\); write \(K_{\mathcal{C}}=2z(px+qy+rz)\) for some \(p,q,r\in\mathbb{R}\). Hence, \(R=(px+qy+rz)(-d^{2}(x^{2}+y^{2}+z^{2})+z^{2})\). Since \(R=-Bx-Cy\), comparing the coefficient of \(z^{3}\) gives \(r(1-d^{2})=0\). So, either \(r=0\) or \(d=\pm 1\). Since \(0<d<1\), \(r\) must be zero. Then \(R=(px+qy)(-d^{2}(x^{2}+y^{2}+z^{2})+z^{2})\).
**Remark 4.3**.: Proposition 4.2 can also be proved for degree \(n(>3)\) homogeneous vector fields on \(\mathbb{S}^{2}\). In that case, \(R=(Mx+Ny)(-d^{2}(x^{2}+y^{2}+z^{2})+z^{2})\) for some \(M,N\in\mathbb{R}[x,y,z]\) such that \(\deg M=(n-3)\) or \(\deg N=(n-3)\).
## 5. Cubic Kolmogorov vector fields on \(\mathbb{S}^{2}\)
We recall that the form of a cubic Kolmogorov vector field on \(\mathbb{S}^{2}\) has been examined in Corollary 3.3. This section aims to study the dynamical properties of these cubic Kolmogorov vector fields on \(\mathbb{S}^{2}.\) For convenience, we shall rewrite the general form of a cubic Kolmogorov vector field, \(\mathcal{X}=(P,Q,R)\) defined on \(\mathbb{S}^{2}\),
\[\begin{split} P&=x(\alpha(1-x^{2}-y^{2}-z^{2})+ Ay^{2}+Bz^{2}),\\ Q&=y(\beta(1-x^{2}-y^{2}-z^{2})-Ax^{2}+Cz^{2}), \text{ and }\\ R&=z(\gamma(1-x^{2}-y^{2}-z^{2})-Bx^{2}-Cy^{2}), \end{split} \tag{5.1}\]
where \(\alpha,\beta,\gamma\) and \(A,B,C\) are constants. In what follows, we shall assume that \(A,B,C\neq 0\) unless specified otherwise. Notice from (5.1) that the hyperplanes \(\{x=0\}\), \(\{y=0\}\), and \(\{z=0\}\) are invariant with respect to the flow of \(\mathcal{X}.\) Hence, on \(\mathbb{S}^{2}\), the great circles determined by the intersection of these hyperplanes with \(\mathbb{S}^{2},\) that is, \(\{x=0\}\cap\mathbb{S}^{2},\) \(\{y=0\}\cap\mathbb{S}^{2},\) and \(\{z=0\}\cap\mathbb{S}^{2},\) are invariant with respect to the flow of \(\mathcal{X}.\) It follows that the points where these great circles intersect are singular points. Hence \((\pm 1,0,0),\) \((0,\pm 1,0),\) and \((0,0,\pm 1)\) are singular points of \(\mathcal{X}.\) We want to determine if \(\mathcal{X}\) has any other singular points on \(\mathbb{S}^{2}.\) Note that from (5.1), additional singular points can be obtained by solving the following equations.
\[Ay^{2}+Bz^{2} =0,\] \[-Ax^{2}+Cz^{2} =0,\] \[-Bx^{2}-Cy^{2} =0.\]
These equations, together with the fact that we are on the sphere, \(x^{2}+y^{2}+z^{2}=1,\) give
\[x=\pm\sqrt{\frac{-C}{B-A-C}},\quad y=\pm\sqrt{\frac{B}{B-A-C}},\quad\text{and }\quad z=\pm\sqrt{\frac{-A}{B-A-C}}. \tag{5.2}\]
Observe that \((x,y,z)\) is a real point when \(A,C>0,\) \(B<0\) or when \(A,C<0,\) \(B>0.\) If neither of these conditions is true, \(\mathcal{X}\) has only \((\pm 1,0,0),\) \((0,\pm 1,0),\) and \((0,0,\pm 1)\) as singularities.
Our objective is to determine the phase portraits of cubic Kolmogorov vector fields on \(\mathbb{S}^{2}.\) From (5.1), we notice that the antipodal transformation \((x,y,z)\mapsto(-x,-y,-z)\) on \(\mathbb{R}^{3}\) causes \((P,Q,R)\mapsto(-P,-Q,-R)\). This implies that trajectories in the southern hemisphere mirror trajectories in the northern hemisphere with the direction of flow reversed. Thus, in order to determine the phase portrait of these vector fields on \(\mathbb{S}^{2},\) it is sufficient to determine the phase portrait on one of the hemispheres (including its boundary). In this section, we shall use the stereographic projection \(\Phi\) of Section 2 from the South pole to determine the phase portrait on the northern hemisphere. The northern hemisphere along with its boundary is mapped to the closed unit disk \(D^{2}:=\{(u,v)\in\mathbb{R}^{2}\ |\ u^{2}+v^{2}\leqslant 1\}\) by \(\Phi.\) We now compute the planar vector field on \(\mathbb{R}^{2}\) induced by the cubic Kolmogorov vector field on \(\mathbb{S}^{2}\) given by (5.1). For that, we have the following:
\[\tilde{P} =2u(4Av^{2}+B(1-u^{2}-v^{2})^{2}),\] \[\tilde{Q} =2v(-4Au^{2}+C(1-u^{2}-v^{2})^{2}),\text{ and}\] \[\tilde{R} =-4(1-u^{2}-v^{2})(Bu^{2}+Cv^{2}).\]
This gives
\[\mathcal{P} =\dot{u}=2u(4Av^{2}+B(1-u^{2}-v^{2})^{2})-u\tilde{R},\text{ and}\] \[\mathcal{Q} =\dot{v}=2v(-4Au^{2}+C(1-u^{2}-v^{2})^{2})-v\tilde{R}. \tag{5.3}\]
We shall now study the behaviour of this system near the singular points \((1,0,0),\)\((0,1,0),\) and \((0,0,1).\) Note that the behaviour of the system near the singular points \((-1,0,0),\)\((0,-1,0),\) and \((0,0,-1)\) is identical (with reverse arrows) to the behaviour around their antipodal counterparts by the discussion above.
In the case(s) where these are the only singular points of \(\mathcal{X},\) we shall be able to draw phase portraits. We shall compute the Jacobian, \(J\) at singular points of the vector field \(\Phi_{*}(\mathcal{X})=(\mathcal{P},\mathcal{Q})\) in \(\mathbb{R}^{2}.\) We have
\[J=\begin{pmatrix}\mathcal{P}_{u}&\mathcal{P}_{v}\\ \mathcal{Q}_{u}&\mathcal{Q}_{v}\end{pmatrix}. \tag{5.4}\]
From (5.3), we have
\[\mathcal{P}_{u} =2(4Av^{2}+B(1-u^{2}-v^{2})^{2})-8Bu^{2}(1-u^{2}-v^{2})-\tilde{R}-u \tilde{R}_{u},\] \[\mathcal{P}_{v} =2u(8Av-4Bv(1-u^{2}-v^{2}))-u\tilde{R}_{v},\] \[\mathcal{Q}_{u} =2v(-8Au-4Cu(1-u^{2}-v^{2}))-v\tilde{R}_{u},\] \[\mathcal{Q}_{v} =2(-4Au^{2}+C(1-u^{2}-v^{2})^{2})-8Cv^{2}(1-u^{2}-v^{2})-\tilde{R} -v\tilde{R}_{v}.\]
Also,
\[\tilde{R}_{u} =8u(Bu^{2}+Cv^{2})-8Bu(1-u^{2}-v^{2}),\] \[\tilde{R}_{v} =8v(Bu^{2}+Cv^{2})-8Cv(1-u^{2}-v^{2}).\]
Now we compute the Jacobian for the singular points \((1,0,0)\), \((0,1,0)\), and \((0,0,1)\).
1. For \((1,0,0)\): The stereographic projection \(\Phi\) maps this point to \((1,0)\) on the plane. We compute \[\mathcal{P}_{u}(1,0)=-8B,\quad\mathcal{P}_{v}(1,0)=0,\quad\mathcal{Q}_{u}(1,0)= 0,\quad\mathcal{Q}_{v}(1,0)=-8A.\] Therefore, \[J_{(1,0)}=8\begin{pmatrix}-B&0\\ 0&-A\end{pmatrix}.\]
2. For \((0,1,0)\): The stereographic projection \(\Phi\) maps this point to \((0,1)\) on the plane. We compute \[\mathcal{P}_{u}(0,1)=8A,\quad\mathcal{P}_{v}(0,1)=0,\quad\mathcal{Q}_{u}(0,1)= 0,\quad\mathcal{Q}_{v}(0,1)=-8C.\] Therefore, \[J_{(0,1)}=8\begin{pmatrix}A&0\\ 0&-C\end{pmatrix}.\]
3. For \((0,0,1)\): The stereographic projection \(\Phi\) maps this point to \((0,0)\) on the plane. We compute \[\mathcal{P}_{u}(0,0)=2B,\quad\mathcal{P}_{v}(0,0)=0,\quad\mathcal{Q}_{u}(0,0)= 0,\quad\mathcal{Q}_{v}(0,0)=2C.\] Therefore, \[J_{(0,0)}=2\begin{pmatrix}B&0\\ 0&C\end{pmatrix}.\]
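The three Jacobians above can be recomputed symbolically from (5.3); a minimal sympy sketch (illustrative only):

```python
# Recompute the Jacobian of (P, Q) from (5.3) at the projected singular
# points (1,0), (0,1), (0,0); expect diag(-8B,-8A), diag(8A,-8C), diag(2B,2C).
import sympy as sp

u, v, A, B, C = sp.symbols('u v A B C')
w = 1 - u**2 - v**2
Pt = 2*u*(4*A*v**2 + B*w**2)
Qt = 2*v*(-4*A*u**2 + C*w**2)
Rt = -4*w*(B*u**2 + C*v**2)
Pc, Qc = Pt - u*Rt, Qt - v*Rt               # planar system (5.3)

J = sp.Matrix([[Pc.diff(u), Pc.diff(v)],
               [Qc.diff(u), Qc.diff(v)]])
for pt in [(1, 0), (0, 1), (0, 0)]:
    print(pt, sp.simplify(J.subs({u: pt[0], v: pt[1]})))
```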
Assume that \(A,B,C\neq 0.\) Let
\[(a)\ A,C>0,B<0,\quad\text{and}\quad(b)\ A,C<0,B>0. \tag{5.5}\]
Thus, if neither (a) nor (b) is true, then \((\pm 1,0,0)\), \((0,\pm 1,0)\), and \((0,0,\pm 1)\) are the only singular points of \(\mathcal{X}\) on \(\mathbb{S}^{2}\), and we draw the phase portraits for each of these cases in Figure 1.
**Remark 5.1**.: When one of \(A,B,C\) is zero, then the singular points are no longer isolated. For example, we plot the phase portrait when \(A=0\) and \(B,C>0\) in Figure 2. In this case, all points on the boundary of the disk (i.e., the unit circle) are singular.
From the above discussion, we see that when neither (a) nor (b) is true in (5.5), any Kolmogorov vector field of the form in (5.1) has a phase portrait that is topologically equivalent to either (I) or (III) in Figure 1. So we have the following theorem.
**Theorem 5.2**.: _A cubic Kolmogorov vector field on \(\mathbb{S}^{2}\) of the form given in (5.1) admits no periodic orbit if \(A,B,C\) in (5.1) do not satisfy either of the following conditions._
\[(1)\ A,C>0,B<0,\quad\text{and}\quad(2)\ A,C<0,B>0.\]
In the remainder, we discuss what happens when condition (a) in (5.5) is true. Note that the same analysis will also apply for (b), albeit with a change in the sense of the orbits in the phase portrait. That is when \(A,C>0\) and \(B<0.\) We know that in this case, there exist extra singularities given in (5.2). Applying the stereographic projection to these singular points, we have,
\[u=\frac{\pm\sqrt{-C}}{\sqrt{B-A-C}\pm\sqrt{-A}}\quad,\quad v=\frac{\pm\sqrt{B} }{\sqrt{B-A-C}\pm\sqrt{-A}}.\]
Let us fix a singular point
\[u_{0}=\frac{\sqrt{-C}}{\sqrt{B-A-C}+\sqrt{-A}}\quad,\quad v_{0}=\frac{\sqrt{B} }{\sqrt{B-A-C}+\sqrt{-A}}.\]
Figure 1. Phase portraits when neither (a) nor (b) of (5.5) is true.
Figure 2. Phase portrait when \(A=0\) and \(B,C>0\); all points on the boundary of the disk are singular.
We note that the analysis below can be carried out entirely similarly for the other singular points. We need to evaluate the Jacobian in (5.4) at \((u_{0},v_{0}).\) After the calculation, we get the following.
\[\mathcal{P}_{u}(u_{0},v_{0}) =\frac{8AB}{D^{2}}+2B\left(\frac{D^{2}+C-B}{D^{2}}\right)^{2},\] \[\mathcal{P}_{v}(u_{0},v_{0}) =\frac{16A\sqrt{-BC}}{D^{2}}+8\frac{(C-B)\sqrt{-BC}}{D^{2}}\left( \frac{D^{2}+C-B}{D^{2}}\right),\] \[\mathcal{Q}_{u}(u_{0},v_{0}) =\frac{-16A\sqrt{-BC}}{D^{2}}-8\frac{(C-B)\sqrt{-BC}}{D^{2}}\left( \frac{D^{2}+C-B}{D^{2}}\right),\] \[\mathcal{Q}_{v}(u_{0},v_{0}) =\frac{8AC}{D^{2}}+2C\left(\frac{D^{2}+C-B}{D^{2}}\right)^{2},\]
where \(D:=\sqrt{B-A-C}+\sqrt{-A}.\) Notice that with our choice of the coefficients \(A,B\) and \(C,\) the term \(D\) is imaginary and so \(D^{2}<0.\) We take
\[F=\frac{8A}{D^{2}}+2\left(\frac{D^{2}+C-B}{D^{2}}\right)^{2},\]
then the characteristic polynomial \(c(\lambda)\) for the Jacobian matrix \(J_{(u_{0},v_{0})}\) is
\[c(\lambda)=\lambda^{2}-(B+C)F\lambda+BCF^{2}+\mathcal{P}_{v}^{2}(u_{0},v_{0}). \tag{5.6}\]
The roots of this polynomial are by definition the eigenvalues of the Jacobian, and from these eigenvalues, we shall determine the nature of the singular point. The discriminant, \(\Delta,\) of \(c(\lambda)\) is
\[\Delta:= (B+C)^{2}F^{2}-4BCF^{2}-4\mathcal{P}_{v}^{2}(u_{0},v_{0})\] \[= (C-B)^{2}F^{2}-4\mathcal{P}_{v}^{2}(u_{0},v_{0})\] \[= ((C-B)F+2\mathcal{P}_{v}(u_{0},v_{0}))((C-B)F-2\mathcal{P}_{v}(u_ {0},v_{0})).\]
Now,
\[(C-B)F+2\mathcal{P}_{v}(u_{0},v_{0})\] \[= \frac{8A(C-B)}{D^{2}}+2(C-B)\left(\frac{D^{2}+C-B}{D^{2}}\right)^ {2}+\frac{32A\sqrt{-BC}}{D^{2}}+\frac{16(C-B)\sqrt{-BC}}{D^{2}}\left(\frac{D^ {2}+C-B}{D^{2}}\right)\] \[= \frac{8A(C-B+4\sqrt{-BC})}{D^{2}}+\frac{2(C-B)(D^{2}+C-B+8\sqrt{ -BC})}{D^{2}}\left(\frac{D^{2}+C-B}{D^{2}}\right).\]
For large enough \(A,\) the expression \((C-B)F+2\mathcal{P}_{v}(u_{0},v_{0})\) is negative. In fact
\[\lim_{A\to\infty}\{(C-B)F+2\mathcal{P}_{v}(u_{0},v_{0})\}\] \[=\lim_{A\to\infty}\left(\frac{8A[(\sqrt{C}+\sqrt{-B})^{2}+2\sqrt{-BC}]}{D^{2}}+\frac{2(C-B)[D^{2}+(\sqrt{C}+\sqrt{-B})^{2}+6\sqrt{-BC}]}{D^{2}}\left(\frac{D^{2}+C-B}{D^{2}}\right)\right).\]
Here \(D^{2}=B-2A-C+2\sqrt{-A}\sqrt{B-A-C}=B-2A-C-2\sqrt{A(A+C-B)}\sim-4A\) as \(A\to\infty\), so the first term tends to \(-2[(\sqrt{C}+\sqrt{-B})^{2}+2\sqrt{-BC}]=-2(C-B)-8\sqrt{-BC}\), while the second term tends to \(2(C-B)\). Hence
\[\lim_{A\to\infty}\{(C-B)F+2\mathcal{P}_{v}(u_{0},v_{0})\}=-8\sqrt{-BC}.\]
In a similar manner
\[(C-B)F-2\mathcal{P}_{v}(u_{0},v_{0})\] \[= \frac{8A(C-B)}{D^{2}}+2(C-B)\left(\frac{D^{2}+C-B}{D^{2}}\right)^ {2}-\frac{32A\sqrt{-BC}}{D^{2}}-\frac{16(C-B)\sqrt{-BC}}{D^{2}}\left(\frac{D^{2 }+C-B}{D^{2}}\right)\] \[= \frac{8A(C-B-4\sqrt{-BC})}{D^{2}}+\frac{2(C-B)(D^{2}+C-B-8\sqrt{ -BC})}{D^{2}}\left(\frac{D^{2}+C-B}{D^{2}}\right).\]
For large enough \(A\), the expression \((C-B)F-2\mathcal{P}_{v}(u_{0},v_{0})\) is positive. In fact,
\[\lim_{A\to\infty}\{(C-B)F-2\mathcal{P}_{v}(u_{0},v_{0})\}=8\sqrt{-BC}.\]
From the above, we conclude that for \(A\) large enough, the discriminant \(\Delta\) of \(c(\lambda)\) is negative and hence the Jacobian matrix at \((u_{0},v_{0})\) has a pair of complex conjugate eigenvalues, say \(\mu\pm i\nu\). Thus we get the following result using [17, Theorem 4, Page-143] and [17, Corollary to Theorem 5, Page-145].
**Theorem 5.3**.: _For \(A\) large enough, the singular point \((u_{0},v_{0})\) is either a center or a focus._
**Remark 5.4**.: If the real part of the eigenvalue \(\mu>0\), then the singular point \((u_{0},v_{0})\) is an unstable focus and if \(\mu<0\), then the singular point \((u_{0},v_{0})\) is a stable focus, see [17, Theorem 4, Page-143].
**Example 5.5**.: We compute the discriminant and also determine the nature of the extra singularities when \(A=5,\)\(B=-1\) and \(C=2.\) In this case, the computation gives that \((B+C)F=-0.0001\) and \(\Delta=-123.54\).
From (5.6), we see that the eigenvalues are given by
\[\frac{(B+C)F\pm\sqrt{\Delta}}{2}.\]
Therefore, the eigenvalues are complex conjugate with a negative real part. This implies that the singular point is a stable focus. A similar calculation will also show that the remaining singularities belonging to the interior of \(D^{2}\) are also stable foci. In Figure 3, we plot the phase portrait of this case.
**Acknowledgments.** The first author was supported by a Senior Research Fellowship of the University Grants Commission of India for the duration of this work. The second author is supported by the Prime Minister's Research Fellowship, Government of India. The third author thanks 'ICSR office IIT Madras' for SEED research grant.
|
2308.00007 | Effects of Phi and $σ^{*}$-meson on properties of hyperon stars
including $Δ$ resonance | In this work, we study the properties of neutron stars using the linear
Relativistic Mean-Field (RMF) theory and consider multiple degrees of freedom
inside neutron stars, including hyperons and $\Delta$ resonances. We
investigate different coupling parameters $x_{\sigma \Delta}$ between $\Delta$
resonances and nucleons and compare the differences between neutron stars with
and without strange mesons $\sigma^*$ and $\phi$. These effects include
particle number distributions, equations of state (EOS), mass-radius relations,
and tidal deformabilities. To overcome the "hyperon puzzle," we employ the
$\sigma-cut$ scheme to obtain neutron stars with masses up to $2M_{\odot}$. We
find that strange mesons appear at around 3$\rho_0$ and reduce the critical
density of baryons in the high-density region. With increasing coupling
parameter $x_{\sigma \Delta}$, the $\Delta$ resonances suppress hyperons,
leading to a shift of the critical density towards lower values. The early
appearance of $\Delta$ resonances may play a crucial role in the stability of
neutron stars. Strange mesons soften the EOS slightly, while $\Delta$
resonances predominantly soften the EOS in the low-density region. By
calculating tidal deformabilities and comparing with astronomical observation
GW170817, we find that the inclusion of $\Delta$ resonances decreases the
radius of neutron stars. | Chen Wu, Wenjun Guo | 2023-07-29T07:58:18Z | http://arxiv.org/abs/2308.00007v5 | Effects of \(\phi\) and \(\sigma^{*}\)-meson on properties of hyperon stars including \(\Delta\) resonance
###### Abstract
In this work, we study the properties of neutron stars using the linear Relativistic Mean-Field (RMF) theory and consider multiple degrees of freedom inside neutron stars, including hyperons and \(\Delta\) resonances. We investigate different coupling parameters \(x_{\sigma\Delta}\) between \(\Delta\) resonances and nucleons and compare the differences between neutron stars with and without strange mesons \(\sigma^{*}\) and \(\phi\). These effects include particle number distributions, equations of state (EOS), mass-radius relations, and tidal deformabilities. To overcome the "hyperon puzzle," we employ the \(\sigma-cut\) scheme to obtain neutron stars with masses up to \(2M_{\odot}\). We find that strange mesons appear at around \(3\rho_{0}\) and reduce the critical density of baryons in the high-density region. With increasing coupling parameter \(x_{\sigma\Delta}\), the \(\Delta\) resonances suppress hyperons, leading to a shift of the critical density towards lower values. The early appearance of \(\Delta\) resonances may play a crucial role in the stability of neutron stars. Strange mesons soften the EOS slightly, while \(\Delta\) resonances predominantly soften the EOS in the low-density region. By calculating tidal deformabilities and comparing with astronomical observation GW170817, we find that the inclusion of \(\Delta\) resonances decreases the radius of neutron stars.
## I Introduction
Observations of massive neutron stars, such as PSR J1614-2230 with a mass of \(1.908\pm 0.016M_{\odot}\), have provided important constraints on the equation of state (EOS) of nuclear matter [1; 2; 3; 4]. Similarly, PSR J0348+0432 with a mass of \(2.01\pm 0.04M_{\odot}\)[5], MSP J0740+6620 with a mass of \(2.08^{+0.07}_{-0.07}M_{\odot}\)[6; 7] and a radius of \(12.39^{+1.30}_{-0.98}\) km (from NICER observations [8]) have also contributed to constraining the EOS. The first multi-messenger gravitational wave event GW170817 observed by LIGO-Virgo Collaboration (LWC) set constraints on the tidal deformability [9; 10] of the involved stars. The compactness-radius relation predicts a radius of \(12\leq R_{1.4}\leq 13\) km for a standard mass neutron star with \(M=1.4M_{\odot}\). These astronomical observations not only constrain the tidal deformability of a \(1.4\ M_{\odot}\) neutron star but also shed light on the strong interactions within dense nuclear matter.
Due to the high core density inside neutron stars, as the nucleon density increases, the strong interaction among hadrons leads to the excitation of hyperons [11; 12; 13; 14], \(\Delta\) resonances [15; 16; 17; 18; 19; 20; 21; 22; 23], or kaon meson condensation [24; 25; 26; 27; 28] in the neutron star interior. These forms of matter have significant impacts on the structure and evolution of the stars. While the presence of hyperons inside neutron stars is unavoidable, their appearance results in a significant softening of the equation of state, leading to a reduction in the maximum mass of neutron stars, commonly referred to as the hyperon puzzle [29; 30; 31]. Despite various speculations about the existence of hyperons in neutron stars, discussions about \(\Delta\) resonances have been limited. This is because the coupling parameters of \(\Delta\) with nucleons are not well determined, either experimentally or theoretically, and different coupling parameters can have a significant impact on the final results [20; 32]. Additionally, the occurrence of \(\Delta\) resonances causes the equation of state to soften, giving rise to a \(\Delta\) puzzle, similar to the hyperon puzzle found in some literature [19]. To ensure the stiffness of the equation of state and the existence of massive neutron stars, researchers have employed linear RMF theory with the inclusion of the sigma-cut scheme [33] and density-dependent functionals to study neutron stars containing hyperons [34; 35; 36; 37; 38; 39; 19].
The resolution of the hyperon puzzle requires additional repulsive interactions between baryons [40] to counteract this attractive mechanism. These repulsive interactions include: (a) Increasing the repulsive hyperon three-body force [30]. (b) Deconfinement phase transition to quark matter below the hyperon threshold [41]. (c) Enhancing the repulsive hyperonic interactions through the exchange of vector mesons [42]. The Relativistic Mean-Field (RMF) theory is an effective field theory used to handle the interactions between hadrons (nucleons) in a relativistic framework. The relevant degrees of freedom in this theory are baryons interacting through the exchange of \(\sigma\), \(\omega\), and \(\rho\) mesons. The scalar meson \(\sigma\) provides intermediate-range attraction, the vector meson \(\omega\) provides short-range repulsion, and the vector-isovector meson \(\rho\) describes the difference between neutrons and protons. Over the years, the \(\sigma\), \(\omega\), and \(\rho\) mesons have been widely applied in various aspects of neutron star research. However, there should also be scalar meson \(\sigma^{*}\) and vector meson \(\phi\) involved in the interactions between hyperons. These two mesons specifically interact between hyperons and do not participate in interactions between nucleons. In this work, we employ an alternative approach to overcome the softening of the equation of state (EOS) caused by multiple degrees of freedom, namely the \(\sigma\)-cut scheme [33]. This scheme suggests that
in the density range \(\rho_{B}>\rho_{0}\), a sharp cut in the growth of the \(\sigma\) meson field strength reduces the drop in the effective mass of nucleons, leading to an increase in the particle chemical potential and ultimately resulting in the stiffening of the EOS [43; 44]. In this article, we use the IUFSU model [45; 46] to study neutron star matter including hyperons, \(\Delta\) resonances, and strange mesons (\(\sigma^{*},\phi\)) with the \(\sigma\)-cut scheme. In Table 1, we present the properties and fundamental constants for baryons other than nucleons.
This paper is organized as follows. First, the theoretical framework is presented. Then we study the effects of strange mesons (\(\sigma^{*},\phi\)) and \(\Delta\) resonances in matter containing hyperons, using the \(\sigma\)-cut scheme. Finally, some conclusions are provided.
## II Theoretical framework
In this section, we introduce the IUFSU model to study the properties of the NS including hyperons and \(\Delta\) resonances. The Lagrangian density of hadron matter is given by:
\[\begin{split}\mathcal{L}=&\sum_{B}\bar{\psi}_{B}[i\gamma^{\mu}\partial_{\mu}-m_{B}+g_{\sigma B}\sigma+g_{\sigma^{*}B}\sigma^{*}-g_{\omega B}\gamma^{\mu}\omega_{\mu}-g_{\phi B}\gamma^{\mu}\phi_{\mu}-g_{\rho B}\gamma^{\mu}\vec{\tau}\cdot\vec{\rho}_{\mu}]\psi_{B}+\\ &\sum_{D}\bar{\psi}_{D}[i\gamma^{\mu}\partial_{\mu}-m_{D}+g_{\sigma D}\sigma-g_{\omega D}\gamma^{\mu}\omega_{\mu}-g_{\rho D}\gamma^{\mu}\vec{\tau}\cdot\vec{\rho}_{\mu}]\psi_{D}+\\ &\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma-\frac{1}{2}m_{\sigma}^{2}\sigma^{2}-\frac{\kappa}{3!}(g_{\sigma N}\sigma)^{3}-\frac{\lambda}{4!}(g_{\sigma N}\sigma)^{4}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}m_{\omega}^{2}\omega_{\mu}\omega^{\mu}+\\ &\frac{1}{2}(\partial_{\mu}\sigma^{*}\partial^{\mu}\sigma^{*}-m_{\sigma^{*}}^{2}\sigma^{*2})+\frac{\xi}{4!}(g_{\omega N}^{2}\omega_{\mu}\omega^{\mu})^{2}+\frac{1}{2}m_{\rho}^{2}\vec{\rho}_{\mu}\cdot\vec{\rho}^{\mu}-\frac{1}{4}\vec{G}_{\mu\nu}\cdot\vec{G}^{\mu\nu}-\\ &\frac{1}{4}\Phi_{\mu\nu}\Phi^{\mu\nu}+\frac{1}{2}m_{\phi}^{2}\phi_{\mu}\phi^{\mu}+\Lambda_{\nu}(g_{\rho N}^{2}\vec{\rho}_{\mu}\cdot\vec{\rho}^{\mu})(g_{\omega N}^{2}\omega_{\mu}\omega^{\mu})+\sum_{l}\bar{\psi}_{l}[i\gamma^{\mu}\partial_{\mu}-m_{l}]\psi_{l},\end{split} \tag{1}\]
with the field tensors
\[\begin{split} F_{\mu\nu}&=\partial_{\mu}\omega_{ \nu}-\partial_{\nu}\omega_{\mu}\\ \vec{G}_{\mu\nu}&=\partial_{\mu}\vec{\rho}_{\nu}- \partial_{\nu}\vec{\rho}_{\mu}\\ \Phi_{\mu\nu}&=\partial_{\mu}\phi_{\nu}-\partial_{ \nu}\phi_{\mu},\end{split} \tag{2}\]
The model includes the baryon octet (\(p\), \(n\), \(\Lambda^{0}\), \(\Sigma^{+}\), \(\Sigma^{0}\), \(\Sigma^{-}\), \(\Xi^{0}\), \(\Xi^{-}\)), two leptons (\(e^{-}\), \(\mu^{-}\)), as well as \(\Delta\) resonances (\(\Delta^{++}\), \(\Delta^{+}\), \(\Delta^{0}\), \(\Delta^{-}\)). The strong interactions between baryons are mediated by the isoscalar-scalar mesons \(\sigma\), \(\sigma^{*}\), the isoscalar-vector mesons \(\omega\), \(\phi\), and the isovector-vector meson \(\rho\), each with their respective masses and coupling constants. The isospin operator \(\vec{\tau}\) is used to represent the isovector-vector meson fields. The parameter \(\Lambda_{\nu}\) is introduced to modify the density dependence of the symmetry energy. The self-interactions of the isoscalar mesons (through the \(\kappa\), \(\lambda\), and \(\xi\) terms) are necessary to obtain an appropriate equation of state for symmetric nuclear matter. In the relativistic mean field (RMF) model, the operators of the meson fields are replaced by their expectation values using the mean field approximation. The parameters of the IUFSU model are listed in Table 2.
Finally, with the Euler-Lagrange equation, the equations of motion for baryons and mesons are obtained:
\[\begin{split}& m_{\sigma}^{2}\sigma+\frac{1}{2}\kappa g_{\sigma N}^{3} \sigma^{2}+\frac{1}{6}\lambda g_{\sigma N}^{4}\sigma^{3}=\sum_{B}g_{\sigma B} \rho_{B}^{S}+\sum_{D}g_{\sigma D}\rho_{D}^{S}\\ & m_{\omega}^{2}\omega+\frac{\xi}{6}g_{\omega N}^{4}\omega^{3}+2 \Lambda_{\nu}g_{\rho N}^{2}g_{\omega N}^{2}\rho^{2}\omega=\sum_{B}g_{\omega B} \rho_{B}+\sum_{D}g_{\omega D}\rho_{D}\\ & m_{\rho}^{2}\rho+2\Lambda_{\nu}g_{\rho N}^{2}g_{\omega N}^{2} \omega^{2}\rho=\sum_{B}g_{\rho B}\tau_{3B}\rho_{B}+\sum_{D}g_{\rho D}\tau_{3D} \rho_{D}\\ & m_{\phi}^{2}\phi=\sum_{B}g_{\phi B}\rho_{B}\\ & m_{\sigma^{*}}^{2}\sigma^{*}=\sum_{B}g_{\sigma^{*}B}\rho_{B}^{S },\end{split} \tag{3}\]
where \(\rho_{B(D)}\) and \(\rho_{B(D)}^{S}\) are the baryon(\(\Delta\)) density and the scalar density, which reads:
\[\begin{split}\rho_{B}&=\gamma\frac{k_{fB}^{3}}{6\pi^{2}}\\ \rho_{B}^{S}&=\gamma\frac{M^{*}}{4\pi^{2}}[k_{fB}E_{fB}^{*}-M^{*2}\ln(\frac{k_{fB}+E_{fB}^{*}}{M^{*}})]\end{split} \tag{4}\]
\(\gamma=2\) for baryons and \(\gamma=4\) for \(\Delta\) resonance. Here \(E_{fB}^{*}=\sqrt{M^{*2}+k_{fB}^{2}}\).
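For concreteness, Eq. (4) is straightforward to evaluate numerically. A small illustrative sketch in Python (natural units with \(\hbar c=197.327\) MeV fm; the sample Fermi momentum and effective mass are arbitrary placeholders, not fitted values):

```python
# Baryon density and scalar density of Eq. (4) for one species.
import numpy as np

HBARC = 197.327   # MeV fm

def densities(k_f, m_star, gamma):
    """k_f and m_star in MeV; returns (rho, rho_S) in fm^-3."""
    e_f = np.sqrt(m_star**2 + k_f**2)                     # E*_f
    rho = gamma * k_f**3 / (6.0 * np.pi**2) / HBARC**3
    rho_s = (gamma * m_star / (4.0 * np.pi**2)
             * (k_f * e_f - m_star**2 * np.log((k_f + e_f) / m_star))
             ) / HBARC**3
    return rho, rho_s

print(densities(k_f=260.0, m_star=700.0, gamma=2))        # illustrative values
```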
Now, we are in a position to discuss the coupling parameters between baryons (nucleons, hyperons and \(\Delta\)) and meson fields.
For the meson-hyperon couplings, we take those in the SU(6) symmetry for the vector couplings constants:
\[\begin{split}& g_{\omega\Lambda}=g_{\omega\Sigma}=2g_{\omega\Xi}=\frac{2}{3}g_{\omega N}\\ & g_{\rho\Lambda}=0,g_{\rho\Sigma}=2g_{\rho\Xi}=2g_{\rho N}\\ & 2g_{\phi\Lambda}=2g_{\phi\Sigma}=g_{\phi\Xi}=\frac{-2\sqrt{2}}{3}g_{\omega N}\end{split} \tag{5}\]
The scalar couplings are typically determined by fitting hyperon potentials with \(U_{Y}^{(N)}=g_{\omega Y}\omega_{0}-g_{\sigma Y}\sigma_{0}\), where \(\sigma_{0}\) and \(\omega_{0}\) are the values of the scalar and vector meson strengths at nuclear saturation density [47]. We choose the hyperon-nucleon potentials of \(\Lambda\), \(\Sigma\), and \(\Xi\) as \(U_{\Lambda}^{N}=-30\)MeV, \(U_{\Sigma}^{N}=30\)MeV, and \(U_{\Xi}^{N}=-18\)MeV [48; 49; 50]. Table 3 provides the numerical values of the meson-hyperon couplings at nuclear saturation density, where \(x_{\sigma Y}=g_{\sigma Y}/g_{\sigma N}\).
The strange mesons interact only with hyperons, so \(g_{\phi N}=g_{\sigma^{*}N}=g_{\phi\Delta}=g_{\sigma^{*}\Delta}=0\). The masses of the strange mesons \(\phi\) and \(\sigma^{*}\) are \(M_{\phi}=1020\)MeV and \(M_{\sigma^{*}}=975\)MeV, respectively.
For the scalar meson \(\sigma^{*}\), we treat its coupling purely phenomenologically so as to satisfy the potential depths \(U_{\Sigma}^{(\Xi)}\simeq U_{\Lambda}^{(\Xi)}\simeq U_{\Xi}^{(\Xi)}\simeq U_{\Lambda}^{(\Lambda)}\simeq 2U_{\Sigma}^{(\Lambda)}=40MeV\). This yields \(g_{\sigma^{*}\Lambda}/g_{\sigma}=g_{\sigma^{*}\Sigma}/g_{\sigma}=0.69\), \(g_{\sigma^{*}\Xi}/g_{\sigma}=1.25\)[51].
Due to the scarcity of experimental data and theoretical calculations regarding the \(\Delta\) resonance, there is uncertainty in the coupling parameters between the \(\Delta\) resonances and the meson fields (\(\sigma,\omega,\rho\)). Therefore, we vary only the coupling with the \(\sigma\) meson field, which has been explored in the literature [52; 53]. We assume that the scalar coupling ratio \(x_{\sigma\Delta}=g_{\sigma\Delta}/g_{\sigma N}>1\) and choose a value close to the mass ratio of the \(\Delta\) and the nucleon [54]. We adopt three different choices for \(x_{\sigma\Delta}\) (\(x_{\sigma\Delta}=1.05\), \(x_{\sigma\Delta}=1.1\), and \(x_{\sigma\Delta}=1.15\)) [55]. For \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\), we take \(x_{\omega\Delta}=g_{\omega\Delta}/g_{\omega N}=1.1\) (we found that \(x_{\omega\Delta}<1\) leads to a maximum neutron star mass significantly below astronomical observations) and \(x_{\rho\Delta}=g_{\rho\Delta}/g_{\rho N}=1\) [56].
When neutrinos are not captured, the set of equilibrium chemical potential relations under general conditions is as follows:
\[\mu_{i}=\mu_{n}-q_{i}\mu_{e^{-}} \tag{6}\]
where \(q_{i}\) is the charge of the \(i\)-th baryon, and the charge neutrality condition is fulfilled by:
\[\sum_{B}q_{B}\rho_{B}+\sum_{D}q_{D}\rho_{D}=\rho_{e}^{-}+\rho_{\mu^{-}}. \tag{7}\]
The chemical potentials of the baryons, \(\Delta\) resonances, and leptons read:
\[\mu_{i}=\sqrt{k_{F}^{i2}+m_{i}^{*2}}+g_{\omega i}\omega+g_{\rho i}\tau_{3i}\rho+g _{\phi i}\phi,i=B,D \tag{8}\]
\[\mu_{l}=\sqrt{k_{F}^{l2}+m_{l}^{2}},l=e^{-},\mu^{-} \tag{9}\]
where \(k_{F}^{i}\) is the Fermi momentum and \(m_{i}^{*}\) is the effective mass of the baryons and \(\Delta\) resonances, which can be related to the scalar meson fields as \(m_{i}^{*}=m_{i}-g_{\sigma i}\sigma-g_{\sigma^{*}i}\sigma^{*}\), and \(k_{F}^{l}\) is the Fermi momentum of the lepton \(l(\mu^{-},e^{-})\).
The total energy density can be given as
\[\begin{split}\varepsilon=&\sum_{i=B,D}\frac{\gamma}{(2\pi)^{3}}\int_{0}^{k_{F}^{i}}\sqrt{m_{i}^{*2}+k^{2}}d^{3}k+\frac{1}{2}m_{\omega}^{2}\omega^{2}\\ &+\frac{\xi}{8}g_{\omega N}^{4}\omega^{4}+\frac{1}{2}m_{\sigma}^{2}\sigma^{2}+\frac{\kappa}{6}g_{\sigma N}^{3}\sigma^{3}+\frac{\lambda}{24}g_{\sigma N}^{4}\sigma^{4}\\ &+\frac{1}{2}m_{\phi}^{2}\phi^{2}+3\Lambda_{\nu}g_{\rho N}^{2}g_{\omega N}^{2}\omega^{2}\rho^{2}+\frac{1}{2}m_{\rho}^{2}\rho^{2}\\ &+\frac{1}{2}m_{\sigma^{*}}^{2}\sigma^{*2}+\frac{1}{\pi^{2}}\sum_{l}\int_{0}^{k_{F}^{l}}\sqrt{k^{2}+m_{l}^{2}}k^{2}dk,\end{split} \tag{10}\]
And the expression of pressure reads
\[P=\sum_{i=B,D}\rho_{i}\mu_{i}+\sum_{l=\mu^{-},e^{-}}\rho_{l}\mu_{l}-\varepsilon, \tag{11}\]
Once the equation of state is specified, the mass-radius relation and other relevant quantities of neutron star can be obtained by solving the Tolman-Oppenheimer-Volkoff (TOV) equation [57].
\[\begin{split}\frac{dP(r)}{dr}=&-\frac{GM(r)}{r^{2} }\varepsilon(1+\frac{4\pi r^{3}P}{M(r)C^{2}})(1+\frac{P}{\varepsilon C^{2}}) \\ &\times(1-\frac{2GM(r)}{rC^{2}})^{-1},\end{split} \tag{12}\]
\[dM(r)=4\pi r^{2}\varepsilon(r)dr \tag{13}\]
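As an illustration of how (12) and (13) are integrated in practice, here is a minimal Python sketch in geometrized units (\(G=c=1\), lengths in km). The simple polytropic EOS merely stands in for the tabulated RMF EOS constructed above; this is not the authors' code.

```python
# Minimal TOV integration sketch (illustrative polytropic EOS, not the
# RMF EOS of the paper). Units: G = c = 1, energy density and pressure
# in km^-2, radius and mass in km.
import numpy as np

K, GAMMA = 100.0, 2.0              # illustrative polytrope: p = K * eps^GAMMA

def eps_of_p(p):
    # Invert p = K*eps^GAMMA (replace with the tabulated RMF EOS).
    return (p / K) ** (1.0 / GAMMA)

def tov_rhs(r, y):
    p, m = y
    if p <= 0.0:
        return np.array([0.0, 0.0])
    eps = eps_of_p(p)
    dpdr = -(eps + p) * (m + 4.0*np.pi*r**3 * p) / (r * (r - 2.0*m))
    dmdr = 4.0*np.pi*r**2 * eps
    return np.array([dpdr, dmdr])

def solve_star(p_c, dr=1e-3):
    r, y = dr, np.array([p_c, 0.0])
    while y[0] > 1e-12 * p_c:
        k1 = tov_rhs(r, y)                      # classical RK4 step
        k2 = tov_rhs(r + dr/2, y + dr/2*k1)
        k3 = tov_rhs(r + dr/2, y + dr/2*k2)
        k4 = tov_rhs(r + dr, y + dr*k3)
        y = y + dr/6*(k1 + 2*k2 + 2*k3 + k4)
        r += dr
    return r, y[1]                              # radius [km], mass [km]

R, M = solve_star(p_c=1e-3)
print(f"R = {R:.2f} km, M = {M/1.4766:.3f} Msun")   # 1 Msun ~ 1.4766 km
```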
The tidal deformability of a neutron star can be expressed in a dimensionless form [58; 59].
\[\Lambda=\frac{2}{3}k_{2}{C_{1}}^{-5} \tag{14}\]
where \(C_{1}=GM/R\) is the compactness, and the second Love number \(k_{2}\) can be determined simultaneously with the structure of the compact star [60].
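Once \(M\), \(R\), and \(k_{2}\) are available, Eq. (14) is a one-line evaluation. In the sketch below, the value of \(k_{2}\) is an assumed placeholder, since computing it requires integrating a metric perturbation equation alongside the TOV equations:

```python
# Dimensionless tidal deformability, Eq. (14): Lambda = (2/3) k2 * C1^-5.
G, C_LIGHT, MSUN = 6.674e-11, 2.998e8, 1.989e30     # SI units

def tidal_lambda(mass_msun, radius_km, k2):
    c1 = G * mass_msun * MSUN / (radius_km * 1e3 * C_LIGHT**2)  # GM/(R c^2)
    return (2.0 / 3.0) * k2 * c1**-5

print(tidal_lambda(1.4, 12.0, 0.09))    # ~4e2 for these assumed inputs
```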
The \(\sigma\)-cut scheme [33] involves introducing an additional sigma self-interaction term in the model [61; 62; 33], which results in a stiffer equation of state (EOS):
\[\Delta U(\sigma)=\alpha\ln(1+exp[\beta(f-f_{s,core})]) \tag{15}\]
In this context, \(f=g_{\sigma N}\sigma/M_{N}\) and \(f_{s,core}=f_{0}+c_{\sigma}(1-f_{0})\). Here, \(M_{N}\) represents the nucleon mass, and \(f_{0}\) is the value of \(f\) at saturation density, which equals 0.31 in the IUFSU model. \(c_{\sigma}\) is a positive parameter that we can adjust. The smaller the value of \(c_{\sigma}\), the stronger the effect of the \(\sigma\)-cut scheme. In our previous work, we extensively discussed the choice of the parameter \(c_{\sigma}\)[43]. In this paper, we adopt \(c_{\sigma}=0.15\) to satisfy the constraint on the maximum mass. The constants \(\alpha\) and \(\beta\) have values of \(4.822\times 10^{-4}M_{N}^{4}\) and 120, respectively, following the settings in Ref. [33]. This scheme stiffens the equation of state by quenching the decrease of the effective nucleon mass \(M_{N}^{*}=M_{N}(1-f)\) above saturation density \(\rho_{0}\).
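Eq. (15) with the quoted parameter values is simple to evaluate; a short illustrative sketch (all numbers taken from the text, with \(M_{N}=939\) MeV assumed):

```python
# Sketch of the sigma-cut potential Delta U(sigma) of Eq. (15), with
# f0 = 0.31, c_sigma = 0.15, alpha = 4.822e-4 * M_N^4, beta = 120.
import numpy as np

M_N = 939.0                            # nucleon mass [MeV] (assumed)
f0, c_sigma = 0.31, 0.15
alpha = 4.822e-4 * M_N**4              # [MeV^4]
beta = 120.0
f_core = f0 + c_sigma * (1.0 - f0)     # f_{s,core}

def delta_U(f):
    """Extra self-interaction; switches on sharply once
    f = g_{sigma N} sigma / M_N exceeds f_core."""
    return alpha * np.log1p(np.exp(beta * (f - f_core)))

for f in (0.31, 0.40, 0.45, 0.50):
    print(f"f = {f:.2f}:  Delta U = {delta_U(f):.3e} MeV^4")
```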
## III Results
First, we studied the effect of the \(\sigma\)-cut scheme on the IUFSU model. In Fig. 1, we plot the ratio of the effective mass to the rest mass of nucleons as a function of the baryon density. The dashed and solid lines represent the cases with and without strange mesons \(\sigma^{*}\) and \(\phi\), respectively. Here, \(\rho_{0}\) is the saturation density, and we chose \(x_{\sigma\Delta}=1.05\) to consider the \(\Delta\) resonance. We can observe that when \(\rho\leq\rho_{0}\), the effective mass is almost the same as in nucleons-only matter and remains unchanged by the \(\sigma\)-cut scheme. This suggests that the scheme does not alter the properties of nuclear matter at saturation, which is crucial. The inclusion of strange mesons \(\sigma^{*}\) and \(\phi\) slightly slows down the drop in effective mass \(M^{*}\), but this effect disappears when the \(\sigma\)-cut scheme is applied. However, when \(\rho>\rho_{0}\), the effective mass drops to around \(0.55M_{N}\), significantly suppressing the \(\sigma\) meson field strength. This is precisely the desired effect achieved by using the \(\sigma\)-cut scheme.
The field strengths of the various mesons are shown in Fig. 2. When the baryon density is approximately \(3\rho_{0}\), the \(\sigma^{*}\) and \(\phi\) mesons emerge, while the field strengths of the \(\sigma\) and \(\omega\) mesons remain nearly unchanged. However, the field strength of the \(\rho\) meson increases in the high-density region (\(6.6\rho_{0}\)). We also see distinctly from Fig. 2 that the field strength of \(\sigma^{*}\) is larger than that of \(\phi\), and both increase with
\begin{table}
\begin{tabular}{c c c c c c c c} Model & \(g_{\sigma}\) & \(g_{\omega}\) & \(g_{\rho}\) & \(\kappa\) & \(\lambda\) & \(\xi\) & \(\Lambda_{\nu}\) \\ IUFSU & 9.9713 & 13.0321 & 13.5899 & 3.37685 & 0.000268 & 0.03 & 0.046 \\ \end{tabular}
\end{table}
Table 2: Parameter sets for the IUFSU model discussed in the text, together with the meson masses \(M_{\sigma}=491.5\,MeV\), \(M_{\omega}=786\,MeV\), \(M_{\rho}=763\,MeV\).
Figure 1: Effective mass of nucleons versus baryon density in NS matter, with and without the \(\sigma\)-cut scheme, and with and without the \(\sigma^{*}\) and \(\phi\) mesons.
the baryon density. When we choose the \(\sigma\)-cut scheme (Fig. 3), the field strength of the \(\sigma\) meson is truncated, and the field strength of the \(\rho\) meson decreases around \(4\rho_{0}\sim 6\rho_{0}\) and subsequently surpasses the case when strange mesons are included; additionally, the strength of the strange mesons will also slightly increase. In RMF theory, the scalar mesons \(\sigma\) and \(\sigma^{*}\) provide attraction and the vector mesons \(\omega\) and \(\phi\) provide repulsion, so these changes in meson strength may give a stiffer EOS.
Fig. 4 illustrates the relative population of particles as a function of baryon density for different values of \(x_{\sigma\Delta}\), namely \(x_{\sigma\Delta}=1.05,1.1,1.15\). The dashed and solid lines represent the cases with and without considering strange mesons (\(\sigma^{*}\), \(\phi\)), respectively. When the strange mesons are taken into account, the critical densities of \(\Xi^{0}\), \(\Xi^{-}\), and \(\Delta^{++}\) shift towards lower density regions, while the critical densities of \(\Delta^{+}\) and \(\Delta^{0}\) shift towards higher density regions. Additionally, with the increase of \(x_{\sigma\Delta}\), the critical densities of \(\Lambda^{0}\), \(\Xi^{0}\), and \(\Xi^{-}\) move towards higher density regions, whereas the critical densities of leptons move towards lower density regions. Notably, as \(x_{\sigma\Delta}\) increases, the \(\Delta^{++}\) resonance starts to appear, and the overall critical density for the \(\Delta\) resonance decreases, pushing the appearance of hyperons to higher density regions. The critical densities of the \(\Delta\) resonances are listed in Table 4.
Next, we examine the effect of the \(\sigma\)-cut scheme on the particle population, which is plotted in Fig. 5. As \(\mu^{-}\) decreases, \(\Delta^{+}\) and \(\Delta^{++}\) increase, while the charge
\begin{table}
\begin{tabular}{c|c c c} \(\Delta\) & \multicolumn{3}{c}{\(n_{cr}\) (without \(\sigma^{*}\phi\)) / (with \(\sigma^{*}\phi\))} \\ & \(x_{\sigma\Delta}=1.05\) & \(x_{\sigma\Delta}=1.1\) & \(x_{\sigma\Delta}=1.15\) \\ \hline \(\Delta^{++}\) & / & 7.54/7.73 & 6.27/6.24 \\ \(\Delta^{+}\) & 8.60/8.83 & 6.53/6.60 & 5.01/5.04 \\ \(\Delta^{0}\) & 6.53/6.66 & 4.49/4.56 & 3.52/3.55 \\ \(\Delta^{-}\) & 2.13/2.13 & 1.90/1.90 & 1.74/1.74 \\ \end{tabular}
\end{table}
Table 4: Threshold densities \(n_{cr}\) (in units of \(\rho/\rho_{0}\)) for the \(\Delta\) resonances in dense nuclear matter for different values of \(x_{\sigma\Delta}\), without the \(\sigma\)-cut scheme.
balance conditions lead to the increase of \(\Xi^{-}\) and \(\Delta^{-}\), suggesting that baryons are more favorable as neutralizers of positive charges compared to leptons. As \(x_{\sigma\Delta}\) increases from 1.05 to 1.15, the critical density of the \(\Delta\) resonance shifts to lower density while the critical densities of the hyperons move toward the high-density region; in particular, when \(x_{\sigma\Delta}=1.15\), the critical density of \(\Delta^{0}\) moves below that of \(\Lambda^{0}\). Although the \(\sigma\)-cut scheme significantly affects the critical density distribution of various particles, it does not change the relationship between the \(\Delta\) resonance and the strange mesons (\(\sigma^{*}\), \(\phi\)) as \(x_{\sigma\Delta}\) varies.
Fig. 6 shows the pressure as a function of energy density in neutron star matter containing \(\Delta\) resonances without the \(\sigma\)-cut scheme. The dashed line represents the case when strange mesons \(\sigma^{*}\) and \(\phi\) are considered, while the solid line represents the case without considering strange mesons. Although not particularly significant in the low-energy density region, in the high-energy density region, the presence of strange mesons slightly stiffens the equation of state due to their attractive effect. As \(x_{\sigma\Delta}\) increases, the equation of state becomes softer in the energy density range from \(300MeV/fm^{3}\) to \(600MeV/fm^{3}\). However, for energy densities greater than \(600MeV/fm^{3}\), it becomes significantly stiffer compared to the case when only hyperons are included. This suggests the existence of a softer equation of state in the low-density region, which ultimately constrains the radius of the neutron star, while the maximum mass does not show significant changes.
When considering the \(\sigma\)-cut scheme, we plot the equation of state (EOS) in Fig. 7, where the dashed and solid lines represent the cases with and without strange mesons, respectively. We can observe that the \(\sigma\)-cut scheme significantly stiffens the EOS, and this is due to the truncation of the \(\sigma\) meson field strength as shown in Fig. 3. Interestingly, in this case, the inclusion of strange
Figure 7: Pressure versus energy density with the \(\sigma\)-cut scheme(\(c_{\sigma}=0.15\)). The black solid line is for n, p, leptons and hyperons whereas others are with additional \(\Delta\) resonance, dotted lines contain \(\sigma^{*}\) and \(\phi\) mesons.
mesons actually softens the EOS in the high-energy density region. Moreover, the \(\sigma\)-cut scheme retains the softening feature of the EOS in the low-density region. Compared to the case without the \(\sigma\)-cut scheme, the softening region shifts by approximately \(50MeV/fm^{3}\) towards the low-density region. By obtaining the EOS through this approach, we can solve the TOV equation to produce neutron stars with masses up to \(2M_{\odot}\), effectively eliminating the "hyperon puzzle."
The mass-radius relationship for neutron stars (NS) discussed here is depicted in Fig. 8. The shaded bands represent the constraints imposed by the observables of massive neutron stars, namely PSR J1614-2230 [1; 2; 3; 4] and PSR J0348+0432 [5]. In 2019, the Neutron Star Interior Composition Explorer (NICER) collaboration reported precise measurements of the mass and radius of PSR J0030+0451 [63], and in 2021, they reported on MSP J0740+6620 [6]. The left solid lines in the figure, without \(\sigma\)-cut, demonstrate that different coupling parameters \(x_{\sigma\Delta}\) have a notable impact on the maximum mass and radius of the neutron star. It reveals that the \(\Delta\) resonance decreases the maximum mass and radius of the neutron star. As \(x_{\sigma\Delta}\) increases (from 1.05 to 1.15), the maximum mass decreases. The right solid lines represent \(c_{\sigma}=0.15\), which significantly boosts the maximum mass of the neutron star beyond \(2M_{\odot}\), in agreement with the constraints from gravitational waves and NICER (MSP J0740+6620). However, the addition of \(\sigma^{*}\) and \(\phi\) makes no significant difference to the maximum mass and radius of neutron stars. Table 5 presents the simultaneous measurements of the radius for MSP J0740+6620 and PSR J0030+0451 using NICER data, along with the maximum mass of the neutron star for various values of \(x_{\sigma\Delta}\).
Another important constraint is the tidal deformability of the compact stars. In Fig. 9 we present the tidal deformabilities of compact stars corresponding to those in Fig. 8. Based on the gravitational wave data from the binary neutron star merger event GW170817, the tidal deformability at \(1.4M_{\odot}\) was extracted as \(\Lambda_{1.4}=190^{+390}_{-120}\)[64]. From the figure, it is evident that the \(\sigma\)-cut scheme with a stiffer equation of state (EOS) yields larger values of \(\Lambda_{1.4}\) and heavier masses. However, these values of \(\Lambda_{1.4}\) fall outside the constraint set by GW170817. On the other hand, the softer EOS, without the \(\sigma\)-cut scheme, satisfies the GW170817 constraint, resulting in smaller radii. Additionally, the inclusion of the \(\Delta\) resonance maintains \(\Lambda_{1.4}\) within the bounds of GW170817. These findings indicate that considering the \(\Delta\) resonance in the softer EOS is necessary, given the strong constraint imposed by the observational tidal deformability of compact stars during the GW170817 event. Furthermore, future gravitational wave events from binary neutron star mergers are expected to provide measurements of the neutron star's tidal deformability at \(2.0M_{\odot}\).
## IV Summary
In this paper, we have discussed the \(\Delta\) resonance and strange mesons (\(\sigma^{*},\phi\)) within neutron stars under the IUFSU model, prompted by recent astronomical observations that are rapidly yielding results on the radii and tidal deformabilities of compact stars. However, the maximum masses
Figure 8: Mass-radius relation using and not using \(\sigma\)-cut scheme in NS matter including hyperons and \(\Delta\) resonance, the dotted line indicates that considering \(\sigma^{*}\) and \(\phi\). The horizontal bars indicate the observational constraints of PSR J1614 - 2230 [1; 2; 3; 4], PSR J0348 + 0432 [5], MSP J0740 + 6620 [6] and PSR J0030-0451 [63].
Figure 9: The dimensionless tidal deformability as a function of star mass. The solid line indicates without \(\sigma\)-cut scheme, the dotted line indicates that considering \(\sigma^{*}\) and \(\phi\). And the constraints from GW170817 event for tidal deformability is shown.
of neutron stars generated by the softer equation of state (EOS) (known as the hyperon puzzle) fail to approach \(2.0M_{\odot}\), thereby not satisfying the constraints from observations of massive neutron stars. Consequently, we employ the \(\sigma\)-cut scheme, resulting in a maximum mass exceeding \(2M_{\odot}\).
We investigated the impact of strange mesons on neutron stars and found that within a neutron star, \(\sigma^{*}\) and \(\phi\) mesons shift the critical density of hyperons towards the low-density region. However, with the inclusion of the \(\Delta\) resonance, strange mesons lead to a shift of the critical density of the \(\Delta\) resonance towards the high-density region. As the coupling parameter \(x_{\sigma\Delta}\) increases, the \(\Delta\) resonance appears earlier, suggesting that the presence of strange mesons affects the critical density of the \(\Delta\) resonance. These results indicate that although \(\sigma^{*}\) and \(\phi\) interact only with hyperons, once the \(\Delta\) resonance is considered they still play a minor role in the overall interactions between baryons. Additionally, the inclusion of strange mesons slightly increases the mass, but the variations are not significant. Interestingly, when the \(\sigma\)-cut scheme is considered, the \(\sigma^{*}\) and \(\phi\) mesons lead to a softening of the equation of state.
Furthermore, we explore the effect of \(x_{\sigma\Delta}\) on the \(\Delta\) resonance. For the \(\Delta\) coupling constants, we consider \(x_{\sigma\Delta}=1.05\), \(1.1\), and \(1.15\). The value of \(x_{\sigma\Delta}\) significantly influences the relative population of particles as a function of the baryon density. We observe that the inclusion of the \(\Delta\) resonance shifts the critical density of hyperons towards the high-density region as \(x_{\sigma\Delta}\) increases from \(1.05\) to \(1.15\), while the critical density of the \(\Delta\) resonance moves towards the low-density region. This suggests that an early appearance of the \(\Delta\) resonance may contribute to the stability of neutron stars. Furthermore, with increasing \(x_{\sigma\Delta}\), the equation of state softens in the low-density region, resulting in a significant reduction in the radius of neutron stars, while the maximum mass remains almost unchanged.
When not using the \(\sigma\)-cut scheme, the softer equation of state considering the \(\Delta\) resonance still falls within the \(\Lambda_{1.4}\) range of GW170817 and results in smaller radii. When we employ the \(\sigma\)-cut scheme with \(c_{\sigma}=0.15\), we observe that the maximum mass and radius of neutron stars obtained align closely with the constraints from NICER (MSP J0740+6620). However, the tidal deformability exceeds the constraint from GW170817. For neutron stars with masses exceeding \(2M_{\odot}\), future gravitational wave events from binary neutron star mergers may provide new constraints on tidal deformability.
|
2303.02562 | The First Comprehensive Dataset with Multiple Distortion Types for
Visual Just-Noticeable Differences | Recently, with the development of deep learning, a number of Just Noticeable
Difference (JND) datasets have been built for JND modeling. However, all the
existing JND datasets only label the JND points based on the level of
compression distortion. Hence, JND models learned from such datasets can only
be used for image/video compression. As known, JND is a major characteristic of
the human visual system (HVS), which reflects the maximum visual distortion
that the HVS can tolerate. Hence, a generalized JND modeling should take more
kinds of distortion types into account. To benefit JND modeling, this work
establishes a generalized JND dataset with a coarse-to-fine JND selection,
which contains 106 source images and 1,642 JND maps, covering 25 distortion
types. To this end, we proposed a coarse JND candidate selection scheme to
select the distorted images from the existing Image Quality Assessment (IQA)
datasets as JND candidates instead of generating JND maps ourselves. Then, a
fine JND selection is carried out on the JND candidates with a crowdsourced
subjective assessment. | Yaxuan Liu, Jian Jin, Yuan Xue, Weisi Lin | 2023-03-05T03:12:57Z | http://arxiv.org/abs/2303.02562v2 | The First Comprehensive Dataset with Multiple Distortion Types for Visual Just-Noticeable Differences
###### Abstract
Recently, with the development of deep learning, a number of Just Noticeable Difference (JND) datasets have been built for JND modeling. However, all the existing JND datasets only label the JND points based on the level of compression distortion. JND models learned from such datasets can only be used for image/video compression. Hence, a generalized JND modeling should take more kinds of distortion types into account. To benefit JND modeling, this work establishes a generalized JND dataset with a coarse-to-fine JND selection, which contains 106 source images and 1,642 JND maps, covering 25 distortion types. To this end, we proposed a coarse JND candidate selection scheme to select the distorted images from the existing Image Quality Assessment (IQA) datasets as JND candidates instead of generating JND maps ourselves. Then, a fine JND selection is carried out on the JND candidates with a crowdsourced subjective assessment.
Yaxuan Liu\({}^{a,\star}\) Jian Jin\({}^{b,\star}\) Yuan Xue\({}^{c}\) Weisi Lin\({}^{b}\)\({}^{a}\) Harbin Engineering University, College of Intelligent Systems Science and Engineering, Harbin, China
\({}^{b}\) Nanyang Technological University, School of Computer Science and Engineering, Singapore
\({}^{c}\) Fudan University, School of Software, Shanghai, China

Index terms: Just Noticeable Difference, Human Visual System, Mean Opinion Scores (MOS), perception modeling, dataset
## 1 Introduction
JND is a metric for assessing the visual redundancy of the HVS, and it has been widely applied to computer vision and multimedia signal processing applications such as perceptual image and video compression [1][2], image enhancement [3], watermarking [4], and so on. JND modeling, which has been studied for many years, tries to precisely predict the visual redundancy of the HVS for a given visual content. Traditional JND models [5][6] tried to predict the JND threshold for each pixel or each coefficient of the sub-bands based on the features of the HVS and their associated maskings. Recently, a few works tried to extend the JND concept to a picture level and proposed the picture-wise JND. To utilize the powerful deep learning techniques in JND modeling, lots of JND datasets [7, 8, 9] were built. Jin et al. [7] created the MCL-JCI dataset, which consists of 50 source images in the \(1920\times 1080\) resolution and 5,000 corresponding JPEG distorted versions with quality factors (QF) ranging from 1 to 100. They conducted subjective quality assessment tests involving 150 participants watching the source image and a compressed image side by side on a TV. JND samples were gathered from 30 individuals for a specific image. Then, Shen et al. [8] used 202 source images and 7,878 encoded versions of those images with a resolution of \(1920\times 1080\) to create a JND dataset based on the upcoming video coding standard Versatile Video Coding (VVC). Each source image was compressed by VTM5.0 coding with quantization parameters (QP) ranging from 13 to 51. The subjective tests were conducted in a carefully monitored lab setting, with 20 subjects evaluating 20 PJND samples for each source image. Lin et al. [9] expanded the datasets mentioned above and built a large PJND dataset called KonJND-1k. They conducted subjective JND assessment studies using the flicker test via crowdsourcing instead of in a laboratory environment. This dataset contains 1,008 source images as well as distorted versions created using JPEG and BPG compression. The study involved 503 workers in total, resulting in 61,030 PJND samples and an average of 42 samples per source image. However, all of these existing JND datasets only took into consideration the impacts of compression distortion without
Figure 1: Illustration of some source images in our dataset
considering the effects of other distortion types. That is, the JND points were labeled based on the levels of compression distortion, _e.g._, Quality Factor (QF), Quantization Parameter (QP), _etc_. Therefore, JND models learned from such kinds of datasets were limited to compression-relevant applications. In order to learn a JND model with wider applications, establishing a generalized JND dataset that covers all existing distortion types is necessary and significant. However, the challenging part is how to obtain distorted images that cover all distortion types with different distortion levels so that JND maps can be selected from them.
Recently, many IQA datasets [10, 11, 12, 13] have been established, which contain source images and their associated distortion images as well as subjective scores, _e.g._, Mean Opinion Scores (MOS). Commonly, their distortion images cover various distortion types under various distortion levels. In view of this, we can directly pick out the JND maps from the distorted images in the IQA datasets. On the one hand, the generation of distortion images that cover all distortion types with different distortion levels can be saved. On the other hand, as the MOS reflects the perceptual quality of distorted images, it can be used for selecting JND maps. Therefore, we directly select the JND maps from IQA datasets with a proposed coarse-to-fine JND map selection scheme, which contains two main steps: 1) coarse JND candidate selection with MOS as the threshold and 2) fine JND map selection with crowdsourced subjective assessment. The first step gives us a fast JND candidate selection from the huge pool of distorted images, which saves time and human resources in the subsequent subjective assessment. The second step subjectively confirms the final JND maps from the candidates selected in the first step. Finally, we establish a generalized JND dataset, containing 106 source images (part of them are shown in Fig. 1) and 1,642 JND maps, with 25 distortion types being involved. It should be mentioned that this is the first JND dataset that covers various distortion types.
## 2 Coarse-to-fine JND Selection
### Coarse JND Candidate Selection
To select JND images from these datasets, we use the MOS of distorted images as the preliminary selection criterion, since the MOS is a subjective viewing score, reflecting the perceptual quality of distorted images among all the subjects. Commonly, the higher the distortion image quality, the smaller the difference between it and its associated source image, and vice versa. Therefore, we can use MOS to select distorted images with high perceptual quality as the JND maps, so as to reduce the amount of data in subsequent subjective tests. The involved IQA datasets in this work include TID2013 [13], TID2008 [12], and KADID-10k [10], which have a total of 14,825 distortion images from 106 source images, and 39 distortion types, each with 4 or 5 distortion levels.
Since different IQA datasets use different algorithms to subjectively evaluate distortion images, the range of the obtained MOS values is different. In view of this, we first normalize the MOS values. This process can be formulated as
\[Q=\frac{P-Min}{Max-Min}, \tag{1}\]
where \(P\) represents the MOS value of the distortion image in a certain IQA dataset, \(Min\) and \(Max\) are the minimum and maximum values of the MOS in the current IQA dataset, and \(Q\) represents the normalized MOS. The range of the normalized MOS is [0,1]. Fig. 2 shows the MOS distribution of each IQA dataset after normalization, and it can be seen that the MOS distribution in different datasets is different. Among them, the MOS distributions of TID2013 and TID2008 approximately follow a normal distribution, while the KADID-10k dataset is relatively evenly distributed in each interval. To determine a threshold that can distinguish between JND and non-JND maps, we randomly sample 10%-20% of the source images and all their corresponding distorted images from each IQA dataset and then pick out JND maps from them through a subjective viewing test by well-trained and experienced subjects. We only sample 10%-20% of the whole images because the number of images is huge and we do not need a highly accurate threshold owing to the subsequent fine JND map selection. As a result, 112 JND maps are confirmed out of 2,190 distorted images with MOS above 0.7. To determine the MOS threshold, we calculate the average MOS of the confirmed JND images for each image dataset and then reduce the calculated average MOS by about 5% so that more JND-eligible maps can be involved. Then, we obtain the MOS thresholds 0.855 for the KADID-10k dataset, and 0.795 for the TID2013 and TID2008 datasets, respectively. Afterward, we use the MOS thresholds above to pick out images from
Figure 2: Histogram of the distribution of normalized MOS in TID2008, TID2013, and KADID-10k datasets, which shows that different IQA datasets have different MOS distributions.
their corresponding datasets as the JND candidates. In this way, we coarsely obtain 1,962 JND candidates.
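As a minimal sketch of this coarse selection step (Eq. 1 plus per-dataset thresholding), the following Python snippet illustrates the procedure; the record layout and key names (`dataset`, `mos`) are hypothetical stand-ins for however the IQA metadata is actually stored, while the thresholds are the ones derived above.

```python
# Coarse JND candidate selection: normalize MOS per dataset (Eq. 1),
# then keep distorted images whose normalized MOS exceeds the threshold.

MOS_THRESHOLDS = {"KADID-10k": 0.855, "TID2013": 0.795, "TID2008": 0.795}

def normalize_mos(records):
    """Normalize raw MOS values to [0, 1] per dataset (Eq. 1)."""
    by_dataset = {}
    for r in records:
        by_dataset.setdefault(r["dataset"], []).append(r["mos"])
    ranges = {d: (min(v), max(v)) for d, v in by_dataset.items()}
    for r in records:
        lo, hi = ranges[r["dataset"]]
        r["q"] = (r["mos"] - lo) / (hi - lo)
    return records

def coarse_select(records):
    """Keep distorted images whose normalized MOS reaches the dataset threshold."""
    return [r for r in normalize_mos(records) if r["q"] >= MOS_THRESHOLDS[r["dataset"]]]

if __name__ == "__main__":
    demo = [{"dataset": "TID2013", "mos": 6.9},
            {"dataset": "TID2013", "mos": 1.2},
            {"dataset": "TID2013", "mos": 5.0}]
    print(coarse_select(demo))  # only the highest-quality image survives
```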
### Fine JND Map Selection
After the coarse JND selection, some JND candidates remain in which differences from the associated source images can still be noticed. We therefore conduct a crowdsourced subjective assessment of the JND candidates to achieve fine JND selection by discarding the non-JND maps. The crowdsourced subjective assessment was conducted on Amazon Mechanical Turk (AMT), where requesters--companies, organizations, or individuals--create and submit tasks requiring human intelligence for workers, who may be paid for each task successfully completed.
In this work, we adopt the flicker test used in [14], in which a JND candidate and its associated source image are toggled back and forth at a frequency of 8 Hz. We offer the subjects three options: obvious flicker, slight flicker, and no flicker, defined as follows. Obvious flicker indicates that the flicker is intense and affects the entire image. Slight flicker means that subtle changes can be seen in the image, such as a slight variation in image brightness; these changes can resemble mild film grain in a static movie scene and are often localized to small regions of the image. No flicker means the image appears static: there is no visible change, however small.
According to the definition of JND in Sec. 1, candidates rated as no flicker or slight flicker are selected as the final JND maps. We randomly divide all 1,962 JND candidates into 103 groups of 19 or 20 candidates each, and each group is assessed by 30 workers (subjects), yielding a total of \(1962\times 30\) results.
Since the workers' screens may have different sizes and resolutions, resulting in different physical sizes of the images displayed on screen, we carry out the calibration process of [9] to make each image appear at the same physical size on all workers' computers. That is, workers must prepare a card the size of a credit card (\(85mm\times 53.98mm\)) and resize a frame on the screen to fit the card. By computing the Logical Pixel Density (LPD) of the display in Pixels Per Inch (PPI), we can estimate the physical size of the display; for more details, refer to [9]. Following the calibration, we instruct the workers to set their viewing distance to \(30cm\), in accordance with trigonometric calculation [15], [16] and the ISO standard [17].
After passing a training phase, the workers enter the assessment phase, in which each task contains 19 or 20 images and the workers choose the most suitable of the three options described above: no flicker, slight flicker, or obvious flicker.
## 3 Results Processing and Analysis
### Outlier Removal
To obtain reliable results, we need to remove outliers from the collected data, since we cannot guarantee that all workers completed the test with due seriousness: some workers may have chosen options randomly, or may have submitted unreliable answers after being tested for a long time. We adopt the result-processing method of ITU-R Recommendation BT.500 [18] to remove unreliable results. We use the scores 1, 2, and 3 to represent no flicker, slight flicker, and obvious flicker, respectively. Then, we calculate the mean and standard deviation of the scores rated by all subjects for each image as follows:
\[\bar{u}_{j}=\frac{1}{N}\sum_{i=1}^{N}u_{ij}, \tag{2}\]
\[S_{j}=\sqrt{\sum_{i=1}^{N}\frac{\left(\bar{u}_{j}-u_{ij}\right)^{2}}{\left(N- 1\right)}}. \tag{3}\]
\(u_{ij}\) denotes the score rated by the \(i\)-th subject on the \(j\)-th image, where \(i=1,2,...,N\) and \(j=1,2,...,M\). \(\bar{u}_{j}\) and \(S_{j}\) are the mean and standard deviation of scores rated by all the subjects for the \(j\)-th image. Subsequently, we adopt the confidence interval in [18] for outlier removal, that is
\[\left[\bar{u}_{j}-\delta_{j},\bar{u}_{j}+\delta_{j}\right],\text{where }\delta_{j}=1.96\frac{S_{j}}{\sqrt{N}}. \tag{4}\]
\(\left[\cdot\right]\) denotes the rounding operation applied to the interval bounds. Any \(u_{ij}\) outside this rounded interval is considered an outlier. After removing the outliers, we recalculate the average score of each image and regard images with an average score below 2.5 as JND maps.
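The screening procedure of Eqs. 2-4 can be summarized in a few lines; the sketch below assumes `scores` holds the ratings of one image by all subjects (1 = no flicker, 2 = slight flicker, 3 = obvious flicker) and, as a fallback, keeps all scores when the rounded interval would discard everything.

```python
import math

def is_jnd(scores, jnd_cutoff=2.5):
    """BT.500-style screening (Eqs. 2-4): drop scores outside the rounded 95%
    confidence interval around the mean, then compare the recomputed mean
    against the JND cutoff."""
    n = len(scores)
    mean = sum(scores) / n                                            # Eq. 2
    std = math.sqrt(sum((mean - u) ** 2 for u in scores) / (n - 1))   # Eq. 3
    delta = 1.96 * std / math.sqrt(n)                                 # Eq. 4
    lo, hi = round(mean - delta), round(mean + delta)                 # [.] = rounding
    kept = [u for u in scores if lo <= u <= hi] or scores
    return sum(kept) / len(kept) < jnd_cutoff

# Example: 30 ratings dominated by 'no/slight flicker' -> accepted as a JND map.
ratings = [1] * 18 + [2] * 10 + [3] * 2
print(is_jnd(ratings))  # True
```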
### Result Analysis
As a result, we select a total of 1,642 distorted images (corresponding to 106 source images) as JND maps, covering 25 distortion types. Table 1 lists the 25 distortion types, their numbers of JND maps, and the ratio of the number of JND maps to the total number of samples. The table shows that the number of JND maps varies greatly across distortion types, for two reasons. First, since we select JND maps from the three IQA datasets KADID-10k, TID2008, and TID2013, some distortion types, such as comfort noise and sparse sampling and reconstruction, are not contained in all three datasets, so the number of samples differs across distortion types; a distortion type with few samples will likely yield few JND maps, and vice versa. Second, the HVS has different sensitivities to different types of distortion [6]. For a distortion type to which the HVS is highly sensitive, even low-level distortion may be perceived, so fewer JND maps can be obtained from the IQA datasets for that type, and vice versa. Besides, for each distortion type, we also report the ratio of the number of JND maps to the total number of samples; differences in this ratio reflect, to a certain extent, which types of distortion the HVS is more sensitive to.
A good JND dataset should contain a wide variety of visual content. To demonstrate the diversity of the JND maps in our dataset, we calculate the spatial information (SI) [19] and the colorfulness (CF) [20] of the 106 source images, as shown in Fig. 3. Spatial information reflects the spatial complexity of an image, and colorfulness reflects the richness of its colors. Fig. 3 shows that the selected source images indeed cover this plane widely. Furthermore, we divide the 106 source images into 8 semantic categories, as shown in Table 2.
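For reference, SI and CF can be computed as in the sketch below; it assumes the usual definitions, SI as the standard deviation of a Sobel-filtered luminance plane [19] and the Hasler-Suesstrunk colorfulness metric [20], and uses NumPy/SciPy.

```python
import numpy as np
from scipy.ndimage import sobel

def spatial_information(luma):
    """SI: standard deviation of the Sobel gradient magnitude of the luminance."""
    gx = sobel(luma.astype(float), axis=0)
    gy = sobel(luma.astype(float), axis=1)
    return np.hypot(gx, gy).std()

def colorfulness(rgb):
    """CF (Hasler & Suesstrunk): statistics of the rg and yb opponent channels."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg, yb = r - g, 0.5 * (r + g) - b
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # stand-in source image
print(spatial_information(img.mean(axis=2)), colorfulness(img))
```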
## 4 Conclusions
A JND dataset containing 106 source images and 1,642 JND maps, covering 25 distortion types, has been established in this work. To build it, we coarsely select JND candidates from IQA datasets using MOS thresholds and then subjectively select the final JND maps from the candidates through crowdsourced subjective assessments, which greatly saves the time and human resources of generating distorted images ourselves. This is the first JND dataset covering multiple distortion types; it can be used to learn JND models for more distortion types than the compression distortions covered by existing JND datasets, and thus has a wider range of applications. For example, it can be used to build a JND model that optimizes image transmission and reduces the image distortion caused by packet loss. Furthermore, considering that pictures on social media now contain multiple distortions, this dataset can also be used to build a more generalized JND model, which can well simulate
| No. | Distortion type | Num. | Ratio |
| --- | --- | --- | --- |
| 01 | Additive Gaussian noise | 19 | 0.030 |
| 02 | Additive noise in color components | 115 | 0.183 |
| 03 | Masked noise | 50 | 0.222 |
| 04 | High frequency noise | 38 | 0.167 |
| 05 | Denoise | 50 | 0.079 |
| 06 | Multiplicative noise | 55 | 0.104 |
| 07 | Comfort noise | 18 | 0.144 |
| 08 | Motion blur | 91 | 0.225 |
| 09 | Gaussian blur | 128 | 0.203 |
| 10 | Color diffusion | 60 | 0.148 |
| 11 | Color shift | 25 | 0.062 |
| 12 | Color quantization | 25 | 0.047 |
| 13 | Color saturation | 70 | 0.132 |
| 14 | JPEG | 95 | 0.151 |
| 15 | JPEG transmission errors | 26 | 0.116 |
| 16 | JPEG2000 | 123 | 0.195 |
| 17 | Darken | 138 | 0.341 |
| 18 | Brighten | 68 | 0.168 |
| 19 | Mean shift | 233 | 0.370 |
| 20 | Jitter | 40 | 0.099 |
| 21 | Non-eccentricity patch | 25 | 0.040 |
| 22 | Quantization | 36 | 0.089 |
| 23 | Contrast change | 54 | 0.086 |
| 24 | Sparse sampling and reconstruction | 13 | 0.104 |
| 25 | Chromatic aberrations | 47 | 0.376 |

Table 1: Numbers of JND maps and the ratio of the number of JND maps to the total number of samples for each distortion type.
Figure 3: Distribution of spatial information and colorfulness of source images in our dataset.
| Content | Number |
| --- | --- |
| People | 12 |
| Animals | 16 |
| Plants | 12 |
| Objects | 21 |
| Landscape | 9 |
| Outdoor | 15 |
| Building | 12 |
| Nature | 9 |

Table 2: Number of source images in each semantic category.
the perception of human vision for any type of distortion, thus revealing characteristics of the human visual system and improving the quality of images on social media sites. Besides, our dataset is readily extensible. In future work, we will extend it by applying the proposed coarse-to-fine JND selection scheme to more IQA datasets, such as PIPAL [21], CSIQ [11], and KADIS-700k [22].
|
2304.09490 | Neural Network Quantisation for Faster Homomorphic Encryption | Homomorphic encryption (HE) enables calculating on encrypted data, which
makes it possible to perform privacypreserving neural network inference. One
disadvantage of this technique is that it is several orders of magnitudes
slower than calculation on unencrypted data. Neural networks are commonly
trained using floating-point, while most homomorphic encryption libraries
calculate on integers, thus requiring a quantisation of the neural network. A
straightforward approach would be to quantise to large integer sizes (e.g. 32
bit) to avoid large quantisation errors. In this work, we reduce the integer
sizes of the networks, using quantisation-aware training, to allow more
efficient computations. For the targeted MNIST architecture proposed by Badawi
et al., we reduce the integer sizes by 33% without significant loss of
accuracy, while for the CIFAR architecture, we can reduce the integer sizes by
43%. Implementing the resulting networks under the BFV homomorphic encryption
scheme using SEAL, we could reduce the execution time of an MNIST neural
network by 80% and by 40% for a CIFAR neural network. | Wouter Legiest, Jan-Pieter D'Anvers, Furkan Turan, Michiel Van Beirendonck, Ingrid Verbauwhede | 2023-04-19T08:22:28Z | http://arxiv.org/abs/2304.09490v2 | # Neural Network Quantisation
###### Abstract
Homomorphic encryption (HE) enables calculating on encrypted data, which makes it possible to perform privacy-preserving neural network inference. One disadvantage of this technique is that it is several orders of magnitude slower than calculation on unencrypted data. Neural networks are commonly trained using floating-point, while most homomorphic encryption libraries calculate on integers, thus requiring a quantisation of the neural network. A straightforward approach would be to quantise to large integer sizes (e.g. \(32\,\mathrm{bit}\)) to avoid large quantisation errors. In this work, we reduce the integer sizes of the networks, using quantisation-aware training, to allow more efficient computations. For the targeted MNIST architecture proposed by Badawi et al. [1], we reduce the integer sizes by 33% without significant loss of accuracy, while for the CIFAR architecture, we can reduce the integer sizes by 43%. Implementing the resulting networks under the BFV homomorphic encryption scheme using SEAL, we could reduce the execution time of an MNIST neural network by 80% and by 40% for a CIFAR neural network.
convolutional neural networks, quantisation, privacy-preserving machine learning, fully homomorphic encryption
## I Introduction
Homomorphic encryption (HE) allows performing calculations on encrypted data. This technique enables applications where data is processed in untrusted environments (e.g. a cloud environment) while ensuring that this environment does not learn anything about the data itself. As such, it is a promising technique to make privacy-preserving machine learning possible.
A downside of HE is that it significantly increases the size of encrypted data. As a result, encrypted operations are typically several orders of magnitude slower than their unencrypted counterparts. This work tries to accelerate neural network inference under homomorphic encryption by using quantisation techniques to reduce the data size and, thus, the computational cost.
Neural network frameworks generally use a floating-point representation to represent network parameters and intermediate variables. However, HE systems like BFV [2] encode only integers, requiring an additional conversion step to convert the floating-point neural network parameters to the integer HE variables. While it is possible to design neural networks that work solely with integer representations, previous works have only studied such networks in a non-HE related context [3, 4, 5].
In addition, this conversion is an essential step before porting it to hardware. For instance, a plaintext \(32\,\mathrm{bit}\) floating-point addition is \(30\times\) more energy-consuming 1 than an \(8\,\mathrm{bit}\) integer equivalent [6]. By using the conversion, we can select smaller HE parameters that lead to limited resource use and better management of corner cases. Therefore, making the behaviour of the system faster and more predictable in general.
Footnote 1: Energy consumption using a \(45\,\mathrm{nm}\) CMOS technology.
As calculations are performed in these non-HE integer-only networks, the sizes of the integer variables increase. To keep such networks manageable, the intermediate values are commonly scaled down to smaller numbers after each layer: the most significant bits are retained after each operation, while the least significant bits are discarded.
Unfortunately, these reduction operations are based on division or shift operations, which are not natively supported in HE schemes, so downscaling cannot easily be performed. Therefore, in neural network HE inference, the intermediate values grow throughout the inference, and the final calculations need to operate on very large integers. For instance, when all weights of a neural network are converted to \(32\,\mathrm{bit}\), a 10-layer CIFAR network produces integers with bit-sizes up to \(614\,\mathrm{bit}\). The maximum bit-length of these output integers is denoted the _'final integer width'_ (FIW), and we will show that this value significantly affects the overall computation cost.
Gilad-Bachrach et al. [7] implemented the first artificial feedforward neural network under homomorphic encryption using the HE scheme YASHE [8]. Note that an attack proposed by Albrecht et al. [9] reduced the security level of this scheme and is therefore considered broken in practice. Gilad-Bachrach et al. [7] proposed a specialised, HE-focussed _CryptoNets_ architecture for the MNIST dataset [10].
One of the downsides of the CPU implementations of CryptoNets is the high latency of \(250\,\mathrm{sec}\) for an MNIST image. It was improved by Brutzkus et al. [11] with the Low-Latency CryptoNets (LoLa) architecture. Using the BFV scheme, optimisations in the underlying HE library SEAL and a different approach to representing the ciphertext data, a latency of \(0.29\,\mathrm{sec}\) was reached, an improvement of \(93\times\) relative to CryptoNets. In addition, Brutzkus et al. [11] proposes variants of the LoLa network for processing the CIFAR-10 dataset [12]. They report an accuracy of 74.1% and a latency of \(730\,\mathrm{sec}\).
Badawi et al. [1] implemented the BFV scheme on GPUs. They propose two architectures, one smaller for MNIST and one more extensive for CIFAR. Accordingly, their CIFAR network boasts an accuracy of 77.55% and a latency of \(304.43\,\mathrm{sec}\).
In this work, we improve upon the state-of-the-art HE neural networks by considering advanced neural network quantisation techniques. We first investigate post-training quantisation, a
method typically used in the state-of-the-art, and show that there is a limit to how many intermediate variables can be scaled down without significantly affecting accuracy. We then show that quantisation-aware training can indeed be used to substantially scale down these intermediate variables without a similar accuracy penalty. In the end, we reduced the final integer width with 33% for MNIST and 43% for CIFAR, allowing a speedup with factors 80% and 40%, respectively, over typical 8-bit post-training quantisation networks as used in the state-of-the-art.
## II Preliminaries
### _Homomorphic encryption_
Homomorphic encryption enables performing arithmetic operations on encrypted data. Consider the following example: an asymmetric encryption system and two integers \(x\) and \(y\). They can be encrypted using the encryption key pk as \(c_{x}\!=\!\mathsf{Enc}(\mathsf{pk},\!x)\) and \(c_{y}\!=\!\mathsf{Enc}(\mathsf{pk},\!y)\). These two ciphertexts are sent to an untrusted server. The server can perform an operation \(\diamondsuit\) on the two ciphertexts, \(c_{xy}\!=\!c_{x}\!\diamondsuit c_{y}\), which is equivalent to an addition on the plaintexts. The result of this operation is then sent back to the user, who decrypts it with the decryption key sk to obtain the plaintext \(z\!=\!\mathsf{Dec}(\mathsf{sk},\!c_{xy})\), with \(z\!=\!x\!+\!y\). Altogether, the server learns nothing about the integers \(x\) and \(y\), as it never possesses the unencrypted data.
A limitation of this form of encryption is that it only allows certain operations, i.e. addition or multiplication of two ciphertexts. Execution of non-linear functions is normally performed using a polynomial approximation that uses only addition, subtraction, and multiplication. Moreover, a division in the HE schemes CKKS and BFV is theoretically possible, but it is costly and thus avoided in practice [13].
The biggest problem with the lack of a division operation is that variables grow during computation. For example, multiplying two \(8\,\mathrm{bit}\) integers yields a result of roughly \(16\,\mathrm{bit}\). In unencrypted neural network implementations, this variable can be divided by a power of two to get back to \(8\,\mathrm{bit}\), making it more manageable for the next layer. However, no such operation is possible in encrypted neural network inference, which leads to large intermediate and output integers. The maximum bit-length of these output integers is denoted the 'final integer width' (FIW).
The HE scheme must be instantiated with larger parameters to accommodate these larger variables, which comes at a significant cost. Once a certain variable size is reached, additional techniques are required to support such large representations. More specifically, to ensure a correct representation in the plaintext space during inference, a residue numeral system (RNS), based on the Chinese remainder theorem, is used to split the large numbers into several smaller ones. This leads to several smaller HE instances that can be run in parallel. Since each instance consumes computing resources, decreasing the variable sizes can significantly reduce the number of RNS instances and, thus, the computational cost.
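To illustrate the RNS idea, here is a minimal sketch in plain Python; the toy moduli are Mersenne primes chosen purely for illustration, not the NTT-friendly plaintext moduli an HE library would use (cf. Table III). Each instance computes on one residue independently, and the Chinese remainder theorem recombines the results.

```python
from math import prod

# Toy pairwise-coprime moduli; each RNS instance works on one residue.
MODULI = [8191, 131071, 2147483647]

def to_rns(x, moduli=MODULI):
    """Split a large integer into one small residue per instance."""
    return [x % m for m in moduli]

def from_rns(residues, moduli=MODULI):
    """Recombine per-instance residues via the Chinese remainder theorem."""
    m_all = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        n = m_all // m
        x += r * n * pow(n, -1, m)  # modular inverse of n mod m (Python 3.8+)
    return x % m_all

a, b = 123456789012, 987654321098
prod_res = [(ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI)]
assert from_rns(prod_res) == (a * b) % prod(MODULI)  # instances work independently
```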
### _Neural network_
A neural network is a machine learning technique consisting of a network of small interconnected computation units called neurons. These neurons can be adapted, which enables the network to 'learn' a specific, human-like task such as classifications of images. A neuron will take a number of inputs, perform a weighted sum over these inputs, and output a function of the result of this sum. Neurons are grouped to form layers, and different behaviour can be obtained depending on their configuration.
Since a typical division or non-linear function cannot be executed trivially under FHE, we use slightly adapted versions of the classical neural network layers. Dense and convolutional layers are directly representable under FHE, but the activation function is approximated by the square function \(f(x)\!=\!x^{2}\), resulting in a _square layer_. Moreover, the _scaled average pooling_ layer is replaced by an equivalent in which the inputs are summed but the division is omitted.
### _Architectures_
This work uses the two architectures developed by Badawi et al. [1] for homomorphic inference. These networks are used as test cases to research the effect of quantisation on homomorphic encryption inference. Both architectures omit the last (Sigmoid) activation function since it only maps the output to the unit interval. For a detailed description of the network, we refer the reader to the paper of Badawi et al. [1].
The first architecture used in this paper focuses on the MNIST dataset [10]. It is based on the HCNN [1] and consists of two convolutional, two square activation and one dense layer. The authors stated an accuracy of 99% for this architecture.
The second architecture is designed to classify the more complex CIFAR-10 dataset [12]. The 10-layer network makes extensive use of the scaled average pooling and square layers. The originally proposed HCNN architecture was slightly modified in our implementation by not using padding, which only slightly reduces accuracy: our floating-point model obtains 73.28%, while the original HCNN reports 77.8%.
## III Post-Training Quantisation (PTQ)
Usually, floating-point numbers with single or double precision are used to represent the weights and biases of a network. However, it is possible to convert these numbers to \(8\,\mathrm{bit}\) integers without a notable reduction in accuracy [14]. A further reduction in the representation might have a more detrimental effect on the neural network accuracy. Converting an existing (floating-point) neural network into a quantised (integer) version is called post-training quantisation (PTQ).
During PTQ, a real value \(r\!\in\![\alpha,\beta]\) is converted to a \(b\)-bit integer \(q\). The process is determined by two factors, the zero-point \(Z\) and the scale factor \(S\), using the following formula:
\[q\!=\!\left\lfloor\frac{r}{S}\!+\!Z\right\rfloor\!. \tag{1}\]
Dequantisation can be done through the formula \(r\!=\!S(q\!-\!Z)\), where the quantised value is converted back to its original scale.
The scale factor \(S\) determines the quantisation step size. The zero-point \(Z\) is the quantised value \(q\) corresponding to the real value \(r\!=\!0\) and positions the range of representable numbers optimally.
When \(Z\!\neq\!0\), we say the quantisation is asymmetric or affine. This quantisation explicitly uses the zero point, often set at \(Z\!=\!-\alpha\cdot(2^{b}-1)/(\beta-\alpha)\). A second option is a symmetric quantisation, which reduces the overhead of dealing with the zero
point by setting it to zero. Commonly, the values are mapped to a signed symmetric interval \([\alpha,\beta]\!=\![-2^{b-1},\!2^{b-1}\!-\!1]\), although an unsigned interval is also possible. Symmetric quantisation is a more limited but easier-to-handle quantisation technique.
To evaluate the effect of quantisation, we first determined the distribution of the parameters of both the MNIST and CIFAR networks, plotted in Figure 1. Since both networks have a symmetric, zero-mean parameter distribution, symmetric quantisation is the best candidate to convert the signed real numbers for both networks. To determine the ideal scale factor for the weights, three candidates are tested.
The first scale factor, \(S\!=\!1/(2^{b-1}\!-\!1)\), only considers the bit width; it takes no account of the size or distribution of the real numbers. The second scale factor, \(S\!=\!\max(|\mathbf{W}|)/(2^{b-1}\!-\!1)\), maps the largest absolute value onto the edge of the quantised interval. The third scale factor, \(S\!=\!(\beta-\alpha)/(2^{b}\!-\!1)\), maps the full observed range \([\alpha,\beta]\) of the weights onto the quantisation interval (cf. Table I). In all cases, extreme values outside the quantisation range are quantised to the edges of the quantisation interval.
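As a minimal NumPy sketch of this PTQ step (the helper names are ours), Equation 1 with \(Z=0\) and the three candidate scale factors looks as follows:

```python
import numpy as np

def scale_factor(w, b, mode):
    """Three candidate scale factors for symmetric b-bit quantisation."""
    if mode == "bitwidth":   # S = 1 / (2^(b-1) - 1)
        return 1.0 / (2 ** (b - 1) - 1)
    if mode == "max_abs":    # S = max(|W|) / (2^(b-1) - 1)
        return np.abs(w).max() / (2 ** (b - 1) - 1)
    if mode == "range":      # S = (beta - alpha) / (2^b - 1)
        return (w.max() - w.min()) / (2 ** b - 1)
    raise ValueError(mode)

def quantise(w, b, mode):
    """Eq. 1 with Z = 0; values outside the interval clip to its edges."""
    q = np.floor(w / scale_factor(w, b, mode))
    return np.clip(q, -(2 ** (b - 1)), 2 ** (b - 1) - 1).astype(np.int64)

w = np.random.normal(0, 0.05, size=(64, 64))  # stand-in weight tensor
for mode in ("bitwidth", "max_abs", "range"):
    print(mode, quantise(w, 8, mode).min(), quantise(w, 8, mode).max())
```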
To understand the influence of these different scale factors on the network, we built a Python framework that evaluates the effect of post-training quantisation on the accuracy and the FIW. The framework takes a neural network, converts each of the weights to an integer representation, and then executes a neural network inference. For these experiments, we process each of the \(10\,000\) images in the test set to determine the accuracy and FIW. The maximum over all individual final integer widths and the corresponding accuracies are reported in Table I.
Furthermore, we also reduce the sizes of the input coefficients, which yields an even lower final integer width. In all experiments, the MNIST data is scaled down from its typical \(8\,\mathrm{bit}\) to \(2\,\mathrm{bit}\). However, since CIFAR images are more complex, the same reduction could lead to unacceptable accuracies; therefore, we chose not to reduce the CIFAR inputs.
The results in Table I show that both networks can be quantised down to \(8\,\mathrm{bit}\) without an accuracy drop. For lower bit widths, the accuracy starts to drop. One of the reasons is that, in these cases, many of the weights are quantised to zero, which causes much of the information to 'disappear' and results in a diminished FIW.
## IV Quantisation-aware training (QAT)
In the previous section, we showed that neural networks can be quantised to \(8\,\mathrm{bit}\) integers, but that the accuracy degrades for a more drastic quantisation. To reduce the bit width of the network further, we can make the network aware of the quantisation during its training. Before training starts, the quantisation technique and parameters are chosen and introduced into the training graph as 'fake quantisation' nodes, which simulate the low-precision behaviour of the quantisation. These nodes quantise a real input using Equation 1 and dequantise it immediately afterwards, thus injecting the error that the quantisation would cause. Depending on the quantisation used, this method can result in a network with approximately the same accuracy as a full-precision network while using low-precision parameters.
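A minimal PyTorch sketch of such a fake-quantisation node is shown below, using a straight-through estimator so that the non-differentiable rounding is bypassed in the backward pass; this is a generic illustration, not Brevitas' actual implementation.

```python
import torch

class FakeQuantise(torch.autograd.Function):
    """Quantise-dequantise in the forward pass; pass gradients straight through."""

    @staticmethod
    def forward(ctx, w, bits):
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max() / qmax               # symmetric, max-abs scale
        q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
        return q * scale                           # dequantise: inject the error

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                   # straight-through estimator

w = torch.randn(16, 16, requires_grad=True)
loss = FakeQuantise.apply(w, 2).pow(2).sum()
loss.backward()                                    # gradients flow despite rounding
print(w.grad.shape)
```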
Using Brevitas [15], we trained the same networks as in the post-training quantisation experiments of the previous section. Brevitas is a library to develop and train quantisation-aware hardware-ready networks. We used its 'weights-only quantisation process', in which a quantisation error is exclusively injected in the weights.
The results can be seen on the right in Table I. For lower bit widths, the accuracy of the QAT is significantly better than the PTQ case, remaining approximately the same as the full precision network. One of the reasons is that quantisation-aware training will prevent parameter sparsity and ensure that each parameter is used correctly.
Table II compares our QAT networks to earlier works. Notably, the CryptoNets and HCNN implementations use PTQ techniques, but no QAT techniques. Using QAT, we can quantise the weights down to \(2\,\mathrm{bit}\), giving the network a much lower FIW with minimal to no drop in accuracy. Compared to a full-precision network, i.e. quantising the parameters to \(32\,\mathrm{bit}\) integers, the FIW is reduced by a factor of 8.2 for the MNIST network and a factor of 5 for the CIFAR network. Compared to the numbers presented by HCNN, our smallest networks have a 33% and 43% smaller final integer width for MNIST and CIFAR, respectively, while boasting similar accuracy.
## V Evaluation
In this section, we evaluate our newly developed quantised neural networks by implementing them using the Pyfhel [16] library, which is a software package that provides python-bindings for Microsoft's SEAL library [17]. An encrypted inference is executed using the integer-based BFV scheme. All of our tests are run using Python 3.9.13, Brevitas 0.7.1, Pyfhel 3.3.1 (using SEAL 3.7), on an Intel Xeon Silver 4208 CPU.
One of the most compelling optimisations in certain HE schemes was the introduction of batching or packing, as described by Smart and Vercauteren [18]. It provides a way to pack multiple plaintext messages into a single ciphertext as if it were a vector of plaintexts. In our implementation, we use batching to pack each input channel into a single ciphertext. A single ciphertext is used for a (black-and-white) MNIST image, and three ciphertexts are needed for an (RGB) CIFAR image.
Due to batching, we cannot implement a dot-product-based matrix-vector multiplication, since that would require access to the individual elements of a ciphertext. Therefore, rotation-based versions of each neural network layer are implemented based on previous works. Dathathri et al. [19] propose an algorithm to calculate a single convolution kernel on a subset of the input data; we adapted the algorithm further to apply an input kernel to a complete channel simultaneously. As proposed by Juvekar et al. [20], a rotation-based algorithm is used to
Fig. 1: Overview of the weight distribution of the MNIST and CIFAR architecture.
execute a matrix-vector multiplication for the dense layer. This algorithm will perform the multiplication using only vector addition, multiplications and rotations.
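To illustrate the idea, the following plain-NumPy mock-up performs a matrix-vector product in the diagonal style of Juvekar et al. [20], where `np.roll` stands in for the homomorphic rotation, so every operation used (rotation, elementwise multiplication, addition) has a direct ciphertext analogue; this is a conceptual sketch, not our encrypted implementation.

```python
import numpy as np

def rotation_matvec(A, x):
    """Matrix-vector product using only rotations, multiplies, and additions."""
    n = len(x)
    acc = np.zeros(n, dtype=A.dtype)
    for k in range(n):
        # k-th generalized diagonal of A, multiplied against the rotated vector.
        diag = np.array([A[i, (i + k) % n] for i in range(n)])
        acc += diag * np.roll(x, -k)  # np.roll mimics a ciphertext rotation
    return acc

A = np.random.randint(-3, 4, size=(8, 8))
x = np.random.randint(-3, 4, size=8)
assert np.array_equal(rotation_matvec(A, x), A @ x)
```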
When converting to an almost binary size (\(2\,\mathrm{bit}\)), extra sparsity is introduced, which we exploit to reduce the latency. Before encoding a weight vector during the encrypted inference, we check whether it is a zero vector; if it is, all the associated operations can be omitted. This results in a speedup of 28% between a \(4\,\mathrm{bit}\) and a \(2\,\mathrm{bit}\) network.
### _Homomorphic encryption parameter selection_
To determine suitable HE parameters, we first analyse the final integer width, which determines whether we need multiple instances. The SEAL library limits the maximum size of the plaintext modulus to \(60\,\mathrm{bit}\) for performance reasons. Given the outcomes of our QAT experiments, we need to represent larger plaintext spaces and therefore use a residue numeral system (RNS). An overview of the HE parameters used is given in Table III.
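To first order, the number of RNS instances follows from the FIW and the bit-size of each plaintext modulus. The sketch below reproduces the CIFAR instance counts of Table II, assuming the roughly \(21\,\mathrm{bit}\) moduli mentioned in the next subsection; constraints such as NTT-friendliness and batching support of the actual moduli are ignored here.

```python
import math

def rns_instances(fiw_bits, bits_per_modulus):
    """Minimum number of RNS instances whose plaintext moduli jointly cover the FIW."""
    return math.ceil(fiw_bits / bits_per_modulus)

# CIFAR FIWs from Table II with ~21-bit plaintext moduli:
for quant, fiw in [("8 bit", 205), ("4 bit", 153), ("2 bit", 124)]:
    print(quant, "->", rns_instances(fiw, 21), "instances")  # 10, 8, 6
```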
### _Results_
We report the sequential times for the various quantisations on the right of Table II. To account for the number of instances, the 'sequential time' is given, corresponding to the total time when each instance is executed sequentially; it reflects the total use of computing resources. For the CIFAR network, the work of Badawi et al. [1] uses ten instances, each with a plaintext size of around \(21\,\mathrm{bit}\). Using the same sizes, our smallest network (\(2\,\mathrm{bit}\)) requires only six instances.
For the MNIST architecture, the smallest network is 80% faster than the best \(8\,\mathrm{bit}\) PTQ network, due both to its smaller FIW and to its use of the additional sparsity of the weights. For the CIFAR architecture, we obtain a 40% speedup compared to the \(8\,\mathrm{bit}\) PTQ network, which uses the same quantisation as HCNN.
## VI Conclusion
The absence of a division operation in some fully homomorphic encryption schemes implies that variables keep growing during computations. In this work, we tested two main quantisation techniques to reduce the size of the internal variables, which in turn determines the computation cost. We first looked at the limitations of post-training quantisation and showed that there is a lower limit to the quantisation (in our case \(8\,\mathrm{bit}\)) before the accuracy significantly drops. To further reduce the variable sizes, we developed a quantisation-aware training framework. We reduced the final integer width by 33% for MNIST and 43% for CIFAR, compared to the state-of-the-art HCNN architecture. In our experiments, quantisation-aware training, which allows reducing the network weights down to \(2\,\mathrm{bit}\), yields an 80% and 40% speedup for the MNIST and CIFAR networks, respectively, over the typical \(8\,\mathrm{bit}\) weights obtained with post-training quantisation.
| Dataset | Network | Quantisation | Acc. [%] | FIW [bit] | Seq. time [min] | No. of inst. |
| --- | --- | --- | --- | --- | --- | --- |
| MNIST | CryptoNets [7] | 5-10 bit | 99 | 80 | - | 2 |
| MNIST | HCNN [1] | 4 bit | 99 | 43 | - | 1 |
| MNIST | Our work | 32 bit | 98.6 | 238 | 98.45 | 7 |
| MNIST | Our work | 8 bit | 98.51 | 70 | 41.63 | 3 |
| MNIST | Our work | 4 bit | 98.65 | 45 | 12.2 | 1 |
| MNIST | Our work | 2 bit | 98.46 | 29 | 8.78 | 1 |
| CIFAR | LoLa [11] | 8-9 bit | 74.1 | 93 | - | 4 |
| CIFAR | HCNN [1] | 8 bit | 77.55 | 218 | - | 10 |
| CIFAR | Our work | 32 bit | 73.28 | 614 | 12801 | 30 |
| CIFAR | Our work | 8 bit | 73.04 | 205 | 4267 | 10 |
| CIFAR | Our work | 4 bit | 72.49 | 153 | 3413 | 8 |
| CIFAR | Our work | 2 bit | 69.14 | 124 | 2560 | 6 |

TABLE II: Results of the quantisation-aware training and homomorphic inference.
| Network | Quantisation | PTQ \(S=\frac{\max(|\mathbf{W}|)}{2^{b-1}-1}\): Acc [%] / FIW [bit] | PTQ \(S=\frac{\beta-\alpha}{2^{b}-1}\): Acc / FIW | PTQ \(S=\frac{1}{2^{b-1}-1}\): Acc / FIW | QAT: Acc / FIW |
| --- | --- | --- | --- | --- | --- |
| MNIST | 32 bit | 98.43 / 237 | 98.43 / 231 | 98.43 / 233 | 98.41 / 238 |
| MNIST | 8 bit | 98.41 / 69 | 98.44 / 63 | 98.44 / 64 | 98.3 / 70 |
| MNIST | 3 bit | 94.39 / 32 | 44.03 / 26 | 79.04 / 27 | 98.3 / 38 |
| MNIST | 2 bit | 14.4 / 20 | 11.35 / 3 | 11.53 / 6 | 98.46 / 29 |
| CIFAR | 32 bit | 73.09 / 583 | 73.09 / 571 | 73.09 / 570 | 73.28 / 614 |
| CIFAR | 8 bit | 73.0 / 202 | 73.18 / 187 | 73.09 / 186 | 73.04 / 205 |
| CIFAR | 4 bit | 52.24 / 135 | 18.67 / 123 | 9.96 / 128 | 72.49 / 153 |

TABLE I: Results of the quantised models using post-training quantisation with different scale factors and quantisation-aware training (Brevitas), for both the MNIST and CIFAR architectures.
| Network | Quantisation | \(N\) | \(\log q\) | Plaintext moduli |
| --- | --- | --- | --- | --- |
| MNIST | 8 bit | \(2^{14}\) | 389 | 35184371138561, ... |
| MNIST | 4 bit | \(2^{14}\) | 389 | 35184371138561 |
| MNIST | 2 bit | \(2^{14}\) | 389 | 1073643521 |
| CIFAR | 8 bit | \(2^{15}\) | 825 | same as 4 bit + 8257537, 6946817 |
| CIFAR | 4 bit | \(2^{15}\) | 825 | same as 2 bit + ... |
| CIFAR | 2 bit | \(2^{15}\) | 825 | 1376257, 1769473, 2424833, 2752513, 3604481, 3735553 |

TABLE III: Used HE parameters.
2307.11957 | High-performance real-world optical computing trained by in situ
model-free optimization | Optical computing systems provide high-speed and low-energy data processing
but face deficiencies in computationally demanding training and
simulation-to-reality gaps. We propose a gradient-based model-free optimization
(G-MFO) method based on a Monte Carlo gradient estimation algorithm for
computationally efficient in situ training of optical computing systems. This
approach treats an optical computing system as a black box and back-propagates
the loss directly to the optical computing weights' probability distributions,
circumventing the need for a computationally heavy and biased system
simulation. Our experiments on diffractive optical computing systems show that
G-MFO outperforms hybrid training on the MNIST and FMNIST datasets.
Furthermore, we demonstrate image-free and high-speed classification of cells
from their marker-free phase maps. Our method's model-free and high-performance
nature, combined with its low demand for computational resources, paves the way
for accelerating the transition of optical computing from laboratory
demonstrations to practical, real-world applications. | Guangyuan Zhao, Xin Shu, Renjie Zhou | 2023-07-22T01:56:58Z | http://arxiv.org/abs/2307.11957v5 | # High-performance real-world optical computing trained by in situ model-free optimization
###### Abstract
Optical computing systems can provide high-speed and low-energy data processing but face deficiencies in computationally demanding training and a simulation-to-reality gap. We propose a model-free solution for lightweight in situ optimization of optical computing systems based on the score gradient estimation algorithm. This approach treats the system as a black box and back-propagates the loss directly to the optical weights' probabilistic distributions, hence circumventing the need for computation-heavy and biased system simulation. We demonstrate superior classification accuracy on the MNIST and FMNIST datasets through experiments on a single-layer diffractive optical computing system. Furthermore, we show its potential for image-free and high-speed cell analysis. The inherent simplicity of our proposed method, combined with its low demand for computational resources, expedites the transition of optical computing from laboratory demonstrations to real-world applications.
[http://dx.doi.org/XX.XXXXXX](http://dx.doi.org/XX.XXXXXX)
## 1 Introduction
Optical computing leverages the properties of light waves to facilitate high-speed data processing while reducing energy cost [1, 2, 3, 4, 5]; examples include optical spatial correlators [3, 6] and optical edge detectors [7]. Recent advances in automatic differentiation have enabled in silico training of large-scale optical computing weights, giving rise to realizations of diffractive neural networks [8, 9], optical reservoir computing [10, 11], and coherent nanophotonic circuits [12].
Training optical computing systems presents two significant challenges: an intensive computational process, and a performance disparity between simulation and reality when pre-trained weights are deployed on real-world systems [9, 13, 14]. The systems are typically trained in silico using differentiable simulators rooted in first-principles optics, an approach known as simulator-based training (SBT). While SBT has proven effective within the confines of the simulator, the performance in real systems is largely contingent on the simulator's fidelity. Factors such as misalignment and aberration, often omitted in simulations, can cause significant performance degradation when weights trained exclusively within the simulator are applied to real-world systems. To bridge the simulation-to-reality gap, physics-aware training (PAT) and hybrid training (HBT) have been introduced [15, 16]. Both strategies conduct the forward pass in the real system and back-propagate the loss through the simulator; consequently, the error measured in the real system during the forward pass refines the weight optimization more accurately than strictly in silico training [13].
Despite these advances, current in situ training methods still rely on a physics-based simulator during the backward pass. This setting has three drawbacks. First, the biased simulator prevents the training process from reaching optimal results. Second, the in silico simulation requires large memory and computation, preventing in situ training on edge devices with limited computing resources [17]. Third, model-based training strategies require the input object to be fully visible during training; capturing a high-fidelity image of this object imposes an additional experimental burden.
Here we propose an alternative to the above solutions that back-propagate errors through a simulator: an in situ, model-free optimization (MFO) method that uses a score gradient estimation algorithm [18, 19] and relies solely on the forward outputs of the real system to obtain gradients for updating the weights of the optical computing system. As shown in Fig. 1, our method treats the optical system as a black box and back-propagates the task-specific negative loss as a reward to the source weight distributions (Fig. 1b). This process only requires knowledge of the weights and forward outputs of the optical computing system, unlike the SBT and HBT methods, which require a model and images of the input objects (Fig. 1a). A tabular comparison of our work and prior methodologies is given in Tab. 1.
We demonstrate our method on a home-built single-layer diffractive optical computing system. Experimental results show that our MFO method outperforms hybrid training on the commonly used MNIST and FMNIST datasets [9, 15, 16] (Sec. 3A). As a proof-of-concept, we experimentally show that our MFO-trained optical computing system can classify white blood cell phase maps with a testing accuracy of 73.8% (Sec. 3B), making it a promising method for image-free and high-speed cell analysis. Lastly, we show that our pipeline consumes only \(\sim\frac{1}{100}\) of the computing resources of the HBT in situ method, by avoiding computation-intensive modeling of the wave propagation process (Sec. 3C).
## 2 Methodology
In what follows, we detail the problem setup of training the optical computing system and our solution. We introduce the background and problem formulation in Subsec. A and the conventional solution of simulator-based training (SBT) in Subsec. B. We then present our solution, model-free optimization (MFO) for training the optical computing system, in Subsec. C. Finally, we describe the optical computing system and its simulator, which we use to demonstrate the performance of our method, in Subsec. D.
### Problem setup
We are interested in learning the optimal optical weights \(w\in\mathbb{R}^{H}\) for the optical computing system on desired task with training dataset \(\mathcal{D}=\{x_{i},y_{i}\}_{i=1}^{N}\), where \(N\) is the size of the dataset, \(H\) is the number of trainable parameters in \(w\), and \(x\) and \(y\) denote the input and target of interest, respectively. A function \(f_{sys}(\cdot;w)\) maps \(x\to y\) through this optical computing system with \(w\). Specifically, in the image classification task based on the diffractive optical computing system we work on, \(f_{sys}\) denotes the optical mapping from the input image \(x\) to the output label \(y\), and \(w\) is the phase-valued optical computing weight.
During training, we minimize the cost function \(J(w)\), the expected loss over the entire training dataset \(\mathcal{D}\):

\[\begin{aligned}
w^{\star}&=\operatorname*{arg\,min}_{w}J(w), &&\text{(1a)}\\
&=\operatorname*{arg\,min}_{w}\mathbb{E}[\mathcal{L}(\mathcal{D},w)], &&\text{(1b)}\\
&=\operatorname*{arg\,min}_{w}\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(f_{sys}(x_{i};w),y_{i}), &&\text{(1c)}
\end{aligned}\]
where \(\mathcal{L}\) is the task-specific loss function; we use the cross-entropy loss [20], since we deal with image classification tasks throughout this paper.
We use gradient descent-based search to find optimal \(w\) to minimize the objective function \(J(w)\):
\[w=w-\alpha\nabla_{w}J(w), \tag{2}\]
where \(\nabla_{w}\) denotes the gradient operator that collects all partial derivatives of a function with respect to the parameters in \(w\), and \(\alpha\) is the learning rate.
It is straightforward to use the backpropagation method [14] to take the gradient through \(f_{sys}\) and find the gradient \(\nabla_{w}J(w)\) as:
\[\nabla_{w}J(w)=\frac{1}{N}\sum_{i=1}^{N}\nabla_{w}\mathcal{L}(f_{sys}(x_{i},w ),y_{i}), \tag{3}\]
when we have an accurate differentiable model of \(f_{sys}\). This is the case for digital neural networks, but not when we train a real-world optical computing system. **Thus, this paper's critical aim is to find an accurate estimate of the gradient \(\nabla_{w}J(w)\) for updating the optical computing weights \(w\) in a real-world system.**
### In silico simulator-based training (SBT)
**Back-propagate through the simulator \(\hat{f}_{sys}\)**. In a real-world optical computing system, we do not have an exact functional expression of \(f_{sys}\); **simulator-based training** therefore builds a simulator \(\hat{f}_{sys}\) as a differentiable approximation of \(f_{sys}\) (Fig. 1a). The naive training strategy
| | **SBT [9]** | **HBT [16]** | **MFO (ours)** |
| --- | --- | --- | --- |
| In situ | No | Yes | Yes |
| Computation overhead | High | High | Low |
| Model-free | No | No | Yes |
Table 1: **Comparison of strategies on training optical computing systems** along the axes of in situ training capability (in situ), in silico computation overhead (computation overhead), and requirements on a physics-based simulator and knowledge of input objects (model-free).
Figure 1: **Model-free training of the optical computing system.** (a) The blue highlights show that conventional training of the optical computing system relies on a physics-based simulator \(\hat{f}_{sys}\), which substitutes for the inaccessible \(f_{sys}\) of the real system. The training process back-propagates the loss through the simulator \(\hat{f}_{sys}\) to update the weights \(w\); this is the basis of the SBT and HBT methods. (b) The brown highlights show that our model-free training strategy back-propagates the training error to the distribution parameters \(\theta\), bypassing the reliance on a correct differentiable model of the optical system \(f_{sys}\) and on knowledge of the inputs \(\{x_{i}\}_{i=1}^{N}\).
is to substitute \(f_{sys}\) in Eq. 3 with the simulator \(\hat{f}_{sys}\) and apply in silico training on the simulator:
\[\nabla_{w}J(w)=\frac{1}{N}\sum_{i=1}^{N}\nabla_{w}\mathcal{L}(\hat{f}_{sys}(x_{i },w),y_{i}). \tag{4}\]
The result \(\nabla_{w}J(w)\) is used in Eq. 2 to update the parameters \(w\). After training, the optimized \(w\) is uploaded to the real optical computing system \(f_{sys}\) to test the performance.
**Simulation-to-reality gap**. The aforementioned simulator-based training relies on backpropagation through the simulator \(\hat{f}_{sys}\). The "sim2real" gap is small (i.e., the gradient \(\nabla_{w}J(w)\) is an accurate estimate) when the simulator \(\hat{f}_{sys}\) closely matches \(f_{sys}\). However, this assumption does not hold in many optical computing prototypes, where inadequate modeling and misalignment between optical elements degrade the performance during the "sim2real" transfer. We use the simulator described in Subsec. D to assess the adverse effect of misalignment on image classification by measuring the drop in classification accuracy when various misalignments are introduced into a well-trained ideal optical computing system. For instance, we show with simulations in Fig. 2 that laterally misaligning the optical computing layer by only \(41.1\,\mu m\) reduces the classification accuracy by \(31.2\%\).
### In situ model-free optimization (MFO)
Our solution to the aforementioned "sim2real" gap of Subsec. B is to learn the optical weights \(w\) in situ with model-free optimization. In situ learning, on the one hand, gives us access to the output of \(f_{sys}\) and, on the other hand, is feasible on the hardware side, as spatial light modulators can serve as programmable devices to update the optical weights \(w\). The challenging part is designing a training strategy that efficiently uses the actual system function \(f_{sys}\) to construct an unbiased gradient estimator. Here, we use the score gradient estimator to calculate the gradient [21, 22] for the backward update of the parameters in \(w\), while circumventing the construction of \(\hat{f}_{sys}\), a biased and resource-intensive numerical model of \(f_{sys}\).
**Back-propagate through the weights distribution \(p\)**. In our score gradient estimation for model-free optimization, we model optical computing weights \(w\) as a random variable that follows a parameterized distribution: \(w\sim p(w|\theta)\) and rewrite the objective function in Eq. 1c as a probabilistic objective function:
\[\begin{aligned}
\operatorname*{arg\,min}_{\theta}J(\theta)&=\operatorname*{arg\,min}_{\theta}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{p(w|\theta)}[\mathcal{L}(f_{sys}(x_{i},w),y_{i})], &&\text{(5a)}\\
&=\operatorname*{arg\,min}_{\theta}\frac{1}{N}\sum_{i=1}^{N}\int p(w|\theta)\mathcal{L}(f_{sys}(x_{i},w),y_{i})\,\mathrm{d}w. &&\text{(5b)}
\end{aligned}\]
The probabilistic distribution \(p(w|\theta)\) is continuous in its domain and differentiable with respect to its distributional parameters \(\theta\). Accordingly, the original goal of optimizing \(w\) is reformulated as finding the most likely distribution \(p(w|\theta)\) that minimizes the objective function in Eq. 5. Specifically, in our work, we model \(p\) as a multivariate normal distribution with \(\theta=\{\mu,\sigma^{2}\}\) and \(p(w|\theta)=\mathcal{N}(w;\mu,\sigma^{2})\), as we optimize the continuous phase map to be uploaded onto the SLM.
To update distribution parameter \(\theta\) with the gradient descent Eq. 2, we take the gradient on Eq. 5:
\[\begin{aligned}
\nabla_{\theta}J(\theta)&=\nabla_{\theta}\frac{1}{N}\sum_{i=1}^{N}\int p(w|\theta)\mathcal{L}(f_{sys}(x_{i},w),y_{i})\,\mathrm{d}w, &&\text{(6a)}\\
&=\frac{1}{N}\sum_{i=1}^{N}\int\mathcal{L}(f_{sys}(x_{i},w),y_{i})\nabla_{\theta}p(w|\theta)\,\mathrm{d}w, &&\text{(6b)}\\
&=\frac{1}{N}\sum_{i=1}^{N}\int p(w|\theta)\mathcal{L}(f_{sys}(x_{i},w),y_{i})\nabla_{\theta}\log p(w|\theta)\,\mathrm{d}w, &&\text{(6c)}\\
&=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{j=1}^{M}\mathcal{L}(f_{sys}(x_{i},w_{j}),y_{i})\nabla_{\theta}\log p(w_{j}|\theta), &&\text{(6d)}\\
&=\frac{1}{M}\sum_{j=1}^{M}\Big[\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(f_{sys}(x_{i},w_{j}),y_{i})\Big]\nabla_{\theta}\log p(w_{j}|\theta), &&\text{(6e)}
\end{aligned}\]
where \(M\) is the number of samples drawn from the distribution \(p(w|\theta)\), and \(r(w_{j})=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(f_{sys}(x_{i},w_{j}),y_{i})\) is the negative reward associated with each weight \(w_{j}\) sampled from \(p(w|\theta)\). Equation 6e is the score gradient estimator, with score function \(\nabla_{\theta}\log p(w_{j}|\theta)\); it is widely used in other areas, such as policy-gradient algorithms in reinforcement learning [19] and diffusion models [23].
_Variance reduction._ The main risk of using such a gradient estimator is the high variance introduced by the Monte Carlo integration step that turns Eq. 6c into Eq. 6d. This step approximates the integral in Eq. 6c by first drawing \(M\) independent samples \(\{w_{j}\}_{j=1}^{M}\) from the distribution \(p(w|\theta)\) and then averaging the function evaluated at these samples. Such a sampling-based integration has high variance because different sets of random samples may lead to significantly different integral estimates. We reduce the variance by subtracting from each \(r(w_{j})\) the baseline value \(\bar{r}=\frac{1}{M}\sum_{j=1}^{M}r(w_{j})\):

\[\nabla_{\theta}J(\theta)=\frac{1}{M}\sum_{j=1}^{M}(r(w_{j})-\bar{r})\nabla_{\theta}\log p(w_{j}|\theta). \tag{7}\]
Figure 2: **System misalignment in a real optical computing system degenerates the performance of the optical computing system trained solely with a physics-based simulator.** (a) Accuracy drops to \(36.4\%\) from \(82.2\%\) when having a misaligned rotation angle \(\Delta\phi_{I}=0.01^{\circ}\). (b) Accuracy drops to \(51.0\%\) from \(82.2\%\) when the x’-axis misalignment of the optical computing layer \(\Delta d_{c}\) is \(41.1\,\mu m\). (c) Accuracy decreases to \(50.9\%\) from \(82.2\%\) when the y’-axis misalignment of the output layer \(\Delta d_{o}\) is \(62.4\,\mu m\).
Subtracting the baseline in Eq. 7 reduces the variance of the gradient estimate while leaving it unbiased [24].
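To make the update concrete, below is a minimal NumPy sketch of one MFO step under our elementwise Gaussian parameterization. For \(p(w|\theta)=\mathcal{N}(\mu,\sigma^{2})\), the score functions are \(\nabla_{\mu}\log p=(w-\mu)/\sigma^{2}\) and \(\nabla_{\sigma}\log p=((w-\mu)^{2}-\sigma^{2})/\sigma^{3}\); the quadratic black-box objective is a stand-in for the real system \(f_{sys}\), and for convenience the sketch ascends the (positive) reward rather than descending the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def blackbox_reward(w):
    """Stand-in for the negative loss measured on the real system f_sys."""
    return -np.sum((w - 1.0) ** 2)  # optimum at w = 1

def mfo_step(mu, sigma, M=64, lr=0.05):
    """One score-gradient update of theta = (mu, sigma) with a baseline (Eq. 7)."""
    w = rng.normal(mu, sigma, size=(M,) + mu.shape)        # sample M weight maps
    r = np.array([blackbox_reward(wj) for wj in w])        # in situ forward passes
    adv = (r - r.mean())[:, None]                          # baseline-subtracted reward
    grad_mu = np.mean(adv * (w - mu) / sigma**2, axis=0)   # score w.r.t. mu
    grad_sigma = np.mean(adv * ((w - mu) ** 2 - sigma**2) / sigma**3, axis=0)
    return mu + lr * grad_mu, np.maximum(sigma + lr * grad_sigma, 1e-3)

mu, sigma = np.zeros(8), np.ones(8)
for _ in range(200):
    mu, sigma = mfo_step(mu, sigma)
print(mu.round(2))  # approaches the optimum at 1
```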
**Training recipe of MFO.** During training, we sample a batch of phase-valued optical computing weights \(\{w_{j}\}_{j=1}^{M}\) from the distribution: \(w_{j}\sim p(w_{j}|\theta)\). Then we upload the sampled weights onto the optical computing layer and test the weights with inputs from the dataset \(\mathcal{D}\). After that, we calculate the rewards and update the distribution parameter \(\theta\) through Eqs. 2 and 7. The algorithm iterates these steps until convergence. After minimizing the objective function Eq. 5, we export \(w_{j}\) with the smallest \(r(w_{j})\) tested on the validation set as the output result of \(w^{\star}\). In practice, this performs better on training and validation sets than setting \(w^{\star}\) as the sampled mean from the last batch of samples. The algorithmic overview of the training recipe is shown in Algorithm 1.
```
1:  Input: classification dataset \(\mathcal{D}=\{x_{i},y_{i}\}_{i=1}^{N}\), learning rate \(\alpha\), number of sampled weights \(M\), optical computing system \(f_{sys}\), distribution parameters \(\theta=\{\mu,\sigma^{2}\}\), loss function \(\mathcal{L}\), epochs \(K\).
2:  Output: optimized optical computing weight \(w^{\star}\).
3:  for \(k\) in range \(K\) do
4:      Sample \(\{w_{j}\}_{j=1}^{M}\) from the distribution \(p(w|\theta)\).
5:      \(\triangleright\) in situ: evaluate \(\{w_{j}\}_{j=1}^{M}\) on the real system.
6:      for \(j\) in range \(M\) do
7:          \(r(w_{j})\leftarrow\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(f_{sys}(x_{i},w_{j}),y_{i})\).
8:      \(\triangleright\) in silico: update \(\theta\).
9:      Calculate \(\Delta\theta\) via Eq. 7.
10:     \(\theta\leftarrow\theta-\alpha\Delta\theta\).
11: \(w^{\star}\leftarrow w_{j}\) with the smallest \(r(w_{j})\) on the validation set.
```
**Algorithm 1** Algorithm overview of MFO.
### Experiment and simulation detail of our diffractive optical computing system
**Experimental setup of the real optical computing system \(f_{sys}\).** We built a single-layer optical computing system to validate the effectiveness of the proposed training strategy (see Fig. 3). In the setup, a laser field \(u_{laser}\) is incident on the input layer carrying the object field \(u_{obj}\). The light field reflects from the input layer onto the optical computing layer \(u_{w}\) and is then detected by the sensor in the output layer. The object and the optical computing weights are realized using spatial light modulators (SLMs). Specifically, the lengths \(d_{IC}\) and \(d_{CO}\) of \(Prop_{IC}\) and \(Prop_{CO}\) in Fig. 3 are \(215.1mm\) and \(201.6mm\), respectively. The reproducibility of the hardware system is discussed in Supplement Sec. S.1. Since it is difficult to manually align multiple components with high accuracy, and since we have programmable optical devices, we use digital homography-based registration to compensate for manual alignment; more details are given in Supplement Sec. S.5.
**Differentiable physics-based simulator \(\hat{f}_{sys}\)**. We construct an ideal physics-based simulator \(\hat{f}_{sys}\) of the aforementioned optical computing system \(f_{sys}\) as a sandbox for testing different training algorithms. This simulator is also used inside the design loop of SBT and HBT, which serve as baselines for comparison. Since our system only involves free-space wave propagation \(\hat{f}_{prop}\), wavefront modulation \(\hat{f}_{mod}\), and sensor detection \(\hat{f}_{deci}\), we build the optical computing simulator by stacking these three optical modules as building blocks. The module functions are:
\[\hat{f}_{prop}(u_{in},z):u_{out}=\mathcal{F}^{-1}(\mathcal{F}(u_{ in})\times\mathcal{F}(h_{prop}(z))), \tag{8a}\] \[\hat{f}_{mod}(u_{in},u_{element}):u_{out}=u_{in}*u_{element},\] (8b) \[\hat{f}_{deci}(u_{in}):I_{out}=|u_{in}|^{2}, \tag{8c}\]
where \(h_{prop}(z)=\frac{e^{ikz}}{i\lambda z}e^{\frac{ik}{2z}(x^{2}+y^{2})}\) is the propagation kernel under the Fresnel approximation [25] with propagation distance \(z\), wavelength \(\lambda\), and angular wave number \(k=2\pi/\lambda\); \(u_{element}\) denotes the wavefront modulation from the programmable optical devices, and \(\mathcal{F}\) denotes the Fourier transform.
Based on the modules in Eq. 8, the simulator \(\hat{f}_{sys}=\{\hat{f}_{mod},\hat{f}_{prop},\hat{f}_{mod},\hat{f}_{prop},\hat {f}_{deci}\}\) is constructed by chaining the building blocks \(1-5\):
\[1.u_{out} =\hat{f}_{mod}(u_{laser},u_{obj}), \tag{9a}\] \[2.u_{out} =\hat{f}_{prop}(u_{out},d_{IC}),\] (9b) \[3.u_{out} =\hat{f}_{mod}(u_{out},u_{w}),\] (9c) \[4.u_{out} =\hat{f}_{prop}(u_{out},d_{CO}),\] (9d) \[5.I_{cam} =\hat{f}_{deci}(u_{out}), \tag{9e}\]
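As a concrete illustration of Eqs. 8 and 9, the following minimal NumPy sketch chains the three building blocks into \(\hat{f}_{sys}\). The grid size, pixel pitch, and wavelength are placeholder assumptions (only \(d_{IC}\) and \(d_{CO}\) come from the setup above), and we use the analytic Fresnel transfer function, a standard equivalent to Fourier-transforming the spatial kernel \(h_{prop}(z)\).

```python
import numpy as np

# Assumed simulation grid (not the paper's actual values)
N_PIX, PITCH, LAM = 512, 8e-6, 532e-9   # pixels, pixel pitch (m), wavelength (m)
K = 2 * np.pi / LAM

def f_prop(u_in, z):
    """Fresnel free-space propagation via the transfer-function (FFT) method (Eq. 8a)."""
    fx = np.fft.fftfreq(N_PIX, d=PITCH)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function: exp(ikz) * exp(-i*pi*lambda*z*(fx^2 + fy^2))
    H = np.exp(1j * K * z) * np.exp(-1j * np.pi * LAM * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u_in) * H)

def f_mod(u_in, u_element):
    """Wavefront modulation by an SLM pattern (Eq. 8b, element-wise product)."""
    return u_in * u_element

def f_det(u_in):
    """Intensity detection at the sensor (Eq. 8c)."""
    return np.abs(u_in) ** 2

def f_sys_hat(u_laser, u_obj, u_w, d_ic=0.2151, d_co=0.2016):
    """Chained simulator of Eq. 9: input layer -> computing layer -> sensor."""
    u = f_mod(u_laser, u_obj)
    u = f_prop(u, d_ic)
    u = f_mod(u, u_w)
    u = f_prop(u, d_co)
    return f_det(u)
```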
## 3 Results
We experimentally evaluate the performance of our MFO method on the open-source MNIST and FMNIST datasets in Subsec. A. We also demonstrate the MFO method on a novel application of stain-free classifying white blood cells in Subsec. B. We then illustrate MFO's advantage of memory- and computation-efficient training in Subsec. C.
Figure 3: **Experimental setup of a single-layer optical computing system.** A light source is incident from the top left, passing through the polarizer and the first beam splitter (BS1), and then reflecting from the input layer, which displays the phase object via phase delays on SLM1. The reflected light then passes through BS1 and beam splitter 2 (BS2) and arrives at the optical computing layer, where SLM2 uploads the optical processing weights. The light reflected from the modulator then arrives at the output layer, where we calculate the task-specific loss based on the arriving signal. We label the last two paths as \(Prop_{IC}\) and \(Prop_{CO}\), respectively, in the figure.
### MFO outperforms hybrid training (HBT) in the real system on the MNIST and FMNIST datasets
We conduct experiments training the single-layer optical computing system (described in Fig. 3) on two classical image classification datasets: MNIST [26] and FMNIST [27]. We include the in silico SBT and in situ HBT methods as comparison baselines. Table 2 quantitatively shows that our method achieves higher classification accuracy than the HBT and SBT methods on both datasets in the experiments. The SBT method performs poorly due to the gap between the simulator and the real system. The HBT method suffers from the bias between \(\hat{f}_{\text{sys}}\) and \(f_{\text{sys}}\) in the backward process, while MFO bypasses the bias-sensitive modeling and updates gradients solely with \(f_{\text{sys}}\). Figure 4 and Fig. S.3 visualize some experimental outputs and confusion matrices using the MFO and HBT methods, respectively.
See Supplement Sec. S.4 for the details of the MNIST and FMNIST datasets. Training details are in Supplement Sec. S.2. The detailed description of the HBT method is in Supplement Sec. S.6.
### Application: all-optical classification on cellular dataset
For the first time, we demonstrate the capability of an optical computing system for stain-free cell analysis, trained by our MFO algorithm (Fig. 5). We work on white blood cells (WBCs), whose abnormal subtype percentages indicate immune-system malfunction or infectious disease [30, 31]. We include details of the WBC phase map dataset in Supplement Sec. S.4. Previously, researchers used machine learning methods to classify WBC subtypes, including monocyte, granulocyte, B cell, and T cell, by their morphology in a stain-free way [28, 32]. However, the analysis process is computationally heavy and time-consuming. Here, we accelerate the stain-free cell analysis process via computing with light. Our MFO method achieves a training/validation/testing classification accuracy of 72.1%/73.3%/75.8% when classifying 4 types of WBC, exceeding that of the HBT method (Fig. 5c).
| Method | MNIST Train | MNIST Val | MNIST Test | FMNIST Train | FMNIST Val | FMNIST Test |
|---|---|---|---|---|---|---|
| Ideal | 92.7% | 84.3% | 82.2% | 85.6% | 79.9% | 76.4% |
| SBT | 81.9% | 74.4% | 69.3% | 68.3% | 64.1% | 60.9% |
| HBT | 81.9% | 75.5% | 72.8% | 68.3% | 68.7% | 65.8% |
| MFO (Ours) | **83.3%** | **77.8%** | **73.6%** | **74.0%** | **71.1%** | **70.4%** |
Table 2: **Performance comparison on MNIST and FMNIST datasets.** Results of the ideal mode are from the simulator, whose parameters are determined from experiments while we impose no misalignment on the simulator. The lower three rows are experimental results from SBT, HBT, and MFO. Our method outperforms the SBT and HBT methods on the MNIST and FMNIST datasets in experiments.
Figure 4: **Visualization of experimental outputs and confusion matrices of the optical computing system trained with MFO.** (a) An input phase object (digit '2') from the MNIST dataset is modulated by the optical computing layer with weight \(w\) trained using MFO. The system correctly predicts the input as digit '2', as the output image has the largest intensity at the region corresponding to digit '2'. We repeat this process on the whole train set. (b) The confusion matrix on the MNIST dataset, with a training accuracy of 83.8%. (c) An example of a 'pullover' from the FMNIST dataset is correctly predicted. (d) Confusion matrix on the FMNIST dataset, with a training accuracy of 74.0%.
Furthermore, Fig. 5d shows that the inference enabled by optical computing is almost instantaneous (\(\frac{d_{IC}+d_{CO}}{c}=1.4\,ns\), where \(c\) is the speed of light), compared to the \(1.7\,ms\) of ResNet10, the electronic machine learning model used in [28]. We currently need 1 more millisecond of in silico computation of the region intensities corresponding to different classes to obtain the prediction. This step can be skipped if we use single-photon avalanche diode (SPAD) [33] point detectors to count the corresponding regions' cumulative signals. Although the performance of our single-layer linear optical computing system is not yet on par with the electronic neural network, which reaches a testing classification accuracy of \(90.5\%\) [28], the ultra-high inference speed together with the \(>70\%\) classification accuracy points to an exciting direction: further increasing the complexity of our optical computing system to improve its absolute accuracy in classifying the cells.
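As a quick sanity check using the propagation distances from Sec. 2, the light-travel time through the system is
\[
\frac{d_{IC}+d_{CO}}{c}=\frac{0.2151\,\mathrm{m}+0.2016\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m/s}}\approx 1.39\,\mathrm{ns},
\]
consistent with the quoted \(1.4\,ns\).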
### Advantage: memory- and computation-efficient training enabled by MFO
Our MFO method has an advantage over other training algorithms in GPU time and memory efficiency, in addition to the prediction accuracy discussed in the previous subsection. The SBT and HBT methods compute \(\hat{f}_{\text{sys}}(x,w)\) and \(\nabla_{w}\hat{f}_{\text{sys}}\) (Fig. 1a) for each input \(x\) in silico, which requires substantial in silico computation resources. In contrast, our MFO method executes the calculation of \(f_{\text{sys}}(x,w)\) in the light-speed real optical computing system, rather than in the computation- and memory-heavy simulator \(\hat{f}_{\text{sys}}(x,w)\). The only step of our MFO method that consumes in silico computational resources is the one described in Eq. 7, where we calculate the score gradient.
Figure 5: **All-optical cell classification enabled by MFO training.** (a) Our trained optical computing system classifies four WBC subtypes, including B cell, T cell, monocyte, and granulocyte, in a stain-free manner. The system uses the stain-free phase information of the WBCs (i) to perform the classification all-optically (ii). The output plane of the system has different regions corresponding to different classes. We measure the intensity values in these regions (iii) and choose the class with the highest intensity as the prediction (iv). (b) shows that non-morphological features, including size and dry mass, cannot separate the four subtypes well. In contrast, (c)(i) shows that the optical computing system trained with our MFO method can classify the WBC subtypes with a training/validation/testing accuracy of \(72.1\%/73.3\%/73.8\%\), which is higher than that of the HBT method. (c)(ii) shows the confusion matrix of MFO results on the training set. (d) The optical computing system produces its output at the speed of light (\(1.4\,ns\)). This is faster than ResNet10 [28, 29], which has an inference time of \(1.7\,ms\).
Figure 6: **MFO saves in silico computing resources during training.** We compare the GPU memory (a) and time (b) consumption of HBT and MFO methods for training one batch of inputs with batch sizes of \(B=4\) and \(B=32\). MFO is more efficient than HBT during the in situ training process in memory and time. Moreover, MFO’s resource consumption does not increase as the batch size increases, unlike HBT.
The in silico computational resource consumption in this step is low because it scales only with the dimension of \(w\) and is independent of the complexity of the system's light transport \(\hat{f}_{sys}\).
We compare our MFO method with HBT in GPU memory and time usage in Fig. 6. Our MFO method requires far less GPU time and memory than the HBT method during the training.
## 4 Discussion
### Limitation: MFO exhibits the curse of dimensionality
Our method is not without its limits. MFO training relies on Monte Carlo integration and thus inherits the _curse of dimensionality_ from Monte Carlo integration [34]. That is, the number of samples \(M\) needed to estimate the integral in Eq. 6c to a given level of accuracy grows exponentially with \(H\), the number of input variables (i.e., the dimensionality) of the function. This is discussed and partially alleviated with _variance reduction_ in the previous Sec. 2C. However, the MFO strategy presented in this paper, though unbiased and memory-efficient, is still sample-inefficient. We need to either limit the number of trainable parameters \(H\), which is also the search space size, or sample a large number of varied optical computing weights \(\{w_{j}\}_{j=1}^{M}\) from the distribution \(p(w_{j}|\theta)\) in every iteration to make MFO's gradient less noisy. The former limits the design DOF of our method, while the latter requires more executions on the real system, which prolongs the training time.
We quantitatively investigate the influence of this limitation in Fig. 7 with a small dataset, visualizing how \(M\) and \(H\) impact the training performance of MFO. In the investigation, we limit the training dataset to 200 samples from 4 FMNIST classes, as MFO training in the simulator takes a long time. As shown in Fig. 7, MFO requires \(M\geq 128\) to achieve a training accuracy of 94% given a search space size of \(H=128^{2}\). Moreover, our MFO method fails catastrophically when the search space size \(H\) is increased beyond \(128^{2}\) while the sampling size \(M\) is kept fixed at 128.
### Future directions
Exploring the scalability of the optical computing system to more complex optical structures, along with integrating more layers and non-linear activation functions, could potentially enhance absolute performance.
Future research could also consider employing more advanced techniques related to Monte Carlo integration to reduce the training variance discussed in the previous Subsec. A, which we anticipate could substantially broaden the viable search space, thus further empowering the MFO approach. These include using more advanced sampling strategies [35] or integrating MFO with the SBT methods [36]. The latter makes a trade-off between the model bias and sampling variance.
### Conclusion
To conclude, our study underscores the effectiveness of a model-free strategy in training optical computing systems in situ, manifesting considerable potential in computational efficiency and reducing the simulation-to-reality performance gap. Although the study does not focus entirely on absolute image classification accuracy as it is based on a simple single-layer diffractive optical computing system, it shows relative improvements compared to the existing training strategies, indicating that our strategy is a potentially valuable approach. The model-agnostic nature of our technique may become even more beneficial when implemented in intricate optical systems, representing a robust and versatile alternative to current strategies. It promises a strong foundation for exploring and practically implementing optical computing in real-world applications such as high-speed cell analysis.
Hong Kong General Research Fund (14209521); Hong Kong Innovation and Technology Fund (ITS/178/20FP & ITS/148/20); Croucher Foundation (CM/CT/CF/CIA/0688/19ay).
We thank Cheng Zheng for the discussions in the early stage of the work.
G.Z. conceived the project, derived the formulation, and built the backbone code and system. X.S. helped with the code writing and system setup and collected the simulation and experiment results. R.Z. supervised the project. G.Z. and X.S. wrote the manuscript with comments and edits from R.Z.
## Data availability
Raw data underlying the results presented in this paper are not publicly available but can be obtained from the authors upon request.
The code regarding this research will be released upon publication.
The authors declare no conflict of interest.
See Supplement 1 for supporting content.
|
2303.12771 | Procedure for improving cross-resonance noise resistance using
pulse-level control | Current implementations of superconducting qubits are often limited by the
low fidelities of multi-qubit gates. We present a reproducible and
runtime-efficient pulse-level approach for calibrating an improved
cross-resonance gate CR($\theta$) for arbitrary $\theta$. This CR($\theta$)
gate can be used to produce a wide range of other two-qubit gates via the
application of standard single-qubit gates. By performing an interleaved
randomised benchmarking experiment, we demonstrate that our approach leads to a
significantly higher noise resistance than the circuit-level approach currently
used by IBM. Hence, our procedure provides a genuine improvement for
applications where noise remains a limiting factor. | David Danin, Felix Tennie | 2023-03-22T17:35:04Z | http://arxiv.org/abs/2303.12771v1 | # Procedure for improving cross-resonance noise resistance using pulse-level control
###### Abstract
Current implementations of superconducting qubits are often limited by the low fidelities of multi-qubit gates. We present a reproducible and runtime-efficient pulse-level approach for calibrating an improved cross-resonance gate CR(\(\theta\)) for arbitrary \(\theta\). This CR(\(\theta\)) gate can be used to produce a wide range of other two-qubit gates via the application of standard single-qubit gates. By performing an interleaved randomised benchmarking experiment, we demonstrate that our approach leads to a significantly higher noise resistance than the circuit-level approach currently used by IBM. Hence, our procedure provides a genuine improvement for applications where noise remains a limiting factor.
Quantum computers promise to provide unprecedented computational power in applications such as optimisation or simulation by exploiting the fact that information is not encoded in classical but in quantum systems [1; 2]. In recent years, the field has seen the rapid development of improved quantum hardware [3]. However, the practical benefit of commercially available quantum computers based on superconducting qubits remains limited by the relatively low fidelities of multi-qubit interactions [4].
In addition to the improvement of hardware components, it is possible to enhance gate fidelities using optimised control approaches [5]. With the introduction of Qiskit Pulse [6], it is now possible to precisely control real quantum hardware via the IBM Quantum Lab [7]. As explained in Ref. [8], one can specify the amplitude, frequency and phase of the physical microwave pulses that drive the qubits to implement custom single-qubit and multi-qubit gates [9]. Hence, Qiskit Pulse allows for designing and testing control approaches on the level of physical operations instead of logical operations [10].
The cross-resonance gate (CR) is a particularly important two-qubit interaction on superconducting qubits, as it combines various desirable features and enables the construction of the Controlled-NOT gate [11] which is the standard entangling operation of universal gate sets [2]. As demonstrated in Ref. [12], one can implement a high-fidelity CR gate using the pulse sequence schematically illustrated in Fig. 1. When successfully calibrated, this corresponds to implementing the interaction \(H_{I}\approx g(Z\otimes X)\) with \(g\) some coupling constant that depends on the hardware components and the drive amplitude [12; 13]. The time evolution operator generated by this Hamiltonian reads \(U(t)=\cos(gt)\mathbf{1}\otimes\mathbf{1}-i\sin(gt)Z\otimes X\)[9]. Setting \(gt=\pi/4\) by changing the amplitude or duration of the pulses, a CR(\(\pi/2\)) gate is implemented [12].
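To see explicitly why \(gt=\pi/4\) yields the CR(\(\pi/2\)) gate, note that \((Z\otimes X)^{2}=\mathbf{1}\), so the closed form above equals the matrix exponential \(\exp(-igt\,Z\otimes X)\). A short NumPy check (ours, purely illustrative):

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1, -1]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
ZX = np.kron(Z, X)

gt = np.pi / 4  # pulse area for a CR(pi/2) gate
U_closed = np.cos(gt) * np.kron(I2, I2) - 1j * np.sin(gt) * ZX
U_exp = expm(-1j * gt * ZX)

assert np.allclose(U_closed, U_exp)  # closed form matches the exponential
```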
By using Qiskit Pulse on publicly available quantum backends via the IBM Quantum Lab, the method presented in Ref. [12] can be extended to significantly improve the noise resistance of multi-qubit gates. Specifically, we describe a pulse-level approach for calibrating a set of cross-resonance gates and demonstrate that they achieve significantly higher noise resistances than their circuit-level implementations used by IBM. Crucially, the procedure we present is straightforwardly replicated. Accordingly, we provide a powerful extension to the set of high-fidelity, multi-qubit gates on currently available quantum computers based on superconducting qubits.
First, we introduce a runtime-efficient procedure for calibrating a \(Z\otimes X\) cross-resonance interaction CR(\(\theta\)) via the IBM Quantum Lab, thereby extending the approach presented in Ref. [12] to values of \(\theta\) other than \(\pi/2\). Second, we describe how this CR(\(\theta\)) gate can be used to straightforwardly implement a range of other two-qubit interactions. And finally, we demonstrate that our pulse-level implementation achieves significantly higher noise resistances, compared to the circuit-level implementation which IBM currently uses, by performing a modified interleaved randomised benchmarking experiment [14].
We begin by presenting our procedure for calibrating a CR(\(\theta\)) gate. The method is adapted from Ref. [12] but differs in two important respects. First, we generalise the procedure to values of \(\theta\) other than \(\pi/2\). And second, we streamline the procedure to make it more runtime-efficient. This enables us to perform the full calibration procedure on publicly available quantum backends via the IBM Quantum Lab, even under runtime constraints.
First, we need to determine the correct amplitude
Figure 1: The schematic adapted from Ref. [12] illustrates the pulse schedule for a CR(\(\pi/2\)) cross-resonance gate. The control qubit \(Q_{C}\) is driven at the resonant frequency \(\omega_{T}\) of the target qubit \(Q_{T}\). Unwanted terms in the interaction Hamiltonian \(H_{I}\) are suppressed by the echo sequence on \(Q_{C}\) (i.e. the upper two drive lines) and cancellation tones on \(Q_{T}\) (i.e. the lower drive line), such that \(H_{I}\approx g(Z\otimes X)\) follows.
for the CR(\(\theta\)) pulse between our control qubit \(Q_{C}\) and target qubit \(Q_{T}\). For this, we define a flat-top pulse with Gaussian edges and some real amplitude \(A\). The width and Gaussian rise time of the pulse are inherited from the CR(\(\pi/2\)) pulse that forms part of the standard Controlled-NOT implementation between \(Q_{C}\) and \(Q_{T}\). While testing these parameters might lead to a more precise calibration, we adopt this assumption to significantly reduce the calibration runtime. We note that this assumption is self-consistent since it leads to a high-fidelity CR(\(\theta\)) gate as shown in the subsequent experiments.
Then, we sweep through different real amplitude values \(A\) and measure \(Q_{T}\) in the computational basis to calculate the Pauli expectation value \(\langle Z(A)\rangle\). We repeat the experiment with \(Q_{C}\) initialised in \(|0\rangle\) and \(|1\rangle\). Assuming that the \(Z\otimes X\) or \(Z\otimes Y\) component in \(H_{I}\) is much larger than the other contributions, we find \(\langle Z\rangle\approx\cos(\theta)\), as for an ideal CR(\(\theta\)) gate we have \(\langle Z\rangle=\cos(\theta)\). Note that the assumption made here is consistent with the results of the subsequent tomography experiments. Hence, for a given \(\theta\), we can find the amplitude \(A_{\theta}\) that leads to the correct value of \(\langle Z\rangle\) and use this amplitude for our pulse.
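A minimal sketch (ours) of this amplitude-selection step; the synthetic sweep model and function names are assumptions, with the real \(\langle Z(A)\rangle\) values coming from the hardware sweep:

```python
import numpy as np

def find_amplitude(theta, amps, z_vals):
    """Return A_theta with <Z(A_theta)> = cos(theta), by linear interpolation
    of sweep data z_vals[i] = <Z(amps[i])>. Assumes <Z> decreases
    monotonically over the swept amplitude range."""
    target = np.cos(theta)
    # np.interp needs increasing x-values, so interpolate over -<Z>
    return float(np.interp(-target, -np.asarray(z_vals), np.asarray(amps)))

# Toy sweep standing in for hardware data: <Z(A)> = cos(pi * A / 0.4)
amps = np.linspace(0.0, 0.4, 41)
z_vals = np.cos(np.pi * amps / 0.4)
A_theta = find_amplitude(np.pi / 5, amps, z_vals)   # -> 0.08 for this toy model
```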
Second, we need to determine the correct phase for the CR(\(\theta\)) pulse. For this, we sweep through different pulse widths with our flat-top Gaussian pulse using the real amplitude that we previously determined. We repeat the experiment with \(Q_{C}\) initialised in \(|0\rangle\) and \(|1\rangle\). By measuring the expectation values \(\langle X\rangle\), \(\langle Y\rangle\), and \(\langle Z\rangle\) on the target qubit, we reconstruct the coefficients of the terms in the cross-resonance interaction Hamiltonian \(H_{I}\). For details regarding the Hamiltonian tomography experiment, we refer the reader to Ref. [12] and Ref. [15].
Hence, we can determine the coefficients \(C_{ZX}\) and \(C_{ZY}\) of the cross-resonance \(Z\otimes X\) and \(Z\otimes Y\) components in \(H_{I}\), respectively. Recognising that \(C_{ZX}\propto\cos(\phi-\phi_{0})\) and \(C_{ZY}\propto\sin(\phi-\phi_{0})\), where \(\phi\) is the phase of the cross-resonance pulse [11], we can set the phase of the pulse to \(\phi_{0}=-\tan^{-1}(C_{ZY}/C_{ZX})\) such that the \(Z\otimes Y\) component in \(H_{I}\) vanishes. Thereby, we can calibrate the phase of the cross-resonance pulse in a single experiment. This provides a far more efficient method than sweeping through phases as described in Ref. [8] and Ref. [12].
Third, we need to determine the correct phase and amplitude for the cancellation pulse, which is a resonant flat-top Gaussian pulse on the target qubit with the same duration and Gaussian rise times as the cross-resonance pulse. The purpose of the cancellation pulse is to cancel the \(\mathbf{1}\otimes X\) and \(\mathbf{1}\otimes Y\) components in \(H_{I}\). The correct phase for the cancellation tone can be inferred from the Hamiltonian tomography experiment we already performed. By reading off the \(C_{\mathbf{1}X}\) and \(C_{\mathbf{1}Y}\) coefficients of the \(\mathbf{1}\otimes X\) and \(\mathbf{1}\otimes Y\) components in \(H_{I}\), we can calculate \(\phi_{1}=-\tan^{-1}(C_{\mathbf{1}Y}/C_{\mathbf{1}X})\). As the phase of the cross-resonance pulse is set to \(\phi_{0}\), the correct phase for the cancellation tone is \(\phi_{0}-\phi_{1}\), as presented in Ref. [12].
To determine the correct amplitude, we perform two Hamiltonian tomography experiments for the full pulse
Figure 2: Results for the amplitude calibration experiment with \(\theta=\pi/5\). We determine the correct pulse amplitude by reading off \(A_{\theta}\), which is defined by \(\langle Z(A_{\theta})\rangle=\cos(\theta)\). Using this amplitude will lead to a CR(\(\pi/5\)) gate which has the same duration as the CR(\(\pi/2\)) pulse that forms part of the standard Controlled-NOT implementation between \(Q_{C}\) and \(Q_{T}\). The error bars are smaller than the marker size.
Figure 3: Results of the Hamiltonian tomography experiment using the fully calibrated cross-resonance pulse sequence. We fit the data as described in Ref. [15] to extract the coefficients of the contributions in the interaction Hamiltonian \(H_{I}\) and find the values indicated at the bottom of the figure. The coefficient \(C_{ZX}\) of the \(Z\otimes X\) term is significantly larger than all other coefficients which indicates a successful calibration. The error bars are smaller than the marker size.
sequence in Fig. 1. In the first experiment, the cancellation tone amplitude is set to zero while in the second experiment, we set it to some value \(A_{0}\). The correct order of magnitude for \(A_{0}\) can be estimated from the cancellation tone of the CR(\(\pi/2\)) pulse that forms part of the Controlled-NOT implementation between \(Q_{C}\) and \(Q_{T}\). Hence, we can extract the values \(C^{1}_{1X}\) and \(C^{1}_{1Y}\) as well as \(C^{2}_{1X}\) and \(C^{2}_{1Y}\) from the two experiments. Assuming a linear relationship between the cancellation tone amplitude and the coefficients as seen in Ref. [12], we find \(A_{X}=A_{0}C^{1}_{1X}/(C^{1}_{1X}-C^{2}_{1X})\) and \(A_{Y}=A_{0}C^{1}_{1Y}/(C^{1}_{1Y}-C^{2}_{1Y})\).
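The phase and cancellation-amplitude calculations of this and the previous steps reduce to a few lines. A sketch (ours), where the \(C\) coefficients stand for fitted Hamiltonian-tomography values and the numbers are placeholders:

```python
import numpy as np

def pulse_phase(C_x, C_y):
    """Phase that nulls the Y-type term, e.g. phi0 = -atan2(C_ZY, C_ZX).
    arctan2 is the quadrant-aware version of the paper's tan^-1 ratio."""
    return -np.arctan2(C_y, C_x)

def cancellation_amplitude(A0, C1, C2):
    """Amplitude that nulls an IX (or IY) coefficient, assuming it varies
    linearly from C1 (cancellation amplitude 0) to C2 (amplitude A0)."""
    return A0 * C1 / (C1 - C2)

phi0 = pulse_phase(C_x=0.5, C_y=0.1)               # placeholder tomography fits
A_X = cancellation_amplitude(A0=0.02, C1=0.30, C2=0.10)
A_Y = cancellation_amplitude(A0=0.02, C1=0.20, C2=0.05)
```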
If the value of \(\phi_{1}\) is calibrated correctly, then we find \(A_{X}\approx A_{Y}\) as the unique solution for the correct amplitude of the cancellation tone [12]. Hence, to calibrate the full cross-resonance pulse sequence, we only require four Hamiltonian tomography experiments which provides a far more efficient procedure than the calibration methods described in Ref. [8] and Ref. [12]. Furthermore, it is now possible to calibrate the pulse sequence such that it implements a CR(\(\theta\)) gate for values of \(\theta\) other than \(\pi/2\).
We have implemented this calibration procedure using the seven-qubit IBM Quantum backend ibm_oslo with qubit 2 and qubit 1 as the control qubit and target qubit, respectively. The resonance frequency and anharmonicity of the control qubit are \(f_{2}=4.962\) GHz and \(\delta_{2}=-0.344\) GHz, and \(f_{1}=5.046\) GHz and \(\delta_{1}=-0.343\) GHz for the target qubit [16]. To illustrate our generalised procedure by way of example, we calibrate a CR(\(\theta\)) gate for \(\theta=\pi/5\). In all calibration experiments, we use at least 4 000 repetitions per circuit such that the statistical errors become negligibly small. We also mitigate readout errors using the method described in Ref. [17]. The results of the amplitude calibration experiment are illustrated in Fig. 2 and the results of an experiment that verifies the calibration are displayed in Fig. 3. For these two experiments, we have used 20 000 repetitions per circuit. Setting the pulse width to the inherited width as described above, we receive the CR(\(\pi/5\)) gate. This demonstrates that we can use our runtime-efficient, pulse-level procedure to calibrate a CR(\(\theta\)) gate for \(\theta\) other than \(\pi/2\).
Having implemented the CR(\(\theta\)) gate with \(H\propto Z\otimes X\), it is straightforward to implement a range of other two-qubit interactions. As illustrated in Fig. 4, we can use standard single-qubit gates on \(Q_{T}\) and \(Q_{C}\) to convert the \(Z\otimes X\) interaction into any \(A\otimes B\) interaction with \(A,B\in\{X,Y,Z\}\). Note that we have written the gate that corresponds to the Hamiltonian \(H=(-\theta/2)A\otimes B\) as AB(\(\theta\)) for ease of notation. The relations in Fig. 4 are easily proven using standard gate identities [2].
Figure 5: (a) Schedule for the ZX(\(\theta\)) gate using IBM’s circuit-level implementation. The four yellow pulses are the cross-resonance pulses which form part of the two required Controlled-NOT gates. The gate duration is 497.8 ns. (b) Schedule for the ZX(\(\theta\)) gate using our pulse-level approach which only requires two cross-resonance pulses, halving the number of two-qubit pulses in comparison with the circuit-level approach. The gate duration is reduced to 206.2 ns.
Figure 4: Circuit identities that show how the ZX(\(\theta\)) gate can be converted into a range of other two-qubit cross-resonance gates with \(H\propto A\otimes B\) where \(A,B\in\{X,Y,Z\}\), using single-qubit gates only. The relations are easily shown by applying standard gate identities from Ref. [2] to the time evolution operator of the ZX(\(\theta\)) gate \(U(t)=\cos(gt)\mathbf{1}\otimes\mathbf{1}-i\sin(gt)Z\otimes X\). Here, H indicates the Hadamard gate and S the phase gate.
Finally, the \(\text{XZ}(\theta)\), \(\text{YZ}(\theta)\), and \(\text{YX}(\theta)\) gates can be implemented either via circuit identities as used in Fig. 4, or alternatively by swapping the control and target qubit in the calibration procedure. Hence, we conclude that having calibrated the \(\text{ZX}(\theta)\) gate, it is straightforward to implement any of the nine \(\text{AB}(\theta)\) gates with \(A,B\in\{X,Y,Z\}\).
Using pulse-level methods, we can further extend the set of easily implemented two-qubit gates. Note that the \(S\) and \(S^{\dagger}\) gates in Fig. 4 correspond to virtual phase shifts with \(\Delta\phi=\pm\pi/2\) on the relevant qubit, respectively [18]. As Qiskit Pulse allows us to directly specify a phase shift, we can also implement values of \(\Delta\phi\) other than \(\pi/2\). For instance, by shifting the phase of the cross-resonance pulse and cancellation tone by \(\Delta\phi_{0}\), we can convert the \(\text{ZX}(\theta)\) gate into a \(\text{Z}(\cos(\Delta\phi_{0})\text{X}+\sin(\Delta\phi_{0})\text{Y})(\theta)\) gate. While this treatment is not exhaustive, it nicely illustrates that using circuit-level and pulse-level methods, a range of two-qubit interactions are straightforwardly implemented once we have calibrated the \(\text{ZX}(\theta)\) gate.
Any of these gates can also be implemented using circuit-level methods with at most three Controlled-NOT gates [19]. However, the advantage of our pulse-level implementation of the \(\text{ZX}(\theta)\) gate is that we require fewer cross-resonance pulses as illustrated in Fig. 5. As two-qubit interactions on superconducting qubits are susceptible to noise [9], minimising the number of cross-resonance pulses should increase the noise resistance of the \(\text{ZX}(\theta)\) gate. Further, as single-qubit gates achieve near-perfect fidelities [20] while virtual phase gates have perfect fidelities [18], converting our \(\text{ZX}(\theta)\) gate into other two-qubit gates as in Fig. 4 should lead to similar improvements for the noise resistances of these gates.
We test both hypotheses by performing an interleaved randomised benchmarking experiment similar to those in Ref. [14] and Ref. [21]. To measure the noise resistance we proceed as follows. We define a set of gate sequence lengths \(\{m_{1},...,m_{j}\}\) with \(\Delta=m_{i+1}-m_{i}\) some fixed positive integer and \(m_{j}=N\). Using the StandardRB method in Qiskit Pulse, we sample a set of random gate sequences \(\{C_{1},...,C_{N}\}\) and define a new set of gate sequences \(\mathbf{R}=\{R_{m_{1}},...,R_{m_{j}}\}\) with \(R_{m_{i}}=C_{1}...C_{m_{i}}\tilde{C}_{m_{i}}\) where \(\tilde{C}_{m_{i}}\) inverts the previous operations such that the action of each \(R_{m_{i}}\) is just the identity operation. Then, to measure the noise resistance of the standard and custom \(\text{ZX}(\theta)\) gate, we interleave \(\text{ZX}(\theta)\text{ZX}(-\theta)\) after every \(C_{l}\) in each \(R_{m_{i}}\), giving two sets of gate sequences \(\mathbf{R_{S}}\) and \(\mathbf{R_{C}}\), respectively. We run each of the gate sequences in \(\mathbf{R}\), \(\mathbf{R_{S}}\) and \(\mathbf{R_{C}}\), and measure the fractional ground state population. As the sequences in \(\mathbf{R_{S}}\) and \(\mathbf{R_{C}}\) are identity operations that acquire an additional error due to the interleaved cross-resonance gates, we expect the ground state population to decay faster by a factor of \(F^{2m_{i}}\), where \(F\leq 1\) characterises the additional error introduced by the \(\text{ZX}(\theta)\) gate, in comparison to the case of non-interleaved gate sequences in \(\mathbf{R}\). By fitting the data to an exponential decay, we find the values of \(F_{S}\) and \(F_{C}\).
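The fit itself is a standard exponential-decay regression; a minimal fitting sketch (ours), assuming the population decays toward the two-qubit fully mixed value of 0.25 shown in Fig. 6, with the benchmarking parameters from the experiments below:

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p):
    """Ground-state population after m Cliffords; 0.25 is the two-qubit
    fully mixed floor (dashed line in Fig. 6)."""
    return A * p**m + 0.25

def interleaved_F(m_vals, pop_ref, pop_interleaved):
    """The interleaved pair ZX(th)ZX(-th) multiplies the per-Clifford decay
    by F^2, so F = sqrt(p_int / p_ref)."""
    (_, p_ref), _ = curve_fit(rb_decay, m_vals, pop_ref, p0=[0.75, 0.99])
    (_, p_int), _ = curve_fit(rb_decay, m_vals, pop_interleaved, p0=[0.75, 0.99])
    return np.sqrt(p_int / p_ref)

m_vals = np.arange(5, 69, 7)                     # m1 = 5, Delta = 7, N = 68
pop_ref = 0.75 * 0.99**m_vals + 0.25             # synthetic example curves
pop_int = 0.75 * (0.99 * 0.97**2)**m_vals + 0.25
F = interleaved_F(m_vals, pop_ref, pop_int)      # ~0.97 for this example
```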
Figure 6: Results of the interleaved randomised benchmarking experiments for six different two-qubit gates in the custom pulse-level and standard circuit-level implementation on a real IBM Quantum backend. Each point indicates the fractional ground state population averaged over ten random gate sequences while error bars show the standard deviations. We observe an exponential decay to the fully mixed state indicated by the dashed line at a fractional ground state population of 0.25. For each gate, \(F_{S}\) and \(F_{C}\) characterise the respective noise resistances of the standard and custom implementation as discussed in the main text. In all cases, we find that \(F_{C}\geq F_{S}\) marks a significant improvement in the noise resistances of the gates.
We have implemented the interleaved benchmarking procedure using the same quantum backend and qubits as described above. The gates ZX, ZY, ZZ, XY, XX and YY are tested using \(m_{1}=5\), \(\Delta=7\), and \(N=68\) as benchmarking parameters. For the custom implementation we have used the ZX(\(\pi/5\)) gate implemented above while for the standard implementation we have used circuit-level methods in Qiskit [22], employing the circuit identities in Fig. 4 where necessary. For each gate, we repeat the experiment with ten random gate sequences using 20 000 repetitions per circuit. The results are shown in Fig. 6.
We must be careful in interpreting \(F_{S}\) and \(F_{C}\) as they do not characterise the total gate error but rather the error associated with performing an identity operation by using the ZX(\(\theta\)) gate and its inverse. In general, we expect this error to come from coherent errors and noise. Performing additional Hamiltonian tomography experiments for the ZX(\(\theta\)) gate and its inverse both in the custom and standard implementation, we measure similar coefficients for all terms in the respective Hamiltonians. This allows us to rule out the possibility that the large discrepancy between \(F_{S}\) and \(F_{C}\) is due to coherent errors.
Hence, we can interpret \(F_{S}\) and \(F_{C}\) as characterising the error from noise for the standard and custom ZX(\(\theta\)) gate implementations, respectively. With this interpretation, we can verify both hypotheses. First, for the ZX(\(\theta\)) gate, we observe that \(F_{C}\) is significantly larger than \(F_{S}\) in Fig. 6, indicating that our custom implementation, requiring fewer cross-resonance pulses, is more resistant to noise. And second, we see similar improvements for the noise resistances of the other gates that we tested in Fig. 6, as expected due to high single-qubit gate fidelities. Therefore, our pulse-level implementation of the ZX(\(\theta\)) gate provides us with a wide range of two-qubit gates with significantly improved noise resistances. Since the overall pulse schedule time is significantly shorter than the coherence time of the qubits [16], we conjecture that this improvement is due to the simplified pulse architecture we developed, rather than the reduced gate time.
Finally, we comment on the relevance of this result for practical quantum computing. While generally advantageous, improved noise resistances for two-qubit operations are particularly useful in Hamiltonian Simulation. This often requires the repeated application of multi-qubit gates, for instance in Trotterisation approaches [2], and thus remains limited by the low noise resistances of multi-qubit interactions. By calibrating a gate using our pulse-level approach, the noise resistance can be significantly improved. This can enable Hamiltonian Simulation, as we will present in a subsequent paper [23]. In this sense, our procedure is not just interesting from an engineering but also from a physics perspective, as we can use the improved gates to simulate interesting physical systems on publicly available IBM quantum backends.
To conclude, we provide a powerful extension to the set of high-fidelity, multi-qubit gates on currently available quantum computers based on superconducting qubits. With our runtime-efficient and reproducible pulse-level approach, one can calibrate a CR(\(\theta\)) cross-resonance gate for a given value of \(\theta\) which is extended to a wide range of other two-qubit gates by applying single-qubit gates. We have demonstrated that this pulse-level approach, requiring fewer two-qubit pulses than the circuit-level approach currently used by IBM, significantly improves the noise resistances of the CR(\(\theta\)) gate and related interactions.
While providing a compelling proof of principle, we were limited to performing experiments on publicly available IBM Quantum backends. Future work should focus on repeating our experiments on those IBM Quantum backends which are currently not available to the general public. Further, our pulse-level approach should be tested in quantum computing applications to demonstrate the practical usefulness of the improvement. This will be explored for Hamiltonian Simulation in a subsequent paper.
###### Acknowledgements.
We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. DD acknowledges support from the Studienstiftung des Deutschen Volkes. FT acknowledges support from the UKRI New Horizons Grant EP/X017249/1.
|
2305.16699 | Automatic Tuning of Loss Trade-offs without Hyper-parameter Search in
End-to-End Zero-Shot Speech Synthesis | Recently, zero-shot TTS and VC methods have gained attention due to their
practicality of being able to generate voices even unseen during training.
Among these methods, zero-shot modifications of the VITS model have shown
superior performance, while having useful properties inherited from VITS.
However, the performance of VITS and VITS-based zero-shot models vary
dramatically depending on how the losses are balanced. This can be problematic,
as it requires a burdensome procedure of tuning loss balance hyper-parameters
to find the optimal balance. In this work, we propose a novel framework that
finds this optimum without search, by inducing the decoder of VITS-based models
to its full reconstruction ability. With our framework, we show superior
performance compared to baselines in zero-shot TTS and VC, achieving
state-of-the-art performance. Furthermore, we show the robustness of our
framework in various settings. We provide an explanation for the results in the
discussion. | Seongyeon Park, Bohyung Kim, Tae-hyun Oh | 2023-05-26T07:39:26Z | http://arxiv.org/abs/2305.16699v1 | Automatic Tuning of Loss Trade-offs without Hyper-parameter Search in End-to-End Zero-Shot Speech Synthesis
###### Abstract
Recently, zero-shot TTS and VC methods have gained attention due to their practicality of being able to generate voices even unseen during training. Among these methods, zero-shot modifications of the VITS model have shown superior performance, while having useful properties inherited from VITS. However, the performance of VITS and VITS-based zero-shot models vary dramatically depending on how the losses are balanced. This can be problematic, as it requires a burdensome procedure of tuning loss balance hyper-parameters to find the optimal balance. In this work, we propose a novel framework that finds this optimum without search, by inducing the decoder of VITS-based models to its full reconstruction ability. With our framework, we show superior performance compared to baselines in zero-shot TTS and VC, achieving state-of-the-art performance. Furthermore, we show the robustness of our framework in various settings. We provide an explanation for the results in the discussion.
Seongyeon Park\({}^{1}\), Bohyung Kim\({}^{1}\), Tae-hyun Oh\({}^{2,3}\)\({}^{1}\)CNAI, Korea
\({}^{2}\)Institute for Convergence Research and Education in Advanced Technology, Yonsei University, Korea
\({}^{3}\)Dept. of EE and GSAI, POSTECH, Korea.
[email protected]
**Index Terms**: Zero-shot, Voice Conversion, Text-to-speech, Speech Synthesis, Efficient Optimum Discovery
## 1 Introduction
The recent advance of neural networks has enhanced the quality of speech synthesis models in the areas of Text-to-Speech (TTS) and Voice Conversion (VC), resulting in the production of realistic and naturally sounding speech. Among multiple categories of TTS and VC, zero-shot approaches, _e.g._, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], have obtained much attention. These approaches are practical, because they can synthesize speech of given voices even unseen during training.
While models for zero-shot VC and TTS were developed separately, Glow-TTS [12] enables VC and multi-speaker TTS with a single model, utilizing the invertibility of Normalizing Flows (NFs) [13]. Building upon this, VITS [14] extends it with Variational Autoencoder (VAE) [15] structure where its decoder is HiFi-GAN [16]. By jointly training its components, VITS is not only practical in terms that it works in a single stage (_i.e._, can predict waveform directly), but can also produce ground truth level fidelity speech of multiple speakers.
By virtue of these favorable properties, there have been attempts [17, 18, 19, 20] to extend VITS or Glow-TTS to zero-shot VC and TTS. They use speaker encoders instead of the speaker embedding table in VITS or Glow-TTS, so that the models can extract and synthesize diverse unseen voices in the inference phase, _i.e._, enabling zero-shot. While extending the useful properties of Glow-TTS and/or VITS, these zero-shot models reported superior performance over previous zero-shot VC and TTS models. In particular, VITS-based models [19, 20] show superior performance compared to Glow-TTS-based models [17, 18], as the models can internally optimize the latent representations instead of using human-designed features such as mel-spectrograms.
However, we found that VITS and zero-shot modifications of VITS tend to be sensitive to how losses are balanced. Specifically, performance depending on the balance hyper-parameter for reconstruction loss can be seen in Figure 1. A naive approach to finding the optimal point would involve searching and tuning the loss balance hyper-parameters. In this work, we propose a framework that alleviates the exhaustive tuning process and instead enables a near-optimal trade-off selection without search.
Our proposed method is built on our hypothesis that the converged reconstruction loss value of separately training the HiFi-GAN vocoder would provide a favorable reference for the reconstruction loss in training VITS-based models. More specifically, as a preliminary task, we first train just HiFi-GAN as a mel-spectrogram-to-waveform vocoder. Denoting the converged reconstruction loss value as \(\varepsilon^{*}\), we obtain this value from the preliminary task. Then, we train VITS-based models for the main speech synthesis task, such that its reconstruction loss is
Figure 1: Word Error Rate of speech synthesized through VC or TTS, from VITS and YourTTS [20] according to the loss balance hyper-parameter \(\alpha\) (the loss weight parameter of the reconstruction loss). Both axes are in log scale. The WER has a noticeable variance depending on the balance hyper-parameter. Interestingly, the results obtained with the parameter suggested by the original authors show a notable gap from the best points we found. These graphs suggest that there is likely to exist an optimal value of the hyper-parameter \(\alpha\). One might exhaustively tune balance hyper-parameters for better performance. In this work, we propose a framework that alleviates the exhaustive hyper-parameter tuning process, and instead enables a near-optimal trade-off selection without search.
converged at \(\varepsilon^{*}\), which should be sufficient to obtain high-fidelity audio even in the main task. We realize this learning process with the Modified Differential Multiplier Method (MDMM) [21].
With this method, our framework solves the aforementioned trade-off problem in VITS-based models without hyper-parameter tuning. We experimentally show the superior performance achieved by this framework in zero-shot VC and TTS, achieving state-of-the-art performance. We also show its robustness across various model and audio configurations. We further explain the results in the discussion. Source code and audio samples are available at: https://github.com/cnaigithub/Auto_Tuning_Zeroshot_TTS_and_VC
## 2 Method
In this section, we first describe the base models used in this work. We explain VITS [14], the state-of-the-art model for multi-speaker speech synthesis, and introduce a simple variant of VITS that enables zero-shot synthesis. Then we describe our framework and how to apply it to the base models.
### Base Model for Speech Synthesis
VITS basically integrates Glow-TTS [12] and HiFi-GAN [16].
Glow-TTS is a Normalizing Flow (NF) [13]-based TTS model, which uses the monotonic alignment search algorithm to obtain a hard attention alignment between input text and target mel-spectrogram timesteps. The original Glow-TTS is designed to conduct text-to-mel-spectrogram conversion, _i.e._, TTS.
HiFi-GAN is a waveform synthesizer, which typically decodes mel-spectrograms into speech waveforms via adversarial training. Instead of mel-spectrograms as input, the HiFi-GAN used as the decoder in VITS decodes latent features obtained from Glow-TTS and directly converts to waveforms, which enables achieving high fidelity audio synthesis.
With these modules, VITS integrates them and adds the VAE structure. The latent representation of this VAE follows a posterior distribution, encoded from linear spectrograms of speech audio. Also, a text-conditioned prior distribution is produced by the Glow-TTS module. These posterior and prior are regularized by the KL divergence during training. Noticeably, the invertibility of NF in VITS allows latent features to go forward and backward, which enables performing both VC and TTS in a single model. In addition, to achieve multi-speaker speech synthesis, VITS adopts a trainable speaker embedding table. The speaker embeddings are injected to condition the posterior encoder, the NF layer, and the HiFi-GAN decoder.
**Training of VITS.** VITS employs adversarial training in the waveform domain by training both a generator and discriminator network. The generator and discriminator are optimized by the following respective losses,
\[L_{g} =\alpha L_{recon}+L_{KL}+L_{dur}+L_{adv}(G;D)+L_{fm}(G;D),\] \[L_{d} =L_{adv}(D;G), \tag{1}\]
where the reconstruction loss term \(L_{recon}\) is defined by an \(L_{1}\) loss between the mel-spectrogram of the generated waveform and ground truth waveform, \(L_{adv}\) is the adversarial loss, and \(L_{KL}\) is the KL divergence between the posterior obtained from audio and the prior obtained from text. For detail on the other losses \(L_{dur}\) and \(L_{fm}\), refer to the literature [14]. The balance hyper-parameter \(\alpha\) weights the reconstruction loss against other losses, and is the same one used in Figure 1. As the model outputs waveform, a differentiable STFT and linear projection to the mel-scale is required to back-propagate from \(L_{recon}\)[14].
Note that, when training HiFi-GAN as a mel-spectrogram-to-waveform vocoder, it uses the same losses for its generator and discriminator as Equation (1), except for excluding \(L_{KL}\) and \(L_{dur}\). This enables a direct comparison of \(L_{recon}\) values in HiFi-GAN training with the one in VITS training despite noticeable differences in modeling and tasks.
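For concreteness, the differentiable reconstruction term can be sketched as follows (ours, not the released code); the sample rate, hop size, and window size match this paper's configuration in Sec. 3.1, while n_mels=80 and the log compression are our assumptions:

```python
import torch
import torchaudio

# Spectrogram settings from Sec. 3.1 (16 kHz audio, hop 320, window 1280)
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=1280, win_length=1280, hop_length=320, n_mels=80
)

def recon_loss(wav_fake, wav_real, eps=1e-5):
    """L1 distance between (log-)mel spectrograms of the generated and real
    waveforms. Differentiable, so it back-propagates into the decoder."""
    m_fake = torch.log(mel(wav_fake) + eps)  # log compression: our assumption
    m_real = torch.log(mel(wav_real) + eps)
    return torch.nn.functional.l1_loss(m_fake, m_real)
```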
**Zero-shot Modification of VITS.** We modify and extend the original multi-speaker VITS model with simple changes in order to enable end-to-end zero-shot speech synthesis. First, to prevent a performance drop on non-English datasets, we use a jointly trained speaker encoder instead of a pre-trained one: we replace the speaker embedding table of VITS with a speaker encoder consisting of convolutional layers. The architecture of the speaker encoder is similar to the one in [19], except that our speaker encoder takes linear spectrograms as the input source. Second, we do not condition the posterior encoder on speaker embeddings, which is meant to prevent a performance drop on voice conversion with unseen source speakers. Third, we use 10 transformer blocks for the text encoder, following YourTTS [20]. Fourth, we use the Deterministic Duration Predictor (DDP) instead of the Stochastic Duration Predictor (SDP), due to reports on the instability of SDP in [20, 22]. For simplicity, we call this modified model _Zero-shot VITS_. Through experiments, we show that this model performs comparably to YourTTS [20] (a baseline zero-shot modification of VITS) without our framework applied, but outperforms it when our framework is applied.
### Loss Value of Separate Vocoder Training as a Guide
As shared in all VAE-based methods, both VITS and zero-shot modifications of VITS show trade-offs between their reconstruction loss and the other losses. By changing the balance ratio of these losses, these models converge on different Pareto equilibria. As mentioned before, model performance varies drastically depending on which Pareto equilibrium it converged (Figure 1). A vanilla method to find the optimum would be to conduct exhaustive iterative experiments with different balance parameters, which is computationally burdensome. Instead of directly attempting different balance hyper-parameters \(\alpha\), we tune a specific target value of \(L_{recon}\) to find an optimal balance.
The motivation of this approach is that the decoder of a VITS-based model is a HiFi-GAN; thus, if HiFi-GAN works well for its own vocoder task, the level of the converged value of \(L_{recon}\) may indicate the desired quality and may transfer to other tasks. To test this hypothesis, we separately train HiFi-GAN on the vocoder task alone, which we call the preliminary task. Then, we obtain the converged \(L_{recon}\) value, denoted as \(\varepsilon^{*}\), and do not use any other results from the preliminary task, including the trained vocoder model itself. In our framework, the idea is to use \(\varepsilon^{*}\) as the target value at which \(L_{recon}\) should converge during VITS training. That is, we propose to use this particular \(L_{recon}\) value as the desired point for the trade-off between \(L_{recon}\) and the other losses.
### Constraint Optimization of VITS to a Specific \(L_{recon}\)
**Constrained Optimization for Neural Networks.** Platt and Barr [21] introduce methods for constrained optimization of neural networks. We briefly explain the Modified Differential Multiplier Method (MDMM) proposed in [21]. Consider a neural network whose parameters are \(\theta\). Suppose we would like to solve the following optimization problem:
\[\min_{\theta}F(\theta)\quad\text{s.t.}\quad\ G(\theta)=0 \tag{2}\]
To optimize such problem, the Lagrange multiplier method can be used. To be more specific, the solution \(\theta\) of Equation (2) should be a critical point of the following Lagrangian function:
\[\mathcal{L}(\theta,\lambda)=F(\theta)+\lambda G(\theta) \tag{3}\]
for some Lagrange multiplier \(\lambda\). Note that the solution is also a critical point for \(\lambda\), as \(\frac{\partial\mathcal{L}}{\partial\lambda}=G(\theta)=0\). The Lagrange multiplier \(\lambda\) can be regarded as an additional variable and can be updated via gradient descent or ascent. The MDMM method adds a penalty term for the constraint as:
\[\mathcal{L}(\theta,\lambda)=F(\theta)+\lambda G(\theta)+\tfrac{c}{2}G(\theta)^ {2}, \tag{4}\]
where \(c\) is the damping constant parameter. The MDMM updates \(\theta\) with gradient descent and \(\lambda\) with gradient ascent over \(\mathcal{L}\). Please refer to [21] for more details.
**Selecting a VITS' Trade-Off Point with MDMM.** We can select a trade-off point of VITS by enforcing the reconstruction loss \(L_{recon}\) to a user-chosen constant value \(\varepsilon\). To be more specific, we define \(G(\theta)=L_{recon}(\theta)-\varepsilon\). The loss \(F(\theta)\) is set to be the sum of all the remaining loss terms. Then, we can update \(\theta\) and \(\lambda\) according to the following gradients:
\[\begin{array}{l}\frac{\partial\mathcal{L}}{\partial\lambda}=G(\theta)=L_{ recon}(\theta)-\varepsilon\\ \frac{\partial\mathcal{L}}{\partial\theta}=\frac{\partial F}{\partial\theta }+\lambda\frac{\partial L_{recon}}{\partial\theta}+c(L_{recon}-\varepsilon) \frac{\partial L_{recon}}{\partial\theta}\end{array} \tag{5}\]
By this optimization, the model converges on the specified Pareto equilibrium, where \(L_{recon}\) is equal to \(\varepsilon\). In our proposed framework, we set \(\varepsilon=\varepsilon^{*}\), but for the sake of proving its empirical optimality, we also report model performance on nearby values, specifically \(\varepsilon=\varepsilon^{*}\pm 0.1\).
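A minimal PyTorch sketch of this MDMM update (ours, not the released training code); `other_losses` and `recon_loss` stand in for \(F(\theta)\) and \(L_{recon}(\theta)\), and the damping constant and \(\lambda\) step size are placeholder values:

```python
import torch

class MDMM:
    """Modified Differential Multiplier Method (Eq. 5): gradient descent on the
    model parameters theta, gradient ascent on the multiplier lambda."""

    def __init__(self, eps_star, c=10.0, lr_lambda=1e-2):  # c, lr: assumed values
        self.eps_star, self.c, self.lr = eps_star, c, lr_lambda
        self.lam = 0.0  # Lagrange multiplier, updated by gradient ascent

    def step(self, optimizer, other_losses, recon_loss):
        g = (recon_loss - self.eps_star).detach().item()  # constraint G(theta)
        # The coefficient (lam + c*g) is constant w.r.t. theta, matching Eq. 5
        loss = other_losses + (self.lam + self.c * g) * recon_loss
        optimizer.zero_grad()
        loss.backward()          # gradient descent on theta
        optimizer.step()
        self.lam += self.lr * g  # gradient ascent on lambda

# Usage per training iteration (losses are scalar torch tensors):
#   mdmm = MDMM(eps_star=0.25)
#   mdmm.step(optimizer, F_loss, L_recon)
```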
## 3 Experiments, Results and Discussion
### Experiment Settings and Configurations
**Evaluation.** We use the Word Error Rate (WER) / Character Error Rate (CER) of the synthesized speech and the Resemblyzer Embedding Cosine Similarity (RECS) [23] between the target speech and the synthesized speech. We use a pre-trained ASR model for obtaining the WER / CER between the synthesized speech and the ground truth transcript.
Footnote 1: [https://github.com/resemble-ai/Resemblyzer](https://github.com/resemble-ai/Resemblyzer)
Footnote 2: [https://huggingface.co/facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft)
Footnote 3: The dataset consists of 15 different languages, mostly publicly available data. English [24, 25], Dutch / French / German / Italian / Polish / Portuguese / Spanish [26, 27], Thai / Cantonese [28], Korean [29], Japanese [30, 31], Mandarin [32], Vietnamese [33], and Arabic [34]. We also add some proprietary data containing 4 Korean speakers, which take up less than 0.5% of the entire training set. We exclude any data longer than 13 seconds, and trim silences with librosa [35].
**Datasets.** We either use the VCTK [24] dataset or a multilingual dataset. For VCTK, we hold out 4 male and 4 female speakers from its training set and use them for evaluation on unseen speakers. For evaluation on seen speakers, we randomly hold out 128 utterances from the remaining training set. The rest not used for evaluation is used for training. Evaluation for models trained with the multilingual dataset was conducted using the _test-clean_ split of LibriTTS [25], consisting of speakers unseen during training. The rest is used for training. We use the International Phonetic Alphabet for transcription, following VITS [14].
Footnote 4: For training _Zero-shot VITS_ with the multilingual dataset, we add a language embedding lookup table and condition the model on language embeddings, by following the multilingual training of YourTTS.
**Detailed Configurations.** We downsampled audio to 16kHz, and used linear/mel spectrograms with a hop size of 320 and a window size of 1280. The sampling rate and hop size were selected according to the input and output frequencies of WavLM [36] features, which are used in one of the baselines [11]. We trained each model with a batch size of 32 for 500k steps. We use the AdamW optimizer for our methods, with a learning rate of 2e-4 and a weight decay of 0.01. For the preliminary task, we train HiFi-GAN for 500k steps, as the improvement in perceptual quality is small afterward. We used the optimizers and learning rates specified in the original works for the other methods.
### Effectiveness of Our Framework
To see the effectiveness of our framework, we apply our framework to VITS and zero-shot modifications of VITS, and compare them with the state-of-the-arts in zero-shot VC / TTS on VCTK [24]. We denote this experiment as Exp1.
**Compared Methods.** YourTTS [20] is a VITS-based model that uses a language embedding table to be trained as a multilingual TTS model. It uses a pre-trained speaker encoder to enable zero-shot synthesis, and a speaker consistency loss to improve the similarity between the synthesized and reference speech. Its VITS-based structure enables both TTS and VC. C-DSVAE [11] is a VAE-based VC model with separate encoders for speaker information and content. Due to its fast convergence, we train C-DSVAE for 150k steps. We refer the readers to the original papers [11, 20] for more details. C-DSVAE is a two-stage model, which predicts the mel-spectrogram and thus needs an external vocoder to produce waveform. We use a separate HiFi-GAN vocoder [16] trained on VCTK for 500k steps with our configurations.
**Results.** The HiFi-GAN trained with our settings converged to \(L_{recon}=0.25\). Thus we use \(\varepsilon=\varepsilon^{*}=0.25\) for the MDMM optimization framework in Exp1. As shown in Table 1, _Zero-shot VITS_ with our framework shows favorable performance in all settings of VC and TTS compared to previously proposed state-of-the-art zero-shot models. Also, the performance of VITS and YourTTS is significantly improved by our framework.
Footnote 5: The VITS-based models used in Exp1, _i.e._ VITS, YourTTS, and _Zero-shot VITS_, have identical decoders, and thus share the same \(\varepsilon^{*}\) value.
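To make the constrained training concrete, below is a minimal, runnable toy sketch of an MDMM-style update that pins a "reconstruction" term to a target \(\varepsilon^{*}\); the scalar stand-in losses, learning rates, and damping coefficient are illustrative assumptions and do not reproduce our actual model or hyper-parameters.

```python
import torch

# Toy stand-ins for the reconstruction loss and the remaining VITS losses.
theta = torch.tensor([2.0, -1.0], requires_grad=True)

def l_recon(t):            # plays the role of L_recon
    return (t[0] - 1.0) ** 2

def l_other(t):            # plays the role of the other losses
    return (t[1] + 2.0) ** 2 + t[0] ** 2

eps_star = 0.25            # target reconstruction loss (converged HiFi-GAN value)
lam = torch.tensor(0.0)    # Lagrange multiplier
damping = 10.0             # quadratic damping term of MDMM
opt = torch.optim.AdamW([theta], lr=1e-2, weight_decay=0.0)

for _ in range(5000):
    g = l_recon(theta) - eps_star                   # constraint: L_recon == eps*
    loss = l_other(theta) + lam * g + 0.5 * damping * g ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    lam = lam + 1e-2 * g.detach()                   # gradient *ascent* on the multiplier

print(float(l_recon(theta)))                        # converges to ~eps_star
```

The multiplier is updated by gradient ascent on the constraint violation, so training settles where \(L_{recon}\) equals the target rather than drifting above or below it.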
### Robustness of Our Framework
We evaluate our framework with different models and data configurations to show its robustness. For each experiment, we first train HiFi-GAN vocoders under the respective conditions to obtain \(\varepsilon^{*}\). Then we train VITS-based models for each setting with \(\varepsilon{=}\varepsilon^{*}\) and the nearby values \(\varepsilon{=}\varepsilon^{*}\pm 0.1\). Unless specified otherwise, VCTK is the default dataset and _Zero-shot VITS_ the default model. We train models for 300k steps, as metric improvements were small afterward.
* Exp2 (Different dataset): Use the multilingual dataset.
* Exp3 (Decoder with less reconstruction ability): Reduce the internal channel numbers of HiFi-GAN and _Zero-shot VITS_' decoder by a factor of 8.
* Exp4 (Different model with same decoder): Use YourTTS, which has the same decoder as _Zero-shot VITS_.
* Exp5 (Different model with different dataset): Similar to Exp4, but use the multilingual dataset.
* Exp6 (Different audio configuration): Use audio with 22050 Hz sampling rate, and use spectrograms with hop size 256, window size 1024.
**Results.** As shown in Table 2, \(\varepsilon{=}\varepsilon^{*}\) gives the best trade-off compared to nearby values. In Exp3, although \(\varepsilon=\varepsilon^{*}-0.1\) showed better WER / CER than \(\varepsilon{=}\varepsilon^{*}\), the model produced noisy artifacts that clearly degrade perceptual quality; samples can be found in the supplementary material. Also, the models trained on the multilingual dataset with \(\varepsilon{=}\varepsilon^{*}\) (Exp2 & 5) show superior results over the multilingual baseline (YourTTS in Table 2). They even show VC performance quite similar to the ground truth in terms of WER.
**The Agnosticity of the \(\varepsilon^{*}\) Value.** Interestingly, despite different audio configurations, datasets, and models, similar values of \(\varepsilon^{*}\) were effective for Exp1, 2, 4\(\sim\)6. This is a useful result for future work, as _it may even remove the need for the preliminary task of training a separate HiFi-GAN_.
### Subjective Evaluation Results
For the Mean Opinion Score (MOS) test, we randomly sampled 10 sentences from the evaluation set of unseen speakers as source speech. The target voices were also randomly sampled, but selected to be different from the source. Using Amazon Mechanical Turk, we asked 15 native English speakers to rate the naturalness of the synthesized speech (Nat MOS) and how similar the synthesized voice is to the target voice (Sim MOS). Due to limited resources and to keep the user study simple, we evaluate perceptual quality only for the VC task. Ratings are integers from 1 to 5, with 5 being the best.
**Results.** In Table 3, the methods using our framework show better or comparable perceptual naturalness and speaker similarity compared to the other methods, while having significantly better WER / CER according to Tables 1 and 2. When using the multilingual dataset, _Zero-shot VITS_ with MDMM for unseen-to-unseen VC achieves Nat / Sim MOS comparable to the ground-truth audio.
### Discussion
We explain why the \(\varepsilon^{*}\) value obtained from HiFi-GAN is effective. When training HiFi-GAN, the perceptual quality improved as \(L_{recon}\) decreased. This hints that converging a VITS-based model to a smaller \(L_{recon}\) drives the decoder to a state where it produces high-quality audio, and thus gives a reason not to use an \(\varepsilon\) larger than \(\varepsilon^{*}\). However, the audio quality of HiFi-GAN did not improve significantly once \(L_{recon}\) dropped below \(\varepsilon^{*}\). This means that pushing the model towards an \(L_{recon}\) value smaller than this does not lead to improvement, but rather hinders the model's ability to regularize the posterior towards the text-conditioned prior. This gives a reason not to use an \(\varepsilon\) smaller than \(\varepsilon^{*}\).
## 4 Conclusion
In this work, we first highlighted the importance of tuning the trade-off between the reconstruction loss and the other losses of VITS-based models. We hypothesized that the converged reconstruction loss value \(\varepsilon^{*}\), obtained from the preliminary vocoder task with HiFi-GAN, guarantees a sufficient level of reconstruction quality across other tasks. To train models with the specific target reconstruction loss \(\varepsilon^{*}\), we used MDMM to enforce this constraint during training. This allows us to obtain a model superior to competitive baselines without searching for the balancing hyper-parameter. We showed that our framework generalizes well across various scenarios with different datasets and models. Moreover, with a larger multilingual dataset, our method achieves quality close to the ground truth in zero-shot VC. These results hint that the performance of VITS-based models can be improved by pushing towards a certain low reconstruction loss value determined by the quality of the decoder. While we showed this with the HiFi-GAN vocoder, it would be an interesting future direction to investigate whether our framework can be applied to other decoders.
[Table: model comparison on VC and TTS in terms of WER (%), CER, and related metrics.]
2301.13216 | Wave function network description and Kolmogorov complexity of quantum
many-body systems | Programmable quantum devices are now able to probe wave functions at
unprecedented levels. This is based on the ability to project the many-body
state of atom and qubit arrays onto a measurement basis which produces
snapshots of the system wave function. Extracting and processing information
from such observations remains, however, an open quest. One often resorts to
analyzing low-order correlation functions - i.e., discarding most of the
available information content. Here, we introduce wave function networks - a
mathematical framework to describe wave function snapshots based on network
theory. For many-body systems, these networks can become scale free - a
mathematical structure that has found tremendous success in a broad set of
fields, ranging from biology to epidemics to internet science. We demonstrate
the potential of applying these techniques to quantum science by introducing
protocols to extract the Kolmogorov complexity corresponding to the output of a
quantum simulator, and implementing tools for fully scalable cross-platform
certification based on similarity tests between networks. We demonstrate the
emergence of scale-free networks analyzing data from Rydberg quantum simulators
manipulating up to 100 atoms. We illustrate how, upon crossing a phase
transition, the system complexity decreases while correlation length increases
- a direct signature of build up of universal behavior in data space. Comparing
experiments with numerical simulations, we achieve cross-certification at the
wave-function level up to timescales of 4 $\mu$ s with a confidence level of
90%, and determine experimental calibration intervals with unprecedented
accuracy. Our framework is generically applicable to the output of quantum
computers and simulators with in situ access to the system wave function, and
requires probing accuracy and repetition rates accessible to most currently
available platforms. | T. Mendes-Santos, M. Schmitt, A. Angelone, A. Rodriguez, P. Scholl, H. J. Williams, D. Barredo, T. Lahaye, A. Browaeys, M. Heyl, M. Dalmonte | 2023-01-30T19:00:02Z | http://arxiv.org/abs/2301.13216v1 | # Wave function network description and Kolmogorov complexity of quantum many-body systems
###### Abstract
Programmable quantum devices are now able to probe wave functions at unprecedented levels. This is based on the ability to project the many-body state of atom and qubit arrays onto a measurement basis which produces snapshots of the system wave function. Extracting and processing information from such observations remains, however, an open quest. One often resorts to analyzing low-order correlation functions - that is, discarding most of the available information content. Here, we introduce wave function networks - a mathematical framework to describe wave function snapshots based on network theory. For many-body systems, these networks can become scale free - a mathematical structure that has found tremendous success and applications in a broad set of fields, ranging from biology to epidemics to internet science. We demonstrate the potential of applying these techniques to quantum science by introducing protocols to extract the Kolmogorov complexity corresponding to the output of a quantum simulator, and implementing tools for fully scalable cross-platform certification based on similarity tests between networks. We demonstrate the emergence of scale-free networks analyzing experimental data obtained with a Rydberg quantum simulator manipulating up to 100 atoms. Our approach illustrates how, upon crossing a phase transition, the simulator complexity decreases while correlation length increases - a direct signature of build up of universal behavior in data space. Comparing experiments with numerical simulations, we achieve cross-certification at the wave-function level up to timescales of \(4\mu\)s with a confidence level of 90%, and determine experimental calibration intervals with unprecedented accuracy. Our framework is generically applicable to the output of quantum computers and simulators with _in situ_ access to the system wave function, and requires probing accuracy and repetition rates accessible to most currently available platforms.
## I Introduction
Harnessing and probing many-body systems at the single-particle/qubit level are hallmark features of present-day quantum simulators and computers [1; 2; 3; 4]. One of the most striking demonstrations of these tools is the possibility of taking a large number of 'photos' of a many-body system, obtained via projective measurements of the full many-body wave function. While this flood of available observations could be seen as a blessing, it immediately raises practical as well as conceptual challenges: how can this large amount of information be processed without _a priori_ discarding some of it (in data-science language, before performing a dimensional reduction)? What can one learn that is not available from low-order correlation functions? Answering these questions requires a structured, mathematical understanding of the experimental wave function snapshots, one that addresses the _information limbo_ between traditional many-body theory based on few-point correlation functions [5] and full-fledged - but experimentally limited to few-particle systems - tomographic methods [6].
Here, we develop a theoretical framework to characterize and classify experimentally accessible collections of wave-function snapshots utilizing network theory; the framework is scalable and allows one to retain all available information. The backbone of our method is a mapping between collections of wave-function snapshots and a 'wave-function network', schematically depicted in Fig. 1, that is applicable to spin, bosonic, and fermionic systems. Utilizing well-established tools in network theory, we unravel several key characteristics of the underlying quantum wave function that are inaccessible by conventional means.
The pivotal finding is that the resulting quantum wave function networks can become scale free - a mathematical structure that has found widespread application in several fields, ranging from power distribution and internet networks to epidemics [7; 8; 9]. We demonstrate this property using experimental snapshots obtained on a Rydberg quantum simulator operating with more than 100 atoms [2; 10] and with large-scale numerical simulations using neural quantum states [11; 12]. We then argue for its generic applicability to state preparation protocols, and discuss how other types of networks - Erdos-Renyi [13] - can instead emerge if the resulting dynamics describes uncorrelated states. In terms of observables, required resources, and applicability regimes, our approach is complementary to other methods aimed at fully characterizing quantum states via snapshots, such as those based on classical shadows [14], randomized measurements [15; 16; 17], and chaotic dynamics [18; 19]. Its main distinctive features, which we elaborate upon below, are direct interpretability and straightforward scalability for strongly correlated, low-temperature states.
The correspondence between quantum simulator outputs and conventional network theory immediately enables a transfer of methods and concepts from previously disconnected fields. We leverage this connection to address two challenges in the field of quantum simulation. Firstly, we show that we are able to characterize the complexity of the quantum simulator output by determining its Kolmogorov complexity - the accepted absolute measure of the information content of finite objects [20; 21], which quantifies the (in-)compressibility of the quantum wave function information as contained in the snapshots. This allows us to demonstrate the emergence of critical behavior at the level of information complexity, directly probing at the wave function level the emergent simplicity dictated by renormalization group theory.

Secondly, we introduce a method to perform cross-platform verification of quantum simulators [22; 23]. The method is based on the full network information without the need to perform an exponentially increasing number of measurements for increasing system size, which is the case for generic cross-verification based on the density matrix [22; 23]. By means of the Epps-Singleton test [24] we identify, with statistical significance, a time scale beyond which cross-verification falters due to experimental imperfections not covered by our theoretical description. In addition, we provide statistically rigorous bounds for previously observed time-delay effects, which demonstrate the capability of our methods to identify systematic effects that are invisible to low-order correlation functions. Beyond these two demonstrative tools, the quantum wave function networks introduced in this work provide a new, generically applicable framework to probe and characterize the quantum many-body wave function accessible in a variety of atomic and solid-state quantum hardware, solely requiring in situ imaging of the many-body wave function.
## II Wave function networks: theoretical framework
In this section, we describe how data sets generated by a collection of wave function snapshots can be represented by a network structure with nodes and links. For the sake of simplicity, we consider a many-body system composed of spin-1/2 degrees of freedom defined on a two-dimensional lattice: the approach can be straightforwardly generalized to continuum theories, as well as to different types of local Hilbert spaces.
_Snapshot data set._ - Each wave function snapshot, labeled by an index \(j\), takes the form:
\[X_{j}[w]=(s_{1}^{j},s_{2}^{j},...s_{N}^{j}) \tag{1}\]
where \(s_{m}^{j}\) is the measured value of the spin at position \(m\). \(N\) is the total number of sites in the system, while \(w\) are the external parameters related to the snapshot - in our case the Hamiltonian couplings. Each of these configurations corresponds to a single data point embedded in a data space whose embedding dimension is \(N\). This is depicted in Fig. 1(a) with the three examples of green, orange and blue dots.
The data set we are interested in is formed by the collection of all available snapshots:
\[\mathbf{X}[w]=\{X_{j}\}=\{X_{1},X_{2},...,X_{N_{r}}\} \tag{2}\]
where \(N_{r}\) is the number of available snapshots, that is, the number of realizations. The data set might, in principle, include repetitions - e.g., \(X_{l}=X_{f}\) for some \(l\neq f\) - in particular, at very small volumes. It is possible to take care of them, as we detail in [25]. However, to simplify the remainder of the discussion here, we assume no repetitions are present.
_From data sets to wave function networks._ - We now discuss how to translate the wave function snapshot data sets into a network structure. There are two key choices that have to be made: (i) the selection of a proper metric in the embedding space, that allows one to compute distances between data points; and (ii) a criterion to activate links between data points, based solely on their distances.
The choice of a proper metric is an important aspect of the approach. Taking inspiration from recent results in the context of classical and quantum statistical mechanics models, we use the Hamming distance [26; 27]. Given two configurations \(X_{i},X_{j}\), such distance counts the number of spins that are aligned differently and reads
\[d(X_{i},X_{j})=\sum_{p=1}^{N}|s_{p}^{i}-s_{p}^{j}|. \tag{3}\]
The statistics of Hamming distances are related to arbitrary rank correlation functions between local degrees of freedom (i.e., \(s_{k}\))[26]. Hence, they are sensitive to short-range and long-range correlations alike, which justifies their use as a similarity measure to define links between
nodes. Specifically, we define a (geometric) network from our data sets by adopting the following procedure:
1. Each point \(X_{i}\) in the data set represents a node.
2. If two nodes are at distance \(d<R\), we draw a link.
3. The distance \(R\) is chosen in a way that is dependent on the number of samples taken and reflects the typical value of distances for a given set of external parameters \(w\). In particular, we define \(R\) as \[R=\langle r_{c}\rangle=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}r_{c}(i),\] (4) where \(r_{c}(i)\) is the distance between point \(X_{i}\) and its \(c\)-nearest neighbor.
An essential aspect of our approach is the choice of a cutoff \(R\) that avoids a large number of isolated nodes while keeping a non-trivial network structure (i.e., one in which the nodes are not simply fully connected). In particular, we show that our main conclusions are independent of the choice of \(R\) for a certain range of values of \(c\) in Eq. (4). We discuss this issue in detail in Appendix VII.1.
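For illustration, a minimal sketch of this construction (Hamming distances, cutoff from Eq. (4), and adjacency matrix) is given below; the random 0/1 snapshots are a stand-in assumption for actual wave function samples.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def build_wfn(X, c=1):
    """Adjacency matrix of the WFN for snapshots X of shape (N_r, N)."""
    n_spins = X.shape[1]
    D = squareform(pdist(X, metric="hamming")) * n_spins  # Hamming distances, Eq. (3)
    np.fill_diagonal(D, np.inf)                  # exclude self-distances
    r_c = np.sort(D, axis=1)[:, c - 1]           # distance to the c-th nearest neighbor
    R = r_c.mean()                               # cutoff R = <r_c>, Eq. (4)
    A = (D < R).astype(int)                      # draw a link whenever d < R
    return A, R

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 64))           # 500 illustrative 8x8 snapshots
A, R = build_wfn(X, c=1)
degrees = A.sum(axis=1)                          # degree k of every node
```

The degree sequence extracted here is the quantity analyzed through \(P_{k}\) in the remainder of the paper.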
### Network representation and correlations
At a naive level, one could expect that such wave function networks (WFNs) simply reflect the intrinsic randomness of the wave function sampling - that is, that they are ultimately generated by a Poissonian process. It turns out that this intuition is fundamentally incorrect.
In order to underpin the relation between network representation and correlations, we start by schematically illustrating the above procedure in Fig. 1(a) (i)-(iv). A graphical example of a network of spin-1/2 systems with cutoff radius equal to 1 (that is, only configurations differing by a single spin flip are connected) is depicted in Fig. 1(b): there, the black circle represents the Néel state, which is connected by a single spin flip - and, thus, minimal distance - to several other states, which in turn are not connected to any further states. This example allows us to intuitively connect physical properties to network ones: a wave function network that carries correlations will feature 'hubs', that is, a few states with many connections, alongside many states with few connections. Conversely, a random "infinite-temperature" state will likely feature a majority of states with an intermediate number of neighbors, and will feature neither hubs nor states with very few links.
The simple picture defined above is, _per se_, not particularly informative; however, it crucially sheds light on the classes of networks we can expect depending on how correlated the system is. This description of correlated states is reminiscent of several classes of scale-free networks, which are typically characterized by the probability distribution \(P_{k}\) associated with the number of connections \(k\) of each node (more commonly called the degree distribution) following a power law
\[P_{k}\propto k^{-\alpha}. \tag{5}\]
Such a function monotonically decreases with \(k\), and allows us to distinguish between the majority of nodes that have few links and the minority that have many links (see Fig. 1(b)). While the prominence of hubs seems mostly relevant to ordered states, it is in fact a property that is even more robust in the presence of very strong correlations - such as, e.g., those emerging at quantum critical points. Conversely, networks representing random states will not be scale free, and can be construed as Erdos-Renyi (ER) networks - where the probability \(P_{k}\) of a node having \(k\) neighbors is approximately given by a Poisson distribution [13].

Figure 1: Network description of many-body wave function snapshots. Panel _a)_: construction of the network. First, samples of a wave function are collected (i) and individually mapped onto the target data space (ii). All data are then merged into a single data structure (iii), which defines a set of points in the configuration data space. This data structure is then mapped onto the corresponding wave function network (iv) by drawing links in the network according to a cutoff distance \(R\) that is determined by the data structure and the choice of metric (see text). Panel _b)_: physical interpretation of the network structure. Within the network, the number of neighbors of given points follows a specific distribution. Points with a large number of links \(k\) (i.e., a large number of points within \(R\)) are hubs, and are indicated in darker colors. As an example, taking snapshots of a classical antiferromagnet below its critical temperature will feature the antiferromagnetic state as a hub (top cartoon), while doing so well above its critical temperature will lead to a graph with no hubs and random connections (bottom cartoon).
We emphasize that in the many-body regime, the number of snapshots \(N_{r}\) available from an experiment is typically insufficient to tomographically reconstruct the wave function, i.e., \(N_{r}\ll 2^{N}\). The WFN construction instead aims at a characterization of the state that focuses solely on its most important (yet unknown) degrees of freedom, not on its entire data structure. Our method is thus conceptually different from tomographic methods, including those based on specific ansatze.
### An illustrative example: quantum Ising model at equilibrium
Before discussing the experimental relevance of WFNs, we illustrate the emergence of scale-free networks in many-body systems with an example borrowed from equilibrium statistical mechanics. In Fig. 2, we show the degree distribution \(P_{k}\) obtained by sampling partition-function snapshots of the 2D quantum Ising model on a square lattice in the \(z\)-basis. The Hamiltonian reads:

\[H=-\sum_{\langle i,j\rangle}\sigma_{i}^{z}\sigma_{j}^{z}-g\sum_{j}\sigma_{j}^{x}. \tag{6}\]
It features a quantum phase transition at \(g_{c}\approx 3.04\) separating a non-correlated disordered phase (for \(g>g_{c}\)) from a ferromagnetically ordered state. The corresponding \(P_{k}\) is obtained by taking snapshots of the partition function, calculated via stochastic series expansion Monte Carlo simulations [28; 29] for a system of \(N=L\times L=8\times 8\) sites, at inverse temperature \(\beta=2L\), which in our calculations was high enough to observe convergence within statistical uncertainty of energy and squared magnetization, i.e., to reach the ground state regime. Hence, the generated data sets correspond to the ground-state WF snapshots described above.
Fig. 2(a) displays the results from \(N_{r}=10^{5}\) realizations. Deep in the paramagnetic phase, \(g=5.0\), there are only weak correlations: the corresponding network is very well described by a Poisson distribution with \(\langle k\rangle=1\). In the correlated regime close to the phase transition, which is also the most entangled one, the network is instead described by a scale-free structure. We note that such a scale-free structure is unrelated to the absence of scale at criticality (scale-free networks can still be compatible with the presence of finite real-space scales [7]).
Once the degree \(k\) becomes a sizeable fraction of the total size of the network, we observe deviations from a scale-free profile, as expected: above this size, the network properties are influenced by limited sampling [30]. To inspect this further, we plot in the lower panel \(P_{k}\) against \(k\) for various \(N_{r}\). We observe that the bending indeed originates from the finite number of samples, and that the curves for various \(N_{r}\) are all compatible with a single power law, in this case with exponent \(\alpha\simeq 2.4\). Changing the cutoff distance \(R\) used to build the WFN does not affect the power-law scaling of \(P_{k}\) for \(k\) above a certain threshold \(k_{c}\); see Appendices VII.1 and VII.2 for more details.
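As a sketch of how \(P_{k}\) and the exponent \(\alpha\) can be estimated in practice from a degree sequence (e.g., `degrees` from the construction sketch above), one can use logarithmic binning as in Fig. 2. The Poisson-distributed stand-in degrees and the least-squares slope below are illustrative simplifications; a careful fit would use a maximum-likelihood estimator.

```python
import numpy as np
from scipy.stats import poisson

def log_binned_pk(degrees, n_bins=20):
    """Log-binned estimate of the degree distribution P_k."""
    k = degrees[degrees > 0]
    edges = np.unique(np.logspace(0, np.log10(k.max() + 1), n_bins).astype(int))
    pk, edges = np.histogram(k, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centers
    keep = pk > 0
    return centers[keep], pk[keep]

degrees = np.random.default_rng(1).poisson(1.0, size=10**5)   # ER-like stand-in
k, pk = log_binned_pk(degrees)
alpha = -np.polyfit(np.log(k), np.log(pk), 1)[0]     # crude power-law slope
er_pk = poisson.pmf(k.astype(int), mu=degrees.mean())  # Erdos-Renyi benchmark
```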
At equilibrium, we expect the same dichotomic structure to appear generically in models that feature both weakly and strongly correlated regions. The key question we address below is whether such structures are purely theoretical constructions, or whether they can indeed be representative of the intricate dynamics taking place in quantum simulators, which are (i) off-equilibrium and open, and - a key difference from simulations - (ii) inherently probed with very high but not 100% fidelity.
Figure 2: Degree distribution, \(P_{k}\), for the WFN of the ground-state quantum Ising model. Panel (a) shows \(P_{k}\) of the WFN with \(N_{r}=10^{5}\) nodes for \(g=5.0\) and \(g=3.04\approx g_{c}\). In the paramagnetic region, the resulting network is compatible with a Poisson distribution (solid line, i.e., Erdos-Renyi network) with \(\langle k\rangle=1\). As expected, in the vicinity of the critical point, the WFN becomes scale free, with \(\alpha\simeq 2.4\) (dashed line). For comparison we compute \(P_{k}\) using both linear (triangles) and logarithmic (circles) histograms. Panel (b) shows \(P_{k}\) for different values of \(N_{r}\) for \(g\approx g_{c}\) using logarithmic histograms, again with the scale free distribution shown as a dashed line. In all cases we build the network using a cutoff \(R=\langle r_{1}\rangle\) (where \(\langle r_{1}\rangle\) is defined in Eq. (4)).
## III Experimental observation of Erdos-Renyi and scale-free wave function networks
### Experimental data and analysis of the network
We now discuss the network structure of quantum simulation experiments. We analyze a recent experiment focused on the quasi-adiabatic preparation of a large antiferromagnetic state using a Rydberg quantum simulator [10]. This protocol is of fundamental importance in quantum simulation and computing, and is very widely employed in atomic physics platforms. In addition, it typically features both a regime of no correlations (short times) and one of strong correlations, enabling us to test predictions based on both Erdos-Renyi and scale-free networks. Below, we summarize the main features of the experiment, which have been reported in [10].
The experiment consists of arrays of laser-cooled Rb atoms, individually trapped in optical tweezers separated by a distance \(a\). Each atom can be considered a pseudo-spin, with the ground state being \(\ket{\downarrow}\) and a Rydberg state being \(\ket{\uparrow}\). Initially, all atoms are prepared in \(\ket{\downarrow}\). The atoms are then laser-excited to Rydberg states via a two-photon transition, so that the effective time-dependent Hamiltonian describing the dynamics reads:
\[H(t)=\hbar\delta(t)\sum_{i}n_{i}+\frac{\Omega(t)}{2}\sum_{i}\sigma_{i}^{x}+ \sum_{ij}J_{ij}n_{i}n_{j} \tag{7}\]
with \(n_{i}=(\sigma_{i}^{z}+1)/2\), and \(\sigma_{i}^{\alpha}\) the Pauli matrices at site \(i\). Here, \(J_{ij}=C_{6}/r_{ij}^{6}\), as the atoms interact via the van der Waals interaction. This quantum spin model exhibits both paramagnetic and antiferromagnetic phases in its ground state; for a schematic phase diagram, see Fig. 3(a).
In the experiment, a dynamical process has been implemented which, upon slowly varying \(\Omega(t)\) and \(\delta(t)\) over time, transforms an initial paramagnetic state into an antiferromagnetic one, as depicted in Fig. 3(a). The adiabatic theorem guarantees that such a transformation is possible for ground states of systems with a nonzero gap whenever the parameter variations are sufficiently slow. Close to a continuous quantum phase transition, however, the gap closes for a thermodynamically large system and excitations are unavoidably generated. Importantly, the celebrated quantum Kibble-Zurek mechanism (QKZ) predicts that this defect generation and, on a more general level, the dynamical properties of crossing such a transition display universal behavior controlled by the underlying quantum phase transition [31; 32; 33]. In the context of two-dimensional systems, this has recently been described at the theoretical level [34], and signatures have been observed in Rydberg experiments [35]. For a finite-size system, such as the ones we deal with here, the gap always remains finite. Because of this, a crossover from a QKZ regime towards an adiabatic regime emerges upon lowering the velocity of the ramp [34]. In the experiment, an antiferromagnetic ordering pattern has been achieved with a correlation length of the order of the system diameter, so it is to be expected that the system resides in the crossover regime between QKZ and adiabaticity.

Figure 3: _Observation of scale-free wave-function networks in Rydberg quantum simulators._ Panel (a) shows a schematic ground-state phase diagram and the quasi-adiabatic state-preparation scheme: the inset shows the sweep shape, and the corresponding trajectory is represented by the dashed lines in the phase diagram. In the paramagnetic (PM) regime, one expects a network description compatible with an Erdos-Renyi network, while in the vicinity of the antiferromagnetic (AF) region, which contains the Kibble-Zurek regime, a scale-free network structure is expected with a power-law degree distribution, \(P_{k}\), as illustrated by the network structures. Panel (b) presents \(P_{k}\) vs \(k\) of the experimentally observed wave-function networks for a square lattice with \(L=8\). At short times, i.e., before crossing the phase transition, the distribution decreases exponentially (similar to an Erdos-Renyi degree distribution with \(\langle k\rangle\simeq 1\), represented by the dashed lines in the graphs). At later times (\(t>3\)\(\mu s\)), we observe a power-law decay over two orders of magnitude, limited only by a bending that is due to the finite value of \(N_{r}\). Panel (c) shows NQS simulations of this quasi-adiabatic protocol for the same square lattice. The scale-free behavior of \(P_{k}\) is again observed until one becomes sensitive to the effects of finite sampling. We note that the decay exponent satisfies \(\alpha<2\), signifying very stable wave-function network properties, which will be discussed later in the presence of defects. In all cases we consider WFNs with \(N_{r}=2500\) nodes.
In what follows, we will support the experimental data with numerically exact theory calculations, which will be key at a later stage in the cross-certification of the quantum simulator output. For that purpose, we will use neural quantum states (NQSs), which have recently been introduced as a novel class of variational wave functions for the quantum many-body problem [11]. Most importantly for the purpose of this work, recent major advances have pointed out a route to numerically calculate quantum many-body dynamics in interacting two-dimensional quantum matter beyond what is achievable with other state-of-the-art methods [12; 34]. For details on the numerical method, we refer to Refs. [12; 34] and to Appendix C.
Contrary to the work in [10], we consider two types of data sets for our network analysis here: in the first, we use post-selected data without any defects in the array, i.e., each trap contains exactly one atom. In the second, we instead consider data sets including a mean number of defects of \(\sim 3\%\), stemming from imperfect assembly of the atomic array [36]. The purpose of this second choice is that it allows us to make quantitative statements on the resilience of scale-free structures and, most importantly, on their significance in terms of information - and thus complexity - content.
_Scale-free and Erdos-Renyi networks._ - In Fig. 3(b), we plot the distribution \(P_{k}\) for defect-free experimental data for square lattices of size \(8\times 8\) and \(N_{r}=2500\) at different times. We identify two regimes:
(A) At short times, \(t=1.52\mu\)s, \(P_{k}\) decays exponentially with \(k\), and its distribution resembles that of a random ER network with \(\langle k\rangle\simeq 1\). This indicates that only limited correlations are present in the \(z\)-basis measurements.
(B) Upon approaching the quantum phase transition (\(t\sim 2.6\mu\)s) and at later times, the distribution changes drastically. In particular, we observe the emergence of a stable power-law profile with \(\alpha<2\) over almost two orders of magnitude, until at large \(k\) finite sampling with \(N_{r}<\infty\) introduces an inevitable cutoff in the form of an exponential decay. This phenomenology is characteristic of scale-free networks.
In Fig. 3(c) we include, as a comparison, numerically exact theoretical results for \(P_{k}\) obtained by means of NQS simulations. We utilize the same system parameters and number of samples as for the experimental data. The simulations capture exactly the same qualitative pattern as the experiment, already indicating that, for the depicted timescales, the effect of dissipation on the full many-body wave function is likely to be negligible, and validating the microscopic modelling at a quantitative level.
As depicted in Fig. 3, at large \(k\), deviations from power-law scaling become appreciable. Such deviations appear to be solely an effect of working with a finite number of samples \(N_{r}<\infty\), which in turn implies that the range of power-law behavior in \(P_{k}\) can be extended by increasing \(N_{r}\). In Fig. 4, we show the distribution \(P_{k}\) for three reference times \(t\) using data obtained with NQS. Both qualitatively and quantitatively, \(P_{k}\) exhibits the same features in all regimes: for ER graphs (Fig. 4(a)), increasing the number of nodes \(N_{r}\) yields essentially the same network structure (keeping \(\langle k\rangle\simeq 1\)). For scale-free networks, see Figs. 4(b,c), increasing the number of samples enlarges the regime of power-law behavior in \(k\), shifting the eventual bending, i.e., the deviation from the scale-free structure at large \(k\), to larger and larger \(k\).
_Robustness of quantum simulator outputs.-_ We observe that at late times, \(t>3\mu s\), the exponent \(\alpha\) of the power-law tail in \(P_{k}\) satisfies \(\alpha<2\) (see Fig. 4(c)). As known from network theory, scale-free networks with such an exponent exhibit information content that is very robust with respect to perturbations. We identify such robustness also in the experimental data: as can be seen in Fig. 5, the data sets with defects in the atomic array capture the same scaling behavior as those without defects. In analogy to network theory, this analysis provides an interesting tool to characterize the robustness of quantum simulators based solely on their outputs, whenever they are described by scale-free or ER networks. An important comment is in order: such small values of the power-law exponent are typically characteristic of finite networks. This is compatible with our theory, since in the infinite-sampling limit \(N_{r}\rightarrow\infty\) our network becomes infinitely large, and it will be unavoidable to generate repetitions of the same snapshot. Such repetitions, however, have been excluded from the beginning; including them would require adapting our approach by means of suitably weighted networks.
### Theory of wave function networks evolution over quasi-adiabatic state preparation
The scale-free and ER WFN phenomenologies we observe in both experiment and numerical simulations are not tied to the specific problem we explore here, but, as we argue in this section, are generic features of quasi-adiabatic state preparation protocols. Starting from an uncorrelated product state, it is natural to expect that
at short times one typically finds random networks of wave function snapshots, i.e., networks with an ER-type structure. An example is the case covered in this work, where we start from a product state with spins aligned in the \(z\)-direction. At short times, the unitary dynamics will generate a weak but noticeable superposition of other configurations with a few flipped spins, which we expect to look like local fluctuations such as those caused by dissipation or thermal noise. These are inherently random and should therefore yield an ER network with a Poisson-like degree distribution. For \(N_{r}\ll 2^{N}\), such a process is expected to generate a very sparse network with \(\langle k\rangle\simeq 1\), since the average distance between configurations is roughly constant.
Upon approaching the quantum phase transition, we observe the emergence of a scale-free network structure. The basic mechanism behind this can be understood by inspecting the metric introduced in Eq. (3), which imposes the fundamental underlying structure on our data sets. The network structure, which we probe through \(P_{k}\), is generated by correlations in the distances between different snapshots: only when such distances are correlated is it possible to find a power-law distribution \(P_{k}\) of nodes having connectivity \(k\). As we discuss in the following, these correlations in distances between nodes in the network might be linked to the real-space correlation length of the system. Upon entering the quantum-phase-transition regime, the system develops a large correlation length of the order of the system diameter, due to the almost adiabatic dynamics generated by the experimental protocol.
From previous work on the data analysis of snapshot measurements, such large real-space correlation lengths are expected to yield Pareto, i.e., power-law, distributions of distance measures in the data set [26; 27]. In this light, our observation of a scale-free network structure in \(P_{k}\) appears natural, in particular because \(P_{k}\) quantifies correlations between distances of network nodes. We note that the scale-free property of a scale-free network solely concerns \(P_{k}\) - indeed, other network properties may carry information that reflects the presence of a finite correlation length.
Once the quantum-phase-transition regime is reached, the system is effectively described by a large real-space correlation length. Let us note that the dynamical behavior of the considered quantum spin model is expected to be potentially much richer than in the mostly studied case of dimension \(D=1\). Upon entering the broken-symmetry phase, the system will eventually thermalize, implying an infinite correlation length. In analogy to classical systems, the temporal process of generating a long-range-ordered state is typically associated with coarsening and phase-ordering kinetics [37], which also come with universal power-law behavior. In turn, this means that upon crossing the quantum phase transition the correlation length is expected to keep growing in time, also deep in the broken-symmetry phase. When linking large real-space correlation lengths with scale-free network structures, this would imply that the scale-free network structure might survive also beyond the quantum phase transition region. This underlines the universal character of the data-structure dynamics observed in the experiments. The reasoning above also applies to first-order phase transitions, as long as the correlation length at the critical point is larger than the system diameter (so that, in fact, the correlation functions in the system cannot discern differences with respect to a continuous transition).

Figure 4: Dependence of the degree distribution \(P_{k}\) on the total number of nodes in the WFN, \(N_{r}\), at different values of \(t\). In the scale-free regime, the maximum degree \(k_{max}\) of the WFN exhibits a strong dependence on \(N_{r}\). The WFNs are obtained from data sets generated by NQS simulations of the Rydberg experiments.

Figure 5: _Robustness of quantum simulator outputs.-_ Comparison of the degree distribution \(P_{k}\) of experimental WFNs generated without defects and with a mean density of defects of \(\sim 3\%\). We consider \(N_{r}=800\) in both cases. The results are qualitatively equivalent.
## IV Application 1: Kolmogorov complexity of wave function snapshots
The output of a quantum simulator obtained via wave-function snapshots is, _per se_, a classical object. How complex must a classical computer program be in order to reproduce this output? This is quantified by the so-called Kolmogorov complexity (KC) [20; 21].
For generic strings, computing the KC is an NP-hard problem. The same holds true for generic graphs, where the KC is quantified by the Hausdorff dimension [38]. This implies that computing the Kolmogorov complexity of wave-function snapshots is an extremely challenging task that cannot be undertaken in general.
However, as noted in the previous sections, quantum simulators often generate scale-free networks. For these, there exist non-parametric learning algorithms that allow us to estimate the intrinsic dimension of the data points, and thus the KC, in a scale-independent manner. In particular, we utilize the 2-NN algorithm [39; 40], which has already been applied to determine the critical properties of both classical and quantum statistical-mechanics partition functions [26; 27].
The starting point is to consider, for each point \(X_{j}\) in our data set, the distances to its first and second nearest neighbors, \(r_{1}(X_{j})\) and \(r_{2}(X_{j})\), respectively. Under the condition that the data set is locally uniform up to the range of second nearest neighbors, it has been shown in Ref. [39] that the cumulative distribution function \(F^{\rm emp}\) of \(\mu=r_{2}(X_{j})/r_{1}(X_{j})\) obeys:
\[I_{d}=-\frac{\ln\left[1-F^{\rm emp}(\mu)\right]}{\ln\left(\mu\right)}, \tag{8}\]
where \(I_{d}\) is the intrinsic dimension of the data set. The intrinsic dimension quantifies the number of degrees of freedom required to capture the information content of the data set. While this is in principle a length-scale-dependent property, our estimator directly focuses on the physically relevant distances determined by the sampling of the many-body wave function.
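A minimal sketch of this estimator is given below, assuming a pairwise distance matrix \(D\) with \(\infty\) on the diagonal (e.g., from the WFN construction sketch above); per Eq. (8), \(I_{d}\) is extracted as the slope of \(-\ln[1-F^{\rm emp}(\mu)]\) versus \(\ln\mu\) through the origin. The synthetic 3-D data set is an illustrative check, not data from the experiment.

```python
import numpy as np
from scipy.spatial.distance import cdist

def two_nn_dimension(D):
    """2-NN estimate of the intrinsic dimension I_d from a distance matrix D."""
    r = np.sort(D, axis=1)                   # row-wise sorted distances
    mu = np.sort(r[:, 1] / r[:, 0])          # ratios r2 / r1 for every point
    F = np.arange(1, mu.size + 1) / mu.size  # empirical CDF of mu
    x, y = np.log(mu[:-1]), -np.log(1.0 - F[:-1])   # drop last point, where F = 1
    return float(np.sum(x * y) / np.sum(x * x))     # slope through the origin, Eq. (8)

# Illustrative check: points on a 3-D manifold embedded in 10 dimensions.
rng = np.random.default_rng(2)
P = np.zeros((1000, 10))
P[:, :3] = rng.normal(size=(1000, 3))
D = cdist(P, P)
np.fill_diagonal(D, np.inf)
print(two_nn_dimension(D))                   # close to 3
```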
In Fig. 6(a), we depict the relation between \(F^{\rm emp}\) and \(\mu\) obtained from (a1) experiments and (a2) NQS simulations. In both cases, and for all times considered, the distribution is compatible with a Pareto distribution (additional oscillations appear at short times, likely due to the very simple structure of the network). These results guarantee the applicability of the 2-NN approach [39].
In Fig. 6(b), we show the time dependence of the KC, as measured by the intrinsic dimension, across the ramp. Both experimental and simulation data clearly display two regimes: (i) up to 2 \(\mu\)s, the complexity increases. This effect is trivial: the initial state is very close to a product state along the \(z\)-direction, so that at short times there is just a single dominant snapshot as the measurement outcome; the unitary evolution will necessarily generate additional correlations afterwards, thus increasing the complexity. (ii) From 2 \(\mu\)s onward, the complexity becomes a monotonically decreasing function of time. This second regime is a manifestation of the emergence of universal behavior while crossing a phase transition. Following quasi-adiabatic dynamics, the correlation length monotonically increases as a function of time: this implies that fewer variables are actually required to describe the network properties - at equilibrium, these would just be the critical exponents and the amplitudes of correlation functions. These observations are thus a direct manifestation of the emergent simplicity associated with universality at critical points [27], and represent, to the best of our knowledge, the first experimental demonstration of the link between complexity and quantum critical behavior.
We note that, after some time, the NQS simulations predict a faster decrease of the complexity than observed in the experimental data. We attribute this to the fact that the simulations can only partly keep track of the time evolution: the neural-network structure utilizes a smaller number of effective variables - compatible with a decrease of the KC - than those describing the time evolution realized in the experiment.

Figure 6: Complexity scaling in quantum simulators. Panels (a1-a2): cumulative distributions against \(\mu=r_{2}/r_{1}\) for selected times. Panels (a1) and (a2) show results for the experimental and NQS-simulation data sets, respectively. The quality of a description in terms of a Pareto distribution (lines) increases as a function of time, for both simulation and experiment. Panels (b1-b2): time dependence of the intrinsic dimension \(I_{d}\) along the quasi-adiabatic time evolution for the experimental and simulated data sets. For \(t>2\mu s\), the complexity of the WFN is a monotonically decreasing function of time in both experiments and simulations, capturing the emergent simplicity (decrease in the number of degrees of freedom) that is expected from the emergence of critical behavior.
## V Application 2: Cross-Certification Based on Network Properties
One of the key challenges for quantum computers and simulators is to verify their correct functioning or to certify the validity of their output. One basic idea in the field is cross-certification, which consists of directly comparing the output of one quantum machine with that of another - either quantum or classical. Recent protocols based on random unitary circuits, which aim to compare full ground-state wave functions, have been experimentally demonstrated to be superior to tomographic methods [41; 16]. However, the required resources still scale exponentially with system size, making these methods inapplicable to large devices.
Here, we take a complementary angle and focus on a comparison based on wave-function snapshots, which takes into account the maximum amount of information extractable with currently available resources. At the formal level, our goal is to compare two distributions in the limit \(N_{r}\ll 2^{N}\), i.e., the regime relevant to experiments exploring many-body problems (conversely, for \(N\simeq 10\), the regime \(N_{r}\simeq 2^{N}\) can be reached by brute force [41]). Clearly, a configuration-by-configuration comparison of samples from two distributions is meaningless unless the states are very close to a product state: for generic states, the probability of sampling the same set of configurations decreases exponentially with system size.
The network representation we use allows us to bypass this limitation. Specifically, we wish to compare two WFNs obtained either from two experiments or from an experiment and a simulation, see Fig. 7(a). For the concrete case considered here, let us point out that the numerical simulation is by itself a formidable challenge, which we again address by means of the NQS approach [11; 12; 34]; see also Appendix C for details. Finding and quantifying similarities between two networks is a problem widely explored in different applications of network theory, and is particularly useful for data sets that cannot be distinguished by direct inspection or low-order correlations [42]. In our case, such comparisons between networks are directly tied to the choice of metric used to define the WFN. For scale-free WFNs this is particularly suitable, as we are then guaranteed to have chosen a metric that captures correlations in the system.
As a simple and efficient way to compare experimental and simulated WFNs, we test the hypothesis that the corresponding degree distributions are equal by employing a nonparametric test known as the Epps-Singleton (ES) test [43]. The latter allows us to identify, with statistical significance, when two WFNs are different. In the following, since we employ this test to establish the identity of two WFNs, we take as statistically significant evidence for our claim those cases in which the p-value of the test satisfies \(p_{\rm value}>0.1\), i.e., in which both experimental and simulation data are compatible with a common probability distribution.
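A sketch of this certification step, using SciPy's implementation of the Epps-Singleton statistic, is given below; the Poisson-distributed degree sequences are placeholders standing in for the degree sequences of the experimental and simulated WFNs.

```python
import numpy as np
from scipy.stats import epps_singleton_2samp

rng = np.random.default_rng(3)
degrees_exp = rng.poisson(1.0, size=2500)   # placeholder experimental degrees
degrees_sim = rng.poisson(1.0, size=2500)   # placeholder simulated degrees

stat, p_value = epps_singleton_2samp(degrees_exp, degrees_sim)
compatible = p_value > 0.1                  # certification criterion used in the text
print(p_value, compatible)
```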
The results are summarized in Fig. 7. In panels (a1-a6), we present the corresponding \(p_{\rm value}\) of the ES tests obtained by comparing the experimental data at a given time \(t_{\rm exp}\) with the simulation data over a given time window (i.e., \(1<t_{\rm sim}<4.2~{}\mu\)s). This allows us to identify time windows where the quantum and classical simulators can be cross-certified with statistical significance in terms of the maximum amount of information available from their wave-function networks. Interestingly, we note that the cross-certification agreement occurs in time windows that are shifted from the actual experimental times reported in the legends of Figs. 7(a1-a6).

Figure 7: Comparing the experimental and simulated WFNs, as illustrated in panel (a). In particular, we use the Epps-Singleton two-sample test to check the hypothesis that the experimental and simulated degree distributions \(P_{k}\) are equal. For each experimental time \(t_{\rm exp}\), we perform ES tests against the simulated results at the different times \(t_{\rm sim}\). Both WFNs have \(N_{r}=2500\) nodes, and we choose a cutoff distance \(R=\langle r_{1}\rangle\) to generate them. Panels (a1-a6) show the corresponding \(p_{\rm value}\) as a function of \(t_{\rm sim}\); results with \(p_{\rm value}>0.1\) (marked by the dashed lines) are interpreted as statistically significant. To cross-check our analysis, we also consider in panel (b) the order parameter \(m_{\rm stg}\) as a function of \(t_{\rm exp}\). Each simulated result corresponds to one of the times \(t_{\rm sim}\) for which \(p_{\rm value}>0.1\). This analysis allows us to identify \(t^{*}_{\rm sim}\), the times at which the best agreement between simulation and experimental data exists; the results corresponding to \(t^{*}_{\rm sim}\) are marked by the black stars in panels (a1-a6). Finally, panel (c) shows the corresponding time shift, \(\Delta t^{*}=t^{*}_{\rm sim}-t_{\rm exp}\), between experiments and NQS simulations (see text).
The fact that the cross-certification agreement occurs at a time \(t_{\text{exp}}\) different from the times \(t_{\text{sim}}\) considered in the simulations can be attributed to miscalibrations of the Hamiltonian parameters (e.g., \(\Omega(t)\) and \(\delta(t)\)). Similar observations are made when comparing experiments with simulations of physical observables based on matrix product states [10]. Although such miscalibrations do not affect the actual physics, quantifying the corresponding time shift is essential for the cross-validation of quantum simulators.
In general, we find that the ES test can provide, for a given \(t_{\rm exp}\), multiple candidate simulation times \(t_{\rm sim}\) for which \(p_{\text{value}}>0.1\); see Fig. 7. In order to select among these candidates, we perform a second test by computing an independent quantity for each of them. Here, we consider the staggered magnetization \(m_{\rm stg}=\sum_{i_{x},i_{y}}(-1)^{i_{x}+i_{y}}\sigma_{i_{x},i_{y}}^{z}\), including the results for all candidate simulation times (see Fig. 7(b)). We then choose the candidate \(t_{\rm sim}^{*}\) for which the simulated order parameter is closest to the experimental data. As can be seen in Fig. 7, up to \(t_{\rm exp}=3.0\)\(\mu\)s this procedure is capable of cross-certifying the experimental and theoretical data within the achievable accuracy, which is limited, for instance, by the finite time grid of the theoretical data. For intermediate times \(3.0\lesssim t_{\text{exp}}\lesssim 4.0\)\(\mu\)s, small deviations start to emerge, whereas for \(t_{\rm exp}\gtrsim 4.0\)\(\mu\)s the cross-certification fails, which could be caused by dissipation effects in the experiment that are not included in the theory calculation, or by a decreasing accuracy of our variational computation (similar to what is observed in the complexity scaling).
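For reference, below is a minimal sketch of this second test quantity, the staggered magnetization defined above, evaluated on a single \(L\times L\) snapshot of \(\sigma^{z}\) values; the random snapshot is illustrative.

```python
import numpy as np

def staggered_magnetization(snapshot):
    """m_stg = sum_{ix,iy} (-1)^(ix+iy) * sigma^z_{ix,iy} for one L x L snapshot."""
    ix, iy = np.indices(snapshot.shape)
    return float(np.sum((-1.0) ** (ix + iy) * snapshot))

snap = np.random.default_rng(4).choice([-1, 1], size=(8, 8))  # illustrative snapshot
print(staggered_magnetization(snap))
```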
This scheme further defines an optimal time shift \(\Delta t^{*}=t_{\rm sim}^{*}-t_{\rm exp}\) for the experimental data; Fig. 7(c) shows the estimated values of \(\Delta t^{*}\). Importantly, in the time interval in which the quantum simulator can be cross-certified, we identify a small time dependence of \(\Delta t^{*}\) that has not been addressed previously. We note that the procedure does not work well for \(t<1.5\mu\)s, as expected: there, the network is not yet scale-free, so a direct comparison can only provide rough qualitative guidance.
## VI Conclusions and outlook
We have introduced a network-theory framework to interpret the maximum amount of information extractable from quantum simulators - wave-function snapshots. Remarkably, such networks can become scale-free for strongly correlated states of matter, and are of direct experimental relevance, as we demonstrate with data from a large-scale Rydberg-atom-array experiment. We have illustrated the power of the network description with two applications: demonstrating the scaling of complexity across a quantum phase transition in the Kibble-Zurek regime, and cross-certifying the wave functions of a quantum and a classical simulator up to system sizes that had not been attained previously.
Our work opens up a series of research directions based on a transfer of methods and concepts between network and quantum science. At the big-picture level, it would be important to determine to what extent Erdos-Renyi and scale-free networks are able to characterize quantum simulators and computers. While our framework provides strong evidence that this works at and close to equilibrium, the structure of wave-function networks in genuinely out-of-equilibrium situations is presently completely unknown. Understanding the network properties corresponding to such dynamics might provide qualitative insights into how equilibrium is established at the wave-function level, complementing current efforts focusing on observables, and providing direct links between dynamics and Kolmogorov complexity. Going beyond the case of unitary dynamics, understanding the role of dissipation might help characterize the stability of quantum dynamics to noise, which will ultimately always kick in and - very likely - imprint an Erdos-Renyi structure onto the system wave function.
In addition to conceptual insights, our framework is ideally suited to developing scalable quantum information tools. Examples range from improving cross-certification methods to, most relevantly, applying them to data sets from large-scale experiments, for which computing direct wave-function overlaps is hopeless. On a broader level, we believe that the parallelism between two very active, but so far disconnected, fields could be an ideal playground for developing new insights into how information is associated with many-body phenomena.
###### Acknowledgements.
We thank G. Bianconi, J. Grilli, M. Marsili, R. Panda, R. Verdel, V. Vitale and P. Zoller for insightful discussions. The work of M. D. and A. A. was partly supported by the ERC under grant number 758329 (AGEnTh), and by the MIUR Programme FARE (MEPH). M. D. further acknowledges funding within the QuantERA II Programme that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733. D.B. acknowledges support from MCIN/AEI/10.13039/501100011033 (RYC2018-025348-I and NextGenerationEU PRTR-C17.I1). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 853443). Moreover, the authors gratefully acknowledge
the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at Julich Supercomputing Centre (JSC) [44]. This work is supported by the European Union's Horizon 2020 research and innovation program under grant agreement No. 817482 (PASQuanS), the Agence Nationale de la Recherche (ANR, project RYBOTIN), and the European Research Council (Advanced grant No. 101018511-ATARAXIA).
## VII Appendices
### Wave Network structure and the choice of the distance cutoff \(R\)
As described in Sec. II, the structure of the wave function network (WFN) is defined by choosing a cutoff distance, \(R\), in an embedded space defined by the Hamming distances, which allow us to define links between nodes. We now discuss in more detail the influence of \(R\) on the observation that WFNs can exhibit a scale-free structure.
Let us consider the list of all pairs of distances \(d(X_{i},X_{j})\) between nodes \(X_{i}\) and \(X_{j}\). One crucial aspect to consider is that the choice of \(R\) is bounded by the minimum and maximum distances on such a list (call them \(d_{min}\) and \(d_{max}\), respectively): \(R<d_{min}\) would generate a network with all the nodes isolated, while \(R>d_{max}\) would generate a featureless, fully connected network. The choice of \(R\) introduced in Eq. (4) naturally takes into account the typical distance scale in the embedded space, which depends on \(N_{r}\) or the Hamiltonian parameters.
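As a concrete illustration, the following NumPy sketch builds a WFN from a set of snapshots and extracts its degree distribution. The snapshots here are random stand-ins for experimental data, and the cutoff \(R\) is taken as the mean distance to the 10th nearest neighbour, our reading of the \(\langle r_{10}\rangle\) notation used above (Eq. (4) is not reproduced in this appendix, so this choice is an assumption).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshots: N_r projective measurements of N spins (0/1 values).
N_r, N = 500, 64
X = rng.integers(0, 2, size=(N_r, N))

# Pairwise Hamming distances between all snapshots.
d = np.count_nonzero(X[:, None, :] != X[None, :, :], axis=-1)

# Cutoff R: mean distance to the 10th nearest neighbour (assumed <r_10>).
R = np.sort(d, axis=1)[:, 10].mean()

# Adjacency matrix (link if d < R, no self-loops) and degree distribution P_k.
A = (d < R) & ~np.eye(N_r, dtype=bool)
k = A.sum(axis=1)
P_k = np.bincount(k) / N_r

off_diag = d[~np.eye(N_r, dtype=bool)]
print(f"d_min = {off_diag.min()}, d_max = {off_diag.max()}, R = {R:.1f}")
print(f"isolated-node fraction f_N0 = {np.mean(k == 0):.3f}")
```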
Another important aspect is that we deal with distances in a "high-dimensional" embedded space (the embedding dimension is equal to the number of spins, \(N\)), where the so-called curse of dimensionality is expected to play a fundamental role. For instance, one could expect that the difference between the maximum and the minimum distance (i.e., \(d_{max}-d_{min}\)) would become indiscernible compared to any reasonable choice of \(R\)[45], given that the volume of a high-dimensional space increases so fast that the available data becomes sparse when \(N_{r}\ll 2^{N}\). If this were the case, we would have observed just a featureless, fully connected network. In the correlated regime, however, we observe non-trivial network structures, which can be attributed to the fact that, in reality, the intrinsic dimension of the WFNs is much lower than the dimension of the embedding space [26; 27].
Let us discuss how changes in \(R\) influence the scale-free WFN. Figure 8 shows the degree distribution \(P_{k}\) associated with the WFN generated at the quantum critical point of the quantum Ising model. By increasing \(R\), we observe two main effects. First, \(P_{k}\) is shifted toward larger values of \(k\). Second, the threshold \(k_{c}\) above which \(P_{k}\) starts to behave as a power law increases, see Fig. 8 (a). Specifically, we observe that our data for different values of \(R\) collapse onto the same curve when we rescale the \(x\)-axis; see Fig. 8 (b). This result indicates that for the scale-free WFN, the main effect of increasing \(R\) is to enlarge the cutoff \(k_{c}\) below which the network is not scale-free. In addition, we show the fraction of isolated nodes, \(f_{N_{0}}\), as \(R\) increases. For \(R=\langle r_{10}\rangle\) less than 1% of the nodes are isolated, yet we still observe power-law behavior for almost one decade in \(k\). Overall, we notice that for \(g\approx g_{c}\) we can always observe a scale-free WFN for a wide range of choices of \(R\).
### Power law fitting of the degree distribution
One of our key results in this work is that wave function networks (WFNs) emerging in the vicinity of the quantum critical points are scale-free networks. As we discuss in this section, our conclusion is based on the empirical observation that it is very likely that a power law function describes the corresponding degree distribution, i.e., \(P_{k}\sim k^{-\alpha}\).
First, given a WFN, we count the number of links \(k\) of each node in the network. The corresponding list of values is used to generate the histogram \(P_{k}\). Second, we fit \(P_{k}\) by employing the approach proposed in Ref. [46]. Specifically, in such an approach, one (i) determines the optimal scaling range, i.e., \(k>k_{min}\), and the value of the power-law exponent \(\alpha\) by selecting the fit that minimizes the Kolmogorov-Smirnov (KS) distance between the empirical data and a set of trial fits; (ii) the power-law distribution (i.e., with the selected \(k_{min}\) and \(\alpha\)) is then used to generate many synthetic data sets, which are compared with the power-law form using the KS distance. The fraction of the synthetic KS distances that are larger than the empirical KS distance defines the p-value of the test statistic. If the resulting p-value is greater than \(0.1\), the power law is a plausible hypothesis for \(P_{k}\); otherwise, it is rejected [46]. To implement the described approach, we use the Python powerlaw package of Ref. [47].
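The procedure can be summarized in a self-contained NumPy sketch (in practice, the powerlaw package of Ref. [47] automates the fitting step; the continuous approximation, the candidate grid, and the stand-in data below are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tail(n, alpha, k_min):
    """Inverse-CDF sampling of a continuous power law with exponent alpha."""
    return k_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))

def fit_alpha(sample, k_min):
    """Maximum-likelihood exponent for the tail k >= k_min (continuous approx.)."""
    tail = sample[sample >= k_min]
    return 1.0 + tail.size / np.sum(np.log(tail / k_min))

def ks_distance(sample, alpha, k_min):
    """KS distance between the empirical tail and the fitted power-law CDF."""
    tail = np.sort(sample[sample >= k_min])
    emp = np.arange(1, tail.size + 1) / tail.size
    model = 1.0 - (tail / k_min) ** (1.0 - alpha)
    return np.max(np.abs(emp - model))

# Stand-in degree data; in practice, use the degrees k of the WFN nodes.
degrees = sample_tail(5000, 2.5, 3.0)

# Step (i): scan k_min candidates, keep the fit minimizing the KS distance.
candidates = np.quantile(degrees, np.linspace(0.0, 0.9, 50))
D_emp, alpha, k_min = min(
    (ks_distance(degrees, fit_alpha(degrees, k), k), fit_alpha(degrees, k), k)
    for k in candidates)

# Step (ii): p-value from synthetic data sets drawn from the fitted power law.
n_tail = int(np.sum(degrees >= k_min))
D_syn = [ks_distance(sample_tail(n_tail, alpha, k_min), alpha, k_min)
         for _ in range(1000)]
p_value = np.mean(np.array(D_syn) > D_emp)   # p > 0.1: power law is plausible
print(f"alpha = {alpha:.2f}, k_min = {k_min:.1f}, p = {p_value:.3f}")
```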
As shown in Fig. 9, when considering a fitting of \(P_{k}\) within an interval between \(k_{min}=3\) and \(k_{max}=120\), the results of the KS test satisfy the criterion \(p_{\text{value}}>0.1\). We note that for obtaining \(p_{\text{value}}>0.1\) we have to impose a maximum cutoff \(k_{max}\) for \(P_{k}\). The departure from the scale-free behavior for \(k>k_{max}\) can be attributed to the finite size of the network, \(N_{r}\). Based on the analysis presented in Fig. 2 (b), we indeed observe that the range in \(k\) over which the power-law behavior is observed increases with \(N_{r}\). For future works, it will also be interesting to investigate the role of the system size (i.e., \(L=\sqrt{N}\)) in the scale-free behavior of \(P_{k}\), in particular, whether it is possible to establish a finite-size scaling expression addressing the role of both the network size \(N_{r}\) and the system size \(L\).
### Simulations with Neural Quantum States
Neural quantum states (NQS) have emerged recently as a new versatile class of variational wave functions [11]. The goal is to find an efficient representation of a many-body wave function \(|\psi\rangle\) in the form of a parameterized function \(\psi_{\mathbf{\theta}}(\mathbf{s})\) that maps a computational basis configuration \(\mathbf{s}=(s_{1},\ldots,s_{N})\) to a complex number, such that
\[|\psi_{\mathbf{\theta}}\rangle=\sum_{\mathbf{s}}\psi_{\mathbf{\theta}}(\mathbf{s})\,|\mathbf{s} \rangle\enspace. \tag{9}\]
Here, \(|\mathbf{s}\rangle=|s_{1}\rangle\otimes\ldots\otimes|s_{N}\rangle\) denotes the computational basis states of a system with \(N\) degrees of freedom, and for our purposes \(s_{i}\in\{\uparrow,\downarrow\}\). There are a number of appealing reasons to choose \(\psi_{\mathbf{\theta}}(\mathbf{s})\) in the form of an artificial neural network (ANN) to render the ansatz a NQS. Most importantly, rigorous representation theorems guarantee that any possible wave function can be approximated by an ANN in the limit of large network sizes [48; 49; 50; 51]. This means that the approach is numerically exact in the sense that the accuracy of results can be certified self-consistently by convergence checks. While the general function approximation theorems do not tell us whether the representation in the form of an ANN is efficient, it has been shown that NQS cover some volume-law entangled states and correlated states of systems in two spatial dimensions, which are notoriously difficult to capture with established methods [52; 53; 54; 55; 56; 57]. Finally, the complexity of the algorithms involved scales gently with system size and number of parameters, and large parts are amenable to large-scale parallelization to take advantage of distributed GPU clusters [58].
Figure 8: Degree distribution, \(P_{k}\), for the WFNs of the ground-state quantum Ising model at the critical point, \(g=g_{c}\). Panel (a) shows \(P_{k}\) of the WFNs with different values of \(R=\langle r_{c}\rangle\) (see Eq. (4)). We also show the fraction of isolated nodes, \(f_{N_{0}}\). In panel (b) we consider a phenomenological collapse for the different \(P_{k}\).
Figure 9: Power law fitting of the degree distribution presented in Fig. 2. We use Kolmogorov-Smirnov statistics to perform the fitting and define the p-value (see text).
While the variational ansatz with a limited number of parameters solves the problem of efficient representation, the efficient extraction of information from the wave function is achieved by Monte Carlo sampling. For example, the quantum expectation value of an operator \(\hat{O}\) can be rewritten as
\[\frac{\langle\psi_{\mathbf{\theta}}|\hat{O}|\psi_{\mathbf{\theta}}\rangle}{\langle\psi_{\mathbf{\theta}}|\psi_{\mathbf{\theta}}\rangle}=\sum_{\mathbf{s}}\frac{|\psi_{\mathbf{\theta}}(\mathbf{s})|^{2}}{\langle\psi_{\mathbf{\theta}}|\psi_{\mathbf{\theta}}\rangle}O_{\text{loc}}(\mathbf{s}) \tag{10}\]
with the local estimator \(O_{\text{loc}}(\mathbf{s})=\sum_{\mathbf{s}^{\prime}}O_{\mathbf{s},\mathbf{s}^{\prime}}\frac{\psi_{\mathbf{\theta}}(\mathbf{s}^{\prime})}{\psi_{\mathbf{\theta}}(\mathbf{s})}\) that can be computed efficiently for local operators with only a polynomial number of non-vanishing matrix elements \(O_{\mathbf{s},\mathbf{s}^{\prime}}=\langle\mathbf{s}|\hat{O}|\mathbf{s}^{\prime}\rangle\). This means that the expectation value can be estimated efficiently by Monte Carlo sampling of the Born probability distribution \(\frac{|\psi_{\mathbf{\theta}}(\mathbf{s})|^{2}}{\langle\psi_{\mathbf{\theta}}|\psi_{\mathbf{\theta}}\rangle}\)[59], and the same holds for all quantities of interest appearing in NQS algorithms. Notice that the only way to access the wave function in quantum simulation experiments is via projective measurements, which are likewise a sampling of the Born distribution; this is a very useful parallel when attempting a direct comparison of the obtained data, because obtaining samples from the wave function could turn out to be very costly with alternative numerical approaches [10].
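To make the sampling scheme concrete, the following minimal sketch estimates \(\langle\sigma^{x}_{0}\rangle\) for a toy log-amplitude `log_psi` (a hypothetical stand-in, not the RNN ansatz used in this work) via Metropolis sampling of the Born distribution and the local estimator above:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_psi(s):
    """Hypothetical stand-in wave function (a simple product ansatz)."""
    return np.sum((-0.3 + 0.1j) * s)

def local_sx(s, j):
    """Local estimator of sigma^x_j: psi(s')/psi(s) with spin j flipped."""
    sp = s.copy()
    sp[j] *= -1
    return np.exp(log_psi(sp) - log_psi(s))

N, n_burn, n_samples = 10, 1000, 20000
s = rng.choice([-1, 1], size=N)
est = []
for step in range(n_burn + n_samples):
    j = rng.integers(N)
    sp = s.copy(); sp[j] *= -1
    # Metropolis acceptance on the Born distribution |psi(s)|^2.
    if rng.random() < np.exp(2.0 * (log_psi(sp) - log_psi(s)).real):
        s = sp
    if step >= n_burn:
        est.append(local_sx(s, 0).real)
print("<sigma^x_0> ~", np.mean(est))
```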
An optimal approximate solution of the Schrodinger equation \(i\frac{d}{dt}\ket{\psi_{\mathbf{\theta}}}=\hat{H}\ket{\psi_{\mathbf{\theta}}}\) within the manifold of wave functions \(\ket{\psi_{\mathbf{\theta}}}\) is obtained via a time-dependent variational principle (TDVP) [11; 12; 60]. This leads to an ordinary differential equation prescribing the time evolution of the variational parameters,
\[\text{Im}\big{[}S_{k,k^{\prime}}\big{]}\dot{\theta}_{k^{\prime}}=-\text{Im} \big{[}iF_{k}\big{]} \tag{11}\]
with the quantum metric tensor \(S_{k,k^{\prime}}=\langle\partial_{\theta_{k}}\psi_{\mathbf{\theta}}|\partial_{\theta_{k^{\prime}}}\psi_{\mathbf{\theta}}\rangle-\langle\partial_{\theta_{k}}\psi_{\mathbf{\theta}}|\psi_{\mathbf{\theta}}\rangle\langle\psi_{\mathbf{\theta}}|\partial_{\theta_{k^{\prime}}}\psi_{\mathbf{\theta}}\rangle\) and the force vector \(F_{k}=\langle\partial_{\theta_{k}}\psi_{\mathbf{\theta}}|\hat{H}|\psi_{\mathbf{\theta}}\rangle-\langle\partial_{\theta_{k}}\psi_{\mathbf{\theta}}|\psi_{\mathbf{\theta}}\rangle\langle\psi_{\mathbf{\theta}}|\hat{H}|\psi_{\mathbf{\theta}}\rangle\); notice that the imaginary part appears on both sides of the equation as we are considering real parameters [58; 60]. Hence, the time-evolved wave function starting from a given initial state can be obtained by integrating Eq. (11). In previous works it was found that careful regularization is crucial to achieve state-of-the-art results in this way [12; 34]. For the present work we developed a new way of phrasing and solving the variational problem, which we call the _conditional_ TDVP. The details of this approach will be described in a separate manuscript [61]. All results presented here were obtained in this way.
The network architecture used in our simulation is a variant of the recurrent neural network (RNN) for two-dimensional systems introduced in Ref. [62]. The structure of this architecture is depicted schematically in Fig. 10. The starting point is a one-hot encoding \(\mathbf{\sigma}_{i,j}\) of the local spin configurations \(s_{i,j}\), i.e., \(\mathbf{\sigma}_{i,j}=(1,0)\) if \(s_{i,j}=\uparrow\) or \(\mathbf{\sigma}_{i,j}=(0,1)\) if \(s_{i,j}=\downarrow\). The neural network is then evaluated by traversing the two-dimensional lattice in a snake-like manner. Let us denote the \(k\)-th lattice site index along the snake path as \((i_{k},j_{k})\) and assume that the linear dimension of the lattice is \(L\). At each lattice site, a conditional single qubit state \(\psi(s_{i_{k},j_{k}}|s_{1,1},\dots,s_{i_{k-1},j_{k-1}})\) is generated in the way detailed below. From these conditional states, the coefficient of the many-body wave function is obtained as
\[\psi(\mathbf{s})=\prod_{k=1}^{L^{2}}\psi(s_{i_{k},j_{k}}|s_{1,1},\dots,s_{i_{k-1},j _{k-1}}). \tag{12}\]
For the conditional states at every lattice site, a local hidden state \(\mathbf{h}^{(i,j)}\) is computed based on the spin configuration and hidden state of two neighboring sites as
\[\begin{split} h^{(i_{k},j_{k})}_{l}&=f\big{(}W^{ H}_{lm}h^{(i_{k-1},j_{k-1})}_{m}+W^{V}_{lm}h^{(i_{k-L},j_{k-L})}_{m}\big{)}\\ &\quad+f\big{(}W^{S_{1}}_{lm}\sigma^{(i_{k-1},j_{k-1})}_{m}+W^{S _{2}}_{lm}\sigma^{(i_{k-L},j_{k-L})}_{m}\big{)}\.\end{split} \tag{13}\]
Here, \(f\) denotes the non-linear activation function and \(W^{(\cdot)}_{lm}\) denote the weights of the dense layers; double indices indicate summation. At the boundaries, where the required neighboring sites do not exist, the corresponding input is replaced by zeros.
Figure 10: Schematic depiction of the used neural network architecture. For evaluation the lattice is traversed along the path indicated by the blue arrow. A hidden state \(h^{ij}\) is computed at each site using the one-hot encoded local basis configurations and the hidden states of previously visited neighboring sites as indicated by the pink arrows, which correspond to dense layers. From the hidden state a correlated contribution to the conditional qubit state, \(\bar{q}^{\,ij}\), is computed and an additional uncorrelated contribution \(\tilde{q}^{\,ij}\) is added to it to obtain the logarithmic conditional amplitudes \(\chi^{ij}\) after normalization as noted in Eq. (14).
Next, the hidden state is processed by a dense layer with two-dimensional output \((\bar{q}_{\rm R}^{ij},\bar{q}_{\rm I}^{ij})\), corresponding to the real and imaginary parts of a complex number \(\bar{q}^{ij}\). This number constitutes the correlated contribution to the logarithmic \(\uparrow\)-coefficient of the conditional local qubit state, up to normalization and a global phase. In addition, we introduced one complex-valued variational parameter \(\tilde{q}^{ij}=\tilde{q}_{\rm R}^{ij}+i\tilde{q}_{\rm I}^{ij}\) for each lattice site, which corresponds to a contribution to the conditional qubit state that is uncorrelated. With \(q^{ij}=\bar{q}^{ij}+\tilde{q}^{ij}\) we finally produce the logarithmic conditional wave function amplitudes
\[\chi_{\uparrow}^{ij} =\frac{1}{2}\log\left(\frac{\exp\left(q_{\rm R}^{ij}\right)}{1+ \exp\left(q_{\rm R}^{ij}\right)}\right)+iq_{\rm I}^{ij}\] \[\chi_{\downarrow}^{ij} =\frac{1}{2}\log\left(\frac{1}{1+\exp\left(q_{\rm R}^{ij}\right)} \right)+iq_{\rm I}^{ij} \tag{14}\]
such that
\[\psi(s_{i_{k},j_{k}}|s_{1,1},\ldots,s_{i_{k-1},j_{k-1}})=\exp\left(\chi_{s_{i _{k},j_{k}}}^{i_{k}j_{k}}\right)\,. \tag{15}\]
The uncorrelated contribution \(\tilde{q}^{ij}\) extends the standard RNN architecture. We introduced it because we found it difficult with the plain RNN to capture the initial part of the control protocol, where only the orientation of the uncorrelated qubit states is rotated and hardly any correlations are produced. In our architecture \(\tilde{q}^{ij}\) can fully capture the product state, such that the job of the RNN is just to account for correlations on top of it. Including \(\tilde{q}^{ij}\) does not affect the autoregressive property of the ansatz introduced by the decomposition into a product of conditionals (12). This means that the architecture allows for direct sampling of uncorrelated configurations at the cost of a single network evaluation per sample [62; 63].
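A minimal NumPy sketch of the wave function evaluation, Eqs. (12)-(15), is given below. The weights are random stand-ins, the \(\tanh\) activation is an assumption (the text does not specify \(f\)), and, following Eq. (13) literally, the "vertical" input is taken from the site \(L\) steps back along the snake path:

```python
import numpy as np

rng = np.random.default_rng(0)
L, H = 4, 8                    # linear lattice size and hidden-state dimension
f = np.tanh                    # assumed activation function in Eq. (13)

# Random stand-in weights for the dense layers in Eqs. (13)-(14).
WH, WV = rng.normal(0, 0.1, (2, H, H))
WS1, WS2 = rng.normal(0, 0.1, (2, H, 2))
Wq = rng.normal(0, 0.1, (2, H))            # hidden state -> (q_R, q_I)
q_tilde = rng.normal(0, 0.1, (L, L, 2))    # uncorrelated site contributions

def snake(L):
    """Lattice site indices (i, j) visited along the snake path."""
    return [(i, j if i % 2 == 0 else L - 1 - j) for i in range(L) for j in range(L)]

def log_psi(s):
    """log psi(s) from Eqs. (12)-(15) for a configuration s in {+1,-1}^(L,L)."""
    one_hot = np.stack([s == 1, s == -1], axis=-1).astype(float)
    h = np.zeros((L, L, H))
    path, total = snake(L), 0.0
    for k, (i, j) in enumerate(path):
        # Inputs: previous site along the path and the site L steps back,
        # as in Eq. (13); missing neighbors at the boundary become zeros.
        hH = h[path[k - 1]] if k >= 1 else np.zeros(H)
        hV = h[path[k - L]] if k >= L else np.zeros(H)
        sH = one_hot[path[k - 1]] if k >= 1 else np.zeros(2)
        sV = one_hot[path[k - L]] if k >= L else np.zeros(2)
        h[i, j] = f(WH @ hH + WV @ hV) + f(WS1 @ sH + WS2 @ sV)   # Eq. (13)
        qR, qI = Wq @ h[i, j] + q_tilde[i, j]                     # q = q_bar + q_tilde
        p_up = np.exp(qR) / (1.0 + np.exp(qR))
        chi = 0.5 * np.log(p_up if s[i, j] == 1 else 1.0 - p_up) + 1j * qI  # Eq. (14)
        total += chi                             # Eq. (15): product -> sum of logs
    return total

print(log_psi(rng.choice([-1, 1], size=(L, L))))
```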
For the simulations we incorporate further experimental details, extending the elementary Rydberg atom Hamiltonian given in Eq. (7) of the main text. We include spatial laser intensity profiles that were extracted from the experimental setup such that the considered model Hamiltonian reads
\[H(t)=\hbar\sum_{k,l=0}^{L-1}\delta_{k}(t)n_{(kl)}+\frac{\hbar}{2}\sum_{k,l=0}^{L-1}\Omega_{k}(t)\sigma_{(kl)}^{x}+\sum_{i<j}U_{ij}n_{i}n_{j}\,. \tag{16}\]
Here, we introduced the notation \((kl)\equiv kL+l\) to map between double and single indices of the lattice sites; accordingly, the lasers shining in along one of the lattice dimensions exhibit an intensity profile perpendicular to that direction. The spatial and temporal form of the control fields during the considered protocol are shown in Fig. 11(a,b). The coupling is \(U_{ij}=U/\Delta r_{ij}^{6}\) with nearest-neighbor interaction energy \(U/h=1.947\)MHz, where \(h\) is Planck's constant, and \(\Delta r_{(kl)(mn)}=\sqrt{(k-m)^{2}+(l-n)^{2}}\) the Euclidean distance between lattice sites.
At the beginning of the protocol all atoms are prepared in their ground state, meaning that the initial state in the spin language is a polarized state \(|\psi(t=0)\rangle=|\downarrow,\ldots,\downarrow\rangle\). The initial part of the protocol mostly consists of a nearly adiabatic rotation of the polarization. This situation is difficult to address with NQS when using a fixed computational basis, because polarized states that align with the computational basis are hard to encode with NQS. Therefore, we implemented our simulation in a time-dependent frame \(W(t)=\exp(-i\alpha(t)\sum_{i}\sigma_{i}^{y})\) with \(\alpha(t)\) as shown in Fig. 11(c), such that polarizations that align with the computational basis are avoided throughout the time evolution.
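For concreteness, a small sketch of the frame change is shown below; the piecewise form of \(\alpha(t)\) outside the ramp interval is read off Fig. 11(c) and is therefore an assumption, and the single-qubit factor follows the expression for \(W(t)\) as written (no factor \(1/2\) in the exponent):

```python
import numpy as np

t0, t1 = 0.8e-6, 1.0e-6   # ramp interval [s], from Fig. 11(c)

def alpha(t):
    """Basis-rotation angle alpha(t): constant -pi/2 before t0, zero after t1."""
    if t <= t0:
        return -np.pi / 2
    if t >= t1:
        return 0.0
    return -np.pi / 2 * np.cos(np.pi / 2 * (t - t0) / (t1 - t0)) ** 2

def W_single(t):
    """Single-qubit factor of W(t) = exp(-i alpha(t) sum_i sigma_i^y)."""
    a = alpha(t)
    # exp(-i a sigma^y) = cos(a) I - i sin(a) sigma^y, a real rotation matrix.
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

print(W_single(0.9e-6))   # frame halfway through the ramp
```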
Figure 11: (a,b) Control protocols of the external fields \(\Omega_{k}(t)\) and \(\delta_{k}(t)\). (c) Time-dependence of \(\alpha(t)\), which parameterizes the time-dependent choice of the computational basis as described in the text. Initially, the quantization axis aligns with \(\sigma^{x}\), before it is rotated on the time interval between \(t_{0}=0.8\mu\)s and \(t_{1}=1\mu\)s with \(\alpha(t)=-\frac{\pi}{2}\cos^{2}\left(\frac{\pi}{2}(t-t_{0})/(t_{1}-t_{0})\right)\) to align with the \(\sigma^{z}\) quantization axis. |
2310.05335 | The rotating excitons in two-dimensional materials: Valley Zeeman effect
and chirality | We propose the rotational dynamics of the intralayer and interlayer excitons
with their inherent momenta of inertia in the monolayer and bilayer transition
metal dichalcogenides, respectively, where the new chirality of exciton is
endowed by the rotational angular momentum, namely, the formations of left- and
right-handed excitons at the +K and -K valleys, respectively. We find that
angular momentum exchange between excitons and their surrounding phononic bath
results in the large fluctuation of the effective g-factor and the asymmetry of
valley Zeeman splitting observed in most recent experiments, both of which
sensitively depend on the magnetic moments provided by the phononic
environment. This rotating exciton model not only proposes a new controllable
knob in valleytronics, but opens the door to explore the angular momentum
exchange of the chiral quasiparticles with the many-body environment. | Yu Cui, Xin-Jun Ma, Jia-Pei Deng, Shao-Juan Li, Ran-Bo Yang, Zhi-Qing Li, Zi-Wu Wang | 2023-10-09T01:38:44Z | http://arxiv.org/abs/2310.05335v1 | # The rotating excitons in two-dimensional materials: Valley Zeeman effect and chirality
###### Abstract
We propose the rotational dynamics of the intralayer and interlayer excitons with their inherent momenta of inertia in the monolayer and bilayer transition metal dichalcogenides, respectively, where a new chirality of the exciton is endowed by the rotational angular momentum, namely, the formation of left- and right-handed excitons at the +K and -K valleys, respectively. We find that angular momentum exchange between excitons and their surrounding phononic bath results in the large fluctuation of the effective g-factor and the asymmetry of valley Zeeman splitting observed in most recent experiments, both of which sensitively depend on the magnetic moments provided by the phononic environment. This rotating exciton model not only proposes a new controllable knob in valleytronics, but also opens the door to exploring the angular momentum exchange of chiral quasiparticles with the many-body environment.
\(Introduction.\)--A series of extraordinary optical and electric properties in transition metal dichalcogenides (TMDs) are dominated by excitons in different spin, valley and layer configurations[1; 2; 3; 4; 5; 6]. In particular, both intralayer excitons in monolayer and interlayer excitons in bilayer structures are endowed with a valley degree of freedom (valley pseudospin) at two inequivalent but energy-degenerate \(\pm\)K valleys[7; 8]. This valley pseudospin, like real spin, is associated with a valley magnetic moment, giving rise to exciton Zeeman splitting in the presence of an external magnetic field[9; 10; 11; 12], which provides not only an attractive method of breaking the valley degeneracy, but also a powerful lever to exploit the fundamental physical properties of the valley states, as well as to develop new approaches to valleytronic control[13; 14].
The magnitude of this valley Zeeman splitting can be characterized by the effective g-factor (g\({}_{eff}\)) of the exciton, whose predicted value is -4 (or +4) including the contributions from the intra- and intercellular orbital magnetic moments of the band structures[15; 16]. However, there exist obvious disagreements between the theoretical prediction and the experimental measurements. To explain this discrepancy, several microscopic models have been proposed, e.g., the interplay among the magnetic moment of the transition metal \(d_{x^{2}-y^{2}}\pm\)\(id_{xy}\) orbitals[17; 18], the valley magnetic moment and the lattice contribution stemming from the Berry curvature should be considered[19; 20; 21]; strain induces a hybridization of the direct and indirect excitons, giving rise to a renormalization of g\({}_{eff}\)[22]; and, instead of the local approximation, a distinct reduction of g\({}_{eff}\) can be obtained when full Bloch states are included in first-principles calculations based on the Bethe-Salpeter equation[23]. For the larger g\({}_{eff}\) of the interlayer exciton, a mechanism based on the brightening of forbidden optical transitions induced by the moire potential has also been suggested[24]. Nevertheless, the accuracy of the developed models is rather poor compared with the experimental data. The underlying physics for this factor remains a subject of hot debate.
In this letter, we investigate the rotational motion of the intralayer and interlayer excitons with their inherent momenta of inertia in monolayer and bilayer TMDs, respectively. This rotational degree of freedom gives rise to an additional orbital angular momentum in the out-of-plane direction due to the difference of effective masses between electron and hole, adding a finite magnetic moment that enhances the exciton Zeeman splitting upon interaction with an external magnetic field. This picture enables us, on the one hand, to define a new chirality of the exciton through the left- and right-handed rotations at the +K and -K valleys, respectively, as shown in Fig. 1, and, on the other hand, to explore the angular momentum transfer between excitons and the phononic bath. We find that the values of g\({}_{eff}\) predicted by this model are in excellent agreement with experimental measurements for both intralayer and interlayer excitons. More importantly, the clockwise and anticlockwise motions of the phononic bath can be used to resolve a hitherto unexplained puzzle: the asymmetry of the valley splitting.
\(The\)\(Hamiltonian\)\(of\)\(the\)\(rotating\)\(exciton.\)--We model an exciton as an electric dipole undergoing rotational motion before its recombination, as sketched in Fig. 1. For an exciton in a two-dimensional material, this rotational degree of freedom confers an out-of-plane orbital angular momentum with opposite signs depending on the rotational direction[16; 23; 25]. Meanwhile, the rotating exciton is immersed in the phononic bath, leading to the exchange of angular momenta between them. In the presence of an external magnetic field, the total Hamiltonian is given by[26; 27; 28; 29]
\[\hat{\mathcal{H}}=\hbar\xi_{0}\hat{\mathbf{L}}_{z}^{2}+\mu_{B}g_{\kappa}\hat{\mathbf{B}}\cdot\hat{\mathbf{L}}_{z}-\mu_{B}g_{0}\hat{\mathbf{B}}\cdot\hat{\mathfrak{L}}_{z}^{0}\] \[+\sum_{q\lambda}\overline{\hbar\omega}_{\nu}\hat{b}_{q\lambda}^{\dagger}\hat{b}_{q\lambda}+\sum_{q\lambda}\mathrm{V}_{\lambda}(q,r)\left[e^{i\lambda\hat{\varphi}}\hat{b}_{q\lambda}+e^{-i\lambda\hat{\varphi}}\hat{b}_{q\lambda}^{\dagger}\right], \tag{1}\]
where the first term is the kinetic energy of a rotating exciton with the rotational constant \(\xi_{0}=\hbar/(2I)\); \(I=\eta r^{2}\) is the moment of inertia depending on the reduced mass \(\eta^{-1}=m_{e}^{-1}+m_{h}^{-1}\) and the relative distance \(r\) between the electron and the hole, where \(m_{e}\) (\(m_{h}\)) is the electron (hole) effective mass; \(\hat{L}_{z}=-i\partial/\partial\varphi\) is the angular momentum operator with eigenvalues \(l_{z}=0,\pm 1,\pm 2\ldots\) and eigenenergies \(E_{l_{z}}=\hbar\xi_{0}l_{z}^{2}\)[30]. The sign \(\pm\) represents the two opposite directions of this angular momentum in the out-of-plane direction of two-dimensional materials, corresponding to the left- and right-handed excitons at the +K and -K valleys, respectively, and representing a new chirality of the exciton, which is also corroborated by the formation of valley excitons at the +K and -K valleys upon absorption of left- and right-handed circularly polarized light[31; 32]. It is precisely this angular momentum that endows a magnetic moment resulting in the interaction between the rotating exciton and the external magnetic field, thus giving rise to the additional valley Zeeman effect, described by the second term \(\mu_{B}g_{\kappa}\hat{\mathbf{B}}\cdot\hat{\mathbf{L}}_{z}\) (the detailed derivation is given in supplemental materials Part I), where \(\mu_{B}=e\hbar/(2m_{0}c)\) is the Bohr magneton, \(g_{\kappa}=m_{0}(m_{h}-m_{e})/(m_{e}m_{h})\) is the gyromagnetic ratio of the rotating exciton (\(m_{0}\) is the free electron mass), and \(\hat{\mathbf{B}}\) is the magnetic field vector. In other words, the magnetic moment of the rotating exciton is equivalent to that of a single particle rotating around a fixed axis with radius \(r\) and effective charge \(e^{*}=e(m_{h}-m_{e})/(m_{h}+m_{e})\); see the supplemental materials Part II for details. We will see that, in fact, this magnetic moment is very tiny for the intralayer exciton because of the small difference of effective masses between electron and hole in TMDs. For the interlayer exciton, however, it plays a key role in modulating the valley Zeeman splitting because of the tunability of the ratio of effective masses between electron and hole, which reside in two individual layers, respectively. The third term represents the intrinsic exciton valley splitting with the Lande factor \(g_{0}=1\), which mainly stems from the contribution of the \(d\)-orbital of the transition-metal atom with the magnetic quantum number \(\mathfrak{L}_{z}^{0}=+2\) at the +K valley and \(\mathfrak{L}_{z}^{0}=-2\) at the -K valley, yielding the ideal value \(g_{eff}^{0}=-4\), which has been predicted by several theoretical models and confirmed by experiments[15; 16]. The fourth term describes the kinetic energy of the phononic bath with \(\overline{\hbar\omega}_{\nu}=\sqrt{\hbar\omega_{\nu}^{j}\hbar\omega_{\nu}^{j^{\prime}}}\) being the phonon energy depending on the hosting layer index \(j\) (\(j^{\prime}\)) for the hole (electron). For the intralayer exciton, the same layer index \(j=j^{\prime}\) applies. For the interlayer exciton in bilayer TMDs, the electron and hole reside in different layers, so \(j=j^{\prime}\) and \(j\neq j^{\prime}\) correspond to the homobilayer and heterobilayer structures, respectively. Here \(q=|\mathbf{q}|\) is the scalar representation of the phonon wave vector, satisfying the relation \(\sum_{q}\equiv\int dq\), and \(\lambda\) represents the phonon angular momentum.
The corresponding creation and annihilation operators, \(\hat{b}_{\mathbf{q}}^{\dagger}\) and \(\hat{b}_{\mathbf{q}}\), are expressed in polar coordinates, \(\hat{b}_{q\lambda}^{\dagger}\) and \(\hat{b}_{q\lambda}\)[29; 33]. The last term describes the interaction between the rotating exciton and phonons. The angular momentum-dependent coupling strength \(\mathrm{V}_{\lambda}(q,r)\) is given by
\[V_{\lambda}(q,r)=\sqrt{\frac{q}{2\pi}}\left[\mathcal{M}_{h}^{j}J_{\lambda}\left( -\beta_{1}qr\right)-\mathcal{M}_{e}^{j^{\prime}}J_{\lambda}\left(\beta_{2}qr \right)\right], \tag{2}\]
depending on the microscopic details of the two-body interaction \(\mathcal{M}_{h(e)}^{j(j^{\prime})}=[e^{2}\alpha Z_{0}\hbar\omega_{\nu}^{j(j^{\prime})}/(2\mathbb{A}\varepsilon_{0})]^{1/2}\) between the hole (electron) and the \(\nu\)-th branch of phonon modes[34; 35], where \(Z_{0}\) is the monolayer thickness, \(\mathbb{A}\) is the quantization area in the monolayer materials, and \(\varepsilon_{0}\) is the permittivity of vacuum.
Figure 1: (a) Schematic diagrams of the left- and right-handed rotating intralayer excitons in monolayer transition metal dichalcogenides (TMDs) at the +K and -K valleys, respectively, where an out-of-plane rotational angular momentum \(\hat{\mathbf{L}}_{z}\) with opposite signs is formed. For a rotating exciton, the electron and hole induce opposite magnetic moments, but they cannot cancel each other out due to the difference of effective masses between them. Thus, a finite magnetic moment is produced, which can be equivalently regarded as an effective charge \(e^{*}=e(m_{h}-m_{e})/(m_{h}+m_{e})\) rotating around a fixed axis with radius \(r\) (the relations between this effective charge and the valley Zeeman term are given in supplemental materials Part II). Similar to the intralayer excitons, two rotating interlayer excitons in bilayer TMDs are shown in (b). \(D\) is the interlayer distance between the two hosting layers.
\(\alpha\) denotes the strength of the coupling between the rotating exciton and its surrounding phononic bath and is treated as a parameter tunable over a wide range, for the following reasons: (i) different branches of phonon modes contribute to the coupling together; (ii) the strength can be tuned significantly by the dielectric environment of the monolayer and bilayer structures, e.g., the various choices of substrate and encapsulation materials, and by structural parameters, e.g., the internal distances between the hosting monolayer and the substrates[34; 35; 36]; (iii) the moire potential induced by the stacking angle not only gives rise to new phonon modes, but also enhances the exciton-phonon coupling remarkably due to the strong quantum confinement effect[37; 38]. \(J_{\lambda}(-\beta_{1}qr)\) and \(J_{\lambda}(\beta_{2}qr)\) are Bessel functions of the first kind, where \(\beta_{1}=m_{h}/m\) and \(\beta_{2}=m_{e}/m\) for the hole and electron, respectively (\(m\) is the total mass).
To show the significant influence of phonon angular momenta on the rotating particle, a variational ansatz for an exciton rotating in the phononic bath is introduced based on the angulon model developed by Schmidt and Lemeshko[26; 33]:
\[\left|\Psi\right\rangle=\sqrt{C}\left|j_{z}\right\rangle\left|0\right\rangle+ \sum_{q\lambda}\beta_{q\lambda}\left|j_{z}-\lambda\right\rangle\hat{b}_{q \lambda}^{\dagger}\left|0\right\rangle, \tag{3}\]
where \(\hat{J}_{z}=\hat{L}_{z}+\hat{\Lambda}_{z}\) is the total angular momentum operator of the system with eigenvalues \(j_{z}=0,\pm 1,\pm 2,\ldots\), and \(\hat{\Lambda}_{z}\) is the collective angular momentum operator of the phononic bath. \(\sqrt{C}\) and \(\beta_{q\lambda}\) are the variational parameters with the normalization condition \(\left|C\right|+\sum_{q\lambda}\left|\beta_{q\lambda}\right|^{2}=1\), and \(\left|0\right\rangle\) represents the vacuum of phonons. Here, angular momentum conservation requires \(l_{z}+\lambda=j_{z}\). By minimizing the functional \(\left\langle\Psi\right|\hat{\mathcal{H}}-E\left|\Psi\right\rangle\) with respect to the parameters \(\sqrt{C}^{*}\) and \(\beta_{q\lambda}^{*}\), we obtain the valley-dependent eigenenergies of the system (the detailed mathematical steps are given in the supplemental materials Part III)
\[E_{\pm\mathrm{K}}=\hbar\xi_{0}l_{z}^{2}+\mu_{B}\mathrm{g}_{\kappa}^{*}Bl_{z}^{\pm\mathrm{K}}-\mu_{B}\mathrm{g}_{0}B\mathfrak{L}_{z}^{0,\pm\mathrm{K}}+\overline{\hbar\omega}_{\nu}-\sum\nolimits_{l_{z}}^{(1)}(E), \tag{4}\]
with
\[\mathrm{g}_{\kappa}^{*}=\left[1+\sum_{q\lambda}\frac{\lambda|V_{\lambda}(q,r) |^{2}}{l_{z}\big{(}\hbar\xi_{0}j_{z}^{2}-\overline{\hbar\omega}_{\nu}-\hbar \xi_{0}l_{z}^{2}\big{)}^{2}}\right]\mathrm{g}_{\kappa},\]
\[\sum\nolimits_{l_{z}}^{(1)}(E)=\sum_{q\lambda}\frac{|V_{\lambda}(q,r)|^{2}}{ \hbar\xi_{0}j_{z}^{2}-\overline{\hbar\omega}_{\nu}-\hbar\xi_{0}l_{z}^{2}}.\]
Obviously, the valley Zeeman splitting is determined by the second and third terms in Eq. (4); the energy level diagram showing their contributions to the splitting is shown in Fig. S1 in the supplemental materials[29]. The magnitude of the valley-dependent energy splitting can be rewritten as \(E_{\pm\mathrm{K}}^{S}=\mu_{B}\mathrm{g}_{\kappa}^{*}Bl_{z}^{\pm\mathrm{K}}-\mu_{B}\mathrm{g}_{0}B\mathfrak{L}_{z}^{0,\pm\mathrm{K}}\). In the following sections, the lowest rotational states \(l_{z}^{+\mathrm{K}}=+1\) (\(l_{z}^{-\mathrm{K}}=-1\)) for the +K (-K) valley are adopted. The impact of the phonon magnetic moment on the valley splitting is reflected by the renormalization \(\mathrm{g}_{\kappa}^{*}\) in the second term. Consequently, the effective \(g-\)factor of the exciton is obtained as \(\mathrm{g}_{eff}=(E_{+\mathrm{K}}^{S}-E_{-\mathrm{K}}^{S})/(\mu_{B}B)\). The last term is the self-energy of the rotating exciton due to the angulon effect, which, in fact, only gives a tiny contribution to the valley splitting, but plays a crucial role in inducing the fine structure of the spectra of rotating particles in a many-body environment. Here, the first-order approximation of this term is adopted (see the supplemental materials Part III).
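A numerical sketch of the renormalization formula for \(\mathrm{g}_{\kappa}^{*}\) is given below; all parameter values are hypothetical placeholders (the actual values are in Tables S1 and S2 of the supplemental materials), and we interpret the conservation rule \(j_{z}=l_{z}+\lambda\) inside the sum over \(\lambda\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv  # Bessel function of the first kind, J_lambda

# Hypothetical parameters, for illustration only (see Tables S1/S2 for real ones).
hbar_xi0 = 0.5              # rotational constant [meV]
hbar_w = 30.0               # phonon energy [meV]
M_h, M_e = 1.0, 0.8         # hole/electron coupling amplitudes [meV]
beta1, beta2, r = 0.6, 0.4, 1.0  # mass fractions m_h/m, m_e/m and radius [nm]

def V2(q, lam):
    """|V_lambda(q, r)|^2 from Eq. (2)."""
    amp = np.sqrt(q / (2 * np.pi)) * (M_h * jv(lam, -beta1 * q * r)
                                      - M_e * jv(lam, beta2 * q * r))
    return amp ** 2

def g_star(g_kappa, l_z, lambdas=(-1, +1), q_max=50.0):
    """Renormalized gyromagnetic ratio g_kappa^*, with j_z = l_z + lambda."""
    corr = 0.0
    for lam in lambdas:
        j_z = l_z + lam
        denom = (hbar_xi0 * j_z**2 - hbar_w - hbar_xi0 * l_z**2) ** 2
        integral = quad(lambda q: V2(q, lam), 0.0, q_max)[0]
        corr += lam * integral / (l_z * denom)
    return (1.0 + corr) * g_kappa

print(g_star(g_kappa=0.2, l_z=+1))   # +K valley, lowest rotational state
```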
We mainly discuss the renormalization of \(g-\)factors for the rotating intralayer and interlayer excitons in four typical monolayer TMDs and their bilayer structures, where phonon angular momenta \(\lambda=-1\) and \(\lambda=+1\) are selected in the calculations[39], representing the clockwise and anti-clockwise rotations of the phononic bath with respect to the motion of the exciton, respectively. The adopted values of other parameters for these materials are shown in Table S1 and Table S2 in the supplemental materials[29]. In addition, we only consider the A-type excitons in this work and assume that B-type excitons behave similarly.
\(Exciton\)\(\mathrm{g}_{eff}\)\(in\)\(monolayer\) TMDs.---Fig. 2(a) shows the Zeeman splitting of intralayer excitons as a function of the magnetic field for four well-known monolayer TMDs, under the assumption that the orientation of the phonon angular momentum always remains opposite to that of the right- and left-handed rotating excitons at the +K and -K valleys, respectively. Compared to the normal valley Zeeman splitting (purple solid lines for \(\mathrm{g}_{eff}=-4\))[15; 16], the slopes of the energy shift decrease markedly under the influence of the transfer of phonon angular momentum, which also directly demonstrates the existence of the phonon magnetic moment, whose magnitude can even be evaluated quantitatively. In addition, the slopes of the energy shift for the W-based monolayers are flatter than those for the Mo-based ones. This behavior can be attributed to two factors: the stronger exciton-phonon coupling and the larger value of the gyromagnetic ratio \(\mathrm{g}_{\kappa}\) stemming from the larger difference of effective masses between electron and hole in W-based monolayers (see Table S1), both of which lead to a larger correction to the exciton magnetic moment.
Another anomalous phenomenon, the asymmetric distribution of the Zeeman shifts at the two valleys in monolayer TMDs, has been widely observed in earlier experiments[20; 23]. To explain it at the microscopic level, several advanced theoretical models based on first-principles calculations have been put forward, e.g., Caruso _et al._ expressed the out-of-plane component of the orbital angular momentum of an exciton in terms of the Bethe-Salpeter equation[23], and Deilmann _et al._ developed a new approach merging the contribution of full Bloch states into the excitonic magnetic moments[40]. However, effective simulations of the experimental measurements are still lacking. Fig. 2(b) shows the valley Zeeman shift of the rotating exciton as a function of the magnetic field in monolayer WS\({}_{2}\) (solid lines), along with magneto-reflectance spectroscopy data from Ref. [20] (pink dots), where the phonon angular momentum \(\lambda=+1\) is assumed at both the +K and -K valleys. Namely, the direction of the phonon angular momentum is the same as that of the exciton angular momentum at the +K valley, but reversed at the -K valley, as illustrated in the insets. One can see that the experimental data are fitted excellently with the negative and positive magnetic moments provided by the phonon angular momentum for the +K and -K valley excitons, respectively. The quantum state \(\lambda=-1\) has a similar effect. This implies that (i) the rotation of the valley exciton is not just hindered by the phononic bath, but can also be assisted by the surrounding environment; (ii) the magnitude of the Zeeman shift can vary over a wide range because the phononic bath contributes either positively or negatively to the exciton magnetic moment in each valley, which may provide a potential explanation for the large fluctuation of g\({}_{eff}\) in experiments. To show the latter effect clearly, we present the renormalization of g\({}_{eff}\) as a function of the coupling constant for monolayer MoS\({}_{2}\) in Fig. 2(c). According to the positive and negative contributions to the exciton magnetic moment, the upper (g\({}_{eff}^{p}\)) and lower (g\({}_{eff}^{n}\)) limits of the renormalization of g\({}_{eff}\) are presented, together with values of g\({}_{eff}\) measured in recent experiments[40; 41; 42; 43; 44; 45; 20; 46]. It clearly shows that the large fluctuation of g\({}_{eff}\) can be covered successfully with increasing coupling strength. The same results are obtained for the other monolayer TMDs in Fig. S3.
In 2015, Schmidt and Lemeshko introduced a new quasiparticle, the angulon, describing a quantum impurity rotating in a phononic bath[26], such as molecules immersed in superfluid helium droplets. They pointed out that the angulon induces rotational fine structure due to the angular momentum transfer between the rotor and the phonons. These transfer processes, reflected by the rotational Lamb shift in spectra, have been confirmed in experiments[47; 48; 49]. However, distinguishing between the clockwise and anti-clockwise motions of the phononic bath is still a challenging task. The rotating exciton model in these two-dimensional TMD systems presents an ideal platform to overcome this problem by analyzing the renormalization of the \(g-\)factor of the valley exciton.
\(Exciton\) g\({}_{eff}\)\(in\)\(bilayer\) TMDs.--Compared with intralayer excitons, interlayer excitons offer more tunability, stemming from the separation of the hosting layers for electron and hole as well as the modulation of the internal distance between the two hosting layers. For instance, the four typical monolayer TMDs can be stacked into sixteen bilayer structures, including four homostructures and twelve heterostructures. Based on the difference of effective masses between electron and hole (see Table S1 in the supplemental materials), the values of the gyromagnetic ratios (g\({}_{\kappa}\)) for the sixteen rotating excitons and their corrections to the ideal value g\({}_{eff}^{0}=-4\) are listed in Figs. 3(a) and (b), respectively.
Figure 2: (a) Energy shift of the intralayer exciton at the +K and -K valleys without (solid lines) and with (dashed lines) the transfer of phonon angular momentum in the presence of the magnetic field at \(\alpha=0.08\). (b) Energy shifts of the exciton with the contribution of the phonon angular momentum \(\lambda=+1\) at the +K and -K valleys for monolayer WS\({}_{2}\) at \(\alpha=0.024\), in which the experimental data (pink dots) are reproduced from Ref. [20]. (c) The renormalization of g\(-\)factors with the positive (the upper-limit g\({}_{eff}^{p}\)) and negative (the lower-limit g\({}_{eff}^{n}\)) contributions of phonon magnetic moments as a function of the coupling constant \(\alpha\) in monolayer MoS\({}_{2}\), where the solid dots represent the values of g\({}_{eff}\) obtained in experiments.
We can see that g\({}_{\kappa}\) varies remarkably with increasing ratio of effective masses between electrons and holes, and thus results in a larger variation of \(\textsl{g}_{eff}\) in Fig. 3(b), which presents a very practical way to control the valley Zeeman splitting of the interlayer exciton through different effective-mass ratios of the electron-hole pairs. Furthermore, the renormalization effect of the angular momentum transfer of the phononic bath on \(\textsl{g}_{eff}\) is shown in Figs. 3(c) and (d) for the positive and negative magnetic moment contributions at \(\alpha=0.13\), respectively. One can see that the phonon angular momenta give a smaller correction to \(\textsl{g}_{eff}\) compared with that from the effective-mass ratio. Besides these monolayer TMD materials, a wide variety of heterostructures hosting interlayer excitons can be composed from the huge family of two-dimensional materials, providing almost continuously tunable effective-mass ratios for the electron-hole pairs. This allows us to obtain a much larger variation of \(\textsl{g}_{eff}\), as plotted in Fig. 3(e). Moreover, the positive and negative corrections of the phonon magnetic moments are clearly enhanced with increasing ratio. These results suggest that the effective-mass difference between electron and hole plays a predominant role in adjusting the \(\textsl{g}-\)factor of the interlayer exciton in two-dimensional van der Waals heterostructures.
Another controllable parameter for the interlayer exciton is the internal distance \(D\) between the two hosting layers, which is related to the rotational constant \(\xi_{0}\) (see the supplemental materials Part II). The interlayer distance dependence of \(\textsl{g}_{eff}\) is given in Fig. 3(f) for the MoSe\({}_{2}\)/WSe\({}_{2}\) bilayer with the positive (\(\textsl{g}_{eff}^{p}\)) and negative (\(\textsl{g}_{eff}^{n}\)) contributions of phonon magnetic moments. We find that the renormalization of \(\textsl{g}_{eff}\) varies in the range of \(-4.5\sim-9.5\) with increasing \(D\), covering most of the experimental data (solid dots) illustrated in Fig. 3(f)[46; 50; 51; 52; 24]. This behavior can be attributed to the enhanced transfer of phonon angular momentum with \(D\), resulting in the large fluctuation of \(\textsl{g}_{\kappa}^{*}\). Hence, the interlayer distance is also a key parameter for the modulation of the rotational interlayer exciton. But this role has been neglected in most experiments, which may explain why different values of g\({}_{eff}\) were obtained for the same bilayer structures[46; 50; 51; 52; 24].
Figure 3: (a) \(\textsl{g}_{\kappa}\) for interlayer excitons in sixteen TMDs bilayers based on the difference of effective masses between electron and hole. (b) The correction of \(\textsl{g}_{eff}^{0}=-4\textsl{g}_{0}\) by the \(2\textsl{g}_{\kappa}\) for interlayer excitons in sixteen TMDs bilayers. The renormalization of \(\textsl{g}_{eff}\) for the rotating interlayer excitons with the positive (c) and negative (d) phonon magnetic moments in these bilayer structures at \(\alpha=0.13\), \(D=1.5\) nm. (e) The impact of the effective mass ratios between electron and hole on the \(\textsl{g}_{eff}\) with and without the contribution of phonon magnetic moments. (f) \(\textsl{g}_{eff}\) depends on the internal distance with the positive (the upper-limit \(\textsl{g}_{eff}^{p}\)) and negative (the lower-limit \(\textsl{g}_{eff}^{n}\)) phonon magnetic moments in MoSe\({}_{2}\)/WSe\({}_{2}\) heterostructure at \(\alpha=0.35\).
Recently, chiral phonons have been predicted theoretically and confirmed by experiments in two-dimensional TMD structures[53; 54; 55; 56; 57; 58]; they play an important role in the interconversion between dark and bright excitons due to the required momentum conservation. Here, the chirality of the rotational angular momentum constitutes an inherent degree of freedom of valley excitons that induces coupling to other degrees of freedom, e.g., spin and orbital angular momentum, and to external perturbations[59]. In turn, the chirality of other degrees of freedom can be reflected by the exchange of angular momenta between them. On the other hand, a large fluctuation of g\({}_{eff}\) for trions has been observed in recent experiments[60; 61; 62]. Using the rotating exciton model, we offer a potential explanation: an extra electron (or hole) in a rotating trion induces a larger magnetic moment, enhancing the valley Zeeman effect. Moreover, the transfer of phonon angular momentum results in fine structures of trions.
\(Conclusion.\)--In summary, we have proposed the rotational motion of valley excitons in monolayer TMDs and their bilayer structures, where a new chirality of the exciton is defined by the rotational angular momentum, which can be manifested in the valley Zeeman splitting. Furthermore, the rotating excitons induce the transfer of phonon angular momentum, resulting in the renormalization of exciton \(g-\)factors and indicating the chirality of the phonons and the phonon magnetic moment. These results also provide important insights for revealing the underlying physics of the optical properties of valley excitons in two-dimensional materials.
This work was supported by National Natural Science Foundation of China (Grant Nos. 11674241, 62022081, 61974099 and 12174283).
|
2302.12458 | Design and Mechanics of Cable-Driven Rolling Diaphragm Transmission for
High-Transparency Robotic Motion | Applications of rolling diaphragm transmissions for medical and teleoperated
robotics are of great interest, due to the low friction of rolling diaphragms
combined with the power density and stiffness of hydraulic transmissions.
However, the stiffness-enabling pressure preloads can form a tradeoff against
bearing loading in some rolling diaphragm layouts, and transmission setup can
be difficult. Utilization of cable drives complements the rolling diaphragm
transmission's advantages, but maintaining cable tension is crucial for optimal
and consistent performance. In this paper, a coaxial opposed rolling diaphragm
layout with cable drive and an electronic transmission control system are
investigated, with a focus on system reliability and scalability. Mechanical
features are proposed which enable force balancing, decoupling of transmission
pressure from bearing loads, and maintenance of cable tension. Key
considerations and procedures for automation of transmission setup, phasing,
and operation are also presented. We also present an analysis of system
stiffness to identify key compliance contributors, and conduct experiments to
validate prototype design performance. | Hoi Man Lam, W. Jared Walker, Lucas Jonasch, Dimitri Schreiber, Michael C. Yip | 2023-02-24T05:18:00Z | http://arxiv.org/abs/2302.12458v1 | # Design and Mechanics of Cable-Driven Rolling Diaphragm Transmission for High-Transparency Robotic Motion
###### Abstract
Applications of rolling diaphragm transmissions for medical and teleoperated robotics are of great interest, due to the low friction of rolling diaphragms combined with the power density and stiffness of hydraulic transmissions. However, the stiffness-enabling pressure preloads can form a tradeoff against bearing loading in some rolling diaphragm layouts, and transmission setup can be difficult. Utilization of cable drives complements the rolling diaphragm transmission's advantages, but maintaining cable tension is crucial for optimal and consistent performance. In this paper, a coaxial opposed rolling diaphragm layout with cable drive and an electronic transmission control system are investigated, with a focus on system reliability and scalability. Mechanical features are proposed which enable force balancing, decoupling of transmission pressure from bearing loads, and maintenance of cable tension. Key considerations and procedures for automation of transmission setup, phasing, and operation are also presented. We also present an analysis of system stiffness to identify key compliance contributors, and conduct experiments to validate prototype design performance.
## I Introduction
Transmissions are essential in systems where placing actuators at the joints is infeasible or dangerous. Examples of such systems include surgical robots or wearable robots, where high inertias on distal joints can pose dangers to the patient or user. Another example is MR-compatible surgical robots, where conventional actuators are MR-incompatible and can disturb the magnetic fields used for imaging.
Hydraulic transmissions are a viable solution to these problems, offering high power density, routing flexibility, and high stiffness. Recent usage of rolling diaphragms in hydraulic transmissions provides high force transparency and bandwidth [1], which are important advantages in medical robotic applications.
However, current research applications of rolling diaphragm transmissions usually introduce a tradeoff between beneficial transmission pressures and detrimental frictional forces and torque loading. Furthermore, the fluid transmission's complex setup procedure is a detriment to the adoption of rolling diaphragm transmissions beyond research.
### _Related Works_
Rolling diaphragms have been applied to many applications, such as in medical or teleoperated robotics. In medical applications, rolling diaphragm transmissions have found great use in MR-safe robots, which make surgery during live scans possible by allowing the MR-incompatible actuator to power the robot from outside the MR field [2, 3, 4, 5]. The usage of rolling diaphragm transmissions also provides smooth force transmission in a compact package, which is important for patient comfort, and accommodation of a variety of patient sizes.
The hydrostatic rolling diaphragm transmission itself has been investigated in various works, several of which modelled the transmission as a second-order spring system [2, 6, 7]. John Peter Whitney proposed an N+1 hybrid hydrostatic transmission setup, utilizing N hydraulic lines and 1 common pneumatic preload line, which minimizes transmission complexity without hampering performance [8, 9, 10].
Whitney's designs featured a belt-driven design for rotary actuation, and a linkage design for directly actuating a finger, to convert the transmission's translational motion into the desired mechanical work. In the linkage hand design, the load is attached to the diaphragm piston through the air-side diaphragm, introducing a dynamic seal, which can be tolerance-intensive and introduces constant pressure leakage.
In a hydrostatic transmission used for wearable robotic limbs, a ball-screw design is used on the actuation side, whilst the manipulator side uses a rolling diaphragm [11, 12]. A cable drive provides reduced friction and backlash to accentuate the rolling diaphragm transmission's transparency and backdrivability, while the floating-cylinder layout makes use of the self-aligning features of the rolling diaphragm.
Fig. 1: Image of one prototype transmission unit with a motor attached to the input/output shaft. Two of these units connected through fluid lines form a full transmission.
### _Contributions_
In this work, we investigate an alternative coaxial rolling diaphragm transmission design, which retains proven features such as the N+1 hybrid transmission layout, and a cable drive for its ability to minimize backlash and friction forces. Building on top of those elements, we introduce the following:
1. A translating inner core inline with the rolling diaphragms, coupled with a force-balanced angled cable drive design, which decouples transmission pressure from cable tension and friction, reduces bearing load thanks to cable preload tension balancing, and provides a constant mechanical advantage over a range of motion beyond a full rotation.
2. A method for individual transmission component stiffness analysis to predict full system stiffness and key compliance contributors.
3. An electronic fluid transmission control system for automated pressure and phasing regulation and control.
## II Method
### _Overview_
The transmission consists of two identical paired units, where each unit uses two rolling diaphragms to interface with the two air or water transmission lines. In each transmission unit (Fig. 2(a)), a cable drive converts the fluid-driven linear rolling diaphragm motion to rotary input-output motions for robotic joints. A linearly translating core actuates that cable drive, while isolating the transmission pressure forces from the cable tension and central pillar bearings. Lastly, a stationary frame provides structural support and alignment between the rolling diaphragms and the translating core.
### _Cable Drive and Translating Core_
A cable drive was chosen due to its inherent advantages of low friction, low backlash, and high stiffness [13, 14]. In the prototype transmission, a 1/16" diameter 302 stainless steel cable with 7x19 construction was chosen for its high flexibility, and sufficient breaking strength of \(2100N\) to match the maximum supportable load of \(600N\). The maximum supportable force is a function of maximum rated diaphragm pressure \(P_{max}=1.7MPa\), and diaphragm piston radius \(r_{piston}=15mm\) in Eqn. 1. The corresponding maximum supportable torque is \(6Nm\) for a capstan radius of \(r_{capstan}=10mm\).
\[F_{max}=\frac{P_{max}}{2}(\pi r_{piston}^{2}) \tag{1}\]
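As a quick check of Eqn. 1, the quoted force and torque limits follow directly from the rated pressure and geometry (a minimal sketch):

```python
import math

P_max = 1.7e6       # maximum rated diaphragm pressure [Pa]
r_piston = 0.015    # diaphragm piston radius [m]
r_capstan = 0.010   # capstan radius [m]

F_max = (P_max / 2) * math.pi * r_piston**2   # Eqn. (1)
tau_max = F_max * r_capstan                   # corresponding capstan torque
print(f"F_max = {F_max:.0f} N, tau_max = {tau_max:.2f} N*m")  # ~600 N, ~6 N*m
```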
Due to the limited travel of the rolling diaphragm, there is a direct tradeoff between the diameter of a flat cable pulley and the angular range of motion. However, wrapping the cable helically up a capstan pillar allows for increased capstan effect and more than one rotation of motion despite limited diaphragm travel.
Within the cable drive, the cables are run at an angle enforced by wall and capstan geometry throughout the full range of motion (Fig. 2(b)). The fleet angle is kept constant by terminating the cables with the same angle as the helix pitch. A constant fleet angle is necessary to maintain constant cable tension for linear performance, which is useful in direct teleoperation applications.
The cable drive's forces are balanced by the symmetrical placement of cable angle and departure locations, which decouples cable preload tension from bearing loading. The free body diagram moment balance of Fig. 3 sums to 0, indicating full cancellation:
\[\Sigma M_{y}=T_{L}r_{L}-T_{L}r_{L}+T_{R}r_{R}-T_{R}r_{R}=0 \tag{2}\]
where \(T_{L}\) and \(r_{L}\) are the cable preload tension force and moment arm on the left side of the capstan, and \(T_{R}\) and \(r_{R}\) on the right side. High cable preload tension is preferred to improve axial stiffness, which decreases with the load range to preload ratio [15].
The translating core structure fixes the cable terminations and absorbs the transmission pressure between the two rolling diaphragms.
Fig. 2: Mechanical design of one actuator unit. (a) shows the section view. Fluid pressure difference between the two rolling diaphragms (red) actuate the translating core (green), which drives the input/output shaft (cyan) through the cable drive (purple). The coaxial rolling diaphragms are fully sealed, and the translating core maintains cable tension even when the system is depressurized. (b) shows how the capstan pillar rolls relative to the core. Fixed to the housing via bearings, the pillar no longer translates, and is limited to rotation as the core translates. Cable drive fleet angle is kept constant across the full range of motion through the geometry of the capstan pillar and the cable wrap-around walls, maintaining constant cable tension for controllability. Fluid line pressure preload forces are coaxially balanced through the translating core.
rolling diaphragms. One drawback of cable drives can be the difficulty in tuning and maintaining cable tension, which is a prerequisite for consistent performance. By fixing the cable terminations within the rigid core structure, the cables can be preloaded at high tension, allowing for low slack and high stiffness [13]. Cable tension is held even when the transmission is depressurized, which is advantageous in medical settings, where surgeons might not have the expertise or tools to troubleshoot and repair cable slack issues. The prototype translating core comprises solid 3D-printed termination walls in Markforged Onyx filament and carbon fiber rods.
### _Coaxial Enclosed Rolling Diaphragm Layout_
Interacting with the fluid transmission lines, rolling diaphragms interface with coaxial pistons on either side of the translating core. Rolling diaphragms are flexible seals that extend and retract via a rolling action, eliminating the sliding friction that typical piston O-ring seals have [16]. The rolling diaphragms used in the prototype are DM3-35-35 rolling diaphragms manufactured by IER Fujikura, with a stroke of 46mm, a diameter of 35mm, and a maximum pressure rating of \(1.7MPa\).
Both fluid transmission lines have a pressure preload, which fills out the rolling diaphragm's convolution to prevent jamming, enables high fluid stiffness, and determines transmission load capacity. The force from this preload pressure is much larger in magnitude than the input/output force transferred by the transmission.
In the proposed transmission, the two rolling diaphragms are aligned in a coaxial opposed layout, such that preload pressures are balanced against one another (Fig. 2b). These preload pressure forces compress the translating core, but are isolated from bearings by the cable drive, decoupling transmission pressure from bearing friction. The cable drive is situated between the diaphragm pistons, keeping both fluid chambers fully enclosed to avoid pressure leakage.
### _Hydrostatic Transmission_
Two identical actuators connected by a hydraulic line form one transmission system, detailed in Fig. 4. The hydraulic line acts as an incompressible link between the actuators, while an opposing pneumatic line provides a preload pressure on the water line. High preload pressures help dissolve remaining air into the water to achieve a stiff and responsive system.
The volume of water in the hydraulic line determines the phase offset between the input and output shafts. Manual alignment is time-consuming and difficult, and misaligned actuators may hit their endstops prematurely, resulting in reduced range of motion and control. Automation of this phasing process removes a large hurdle for system adoption, and effectively negates long-term issues such as minuscule leaks and settling. It also reduces misalignment factors like tube and cable flex by allowing phasing at a high pressure, close to the standard operating preload pressure.
### _Predictive Phasing_
The microcontroller phases the actuators efficiently through a proportional controller, calculating the adjustment that is needed to rotationally align the actuators. The flow rate \(Q\) through a solenoid valve is given by:
\[Q=K_{v}\sqrt{\Delta P} \tag{3}\]
where \(\Delta P\) is the pressure drop and \(K_{v}\) the flow factor of the valve. The relationship between water volume and phase
Fig. 4: Transmission system setup. Stiffness is maximized along the hydraulic transmission distance by using a copper tube, while small sections of flexible plastic tube at each end allow for some flexibility in actuator placement. The incoming water line is maintained at \(700kPa\) by a water pump. The outgoing line releases into a depressurized reservoir which feeds into the pump. The preload pressure is controlled through a proportional electric pressure regulator, which allows for precise pressure control anywhere from 0 to \(860kPa\). The design is intended for an N+1 configuration (introduced in [8]), such that multiple transmissions are preloaded by one single pneumatic line, simplifying scalability.
Fig. 3: Cable tension forces are symmetrically balanced, thereby eliminating moment bearing loads in the y-axis (out of page) and decoupling cable tension from axial bearing loads in x/y directions.
offset was determined empirically, resulting in the following relationship:
\[V_{W}=\frac{|\Delta\phi|}{9.594} \tag{4}\]
where \(V_{W}\) is the water volume in mL and \(\Delta\phi\) is the phase offset in degrees. Combining Eqs. 3 and 4 yields the time the solenoid valve should be held open (Eq. 5). The sign of the phase offset \(\Delta\phi\) determines whether the intake or outlet valve is used.
\[t=\frac{V_{W}}{Q}=\frac{|\Delta\phi|}{9.594K_{v}\sqrt{\Delta P}} \tag{5}\]
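The valve-timing calculation of Eqs. 3-5 reduces to a few lines of code; a minimal sketch follows, assuming \(K_{v}\) is expressed in mL/s per \(\sqrt{Pa}\) so that the units are consistent with Eq. 4:

```python
import math

def valve_open_time(delta_phi_deg, K_v, delta_P):
    """Return how long to hold the solenoid valve open (Eq. 5).

    delta_phi_deg: phase offset between shafts [deg]; its sign selects
                   the intake or outlet valve.
    K_v:           valve flow factor [mL/s per sqrt(Pa)] (assumed units).
    delta_P:       pressure drop across the valve [Pa].
    """
    V_W = abs(delta_phi_deg) / 9.594   # Eq. 4: water volume to move [mL]
    Q = K_v * math.sqrt(delta_P)       # Eq. 3: flow rate [mL/s]
    return V_W / Q                     # [s]
```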
Fig. 5 shows the actuator phasing algorithm. The alignment resolution is limited by the \(\Delta P\) from the water refill system to the transmission line. To achieve finer angle adjustments, the transmission pressure is brought to just below the water injection pressure to decrease \(\Delta P\). This increases \(\Delta P\) for water ejection, but it is inconsequential since any overshoot can be corrected by water injection.
### _Automated Operation_
The system operating procedure is outlined in Fig. 6. Users operate the system through a text-based interface with a microcontroller. Plain-language instructions and commands allow even a non-technical user to set up and adjust a transmission. An air bleed mode removes air bubbles that entered the water line during assembly. When not in operation, the system is stored in a hibernation state rather than completely removing all water and air pressure. Maintaining the system at a low pressure (for example, \(100kPa\)) removes the need to bleed air from the water lines or reseat the rolling diaphragms. Complete depressurization is used only when disassembling or moving the transmission setup.
## III Theoretical Transmission Stiffness
To predict the transmission stiffness and identify the main compliance contributors in the system, a simple spring system, as illustrated in Fig. 7, was used to model the transmission. Fluid stiffness and cable stiffness are determined analytically, while more complex components such as the translating core and diaphragm stiffnesses are determined using FEA and empirical methods, respectively. The model assumes that the change in force is applied at the input-side capstan pillar, while the output-side capstan pillar is locked.
The fluid transmission stiffness is separated into a water stiffness (\(K_{water}\)) and an air stiffness (\(K_{air}\)), assuming some proportion of undissolved air \(p_{air}\) remains in the system. These fluid stiffnesses were found using Eq. 6 and the variables in Table I, where \(p\) and \(E_{fluid}\) are the volume proportion and bulk modulus of the respective fluid (water or air).
\[K_{fluid}=[p(2\frac{L_{cyl}}{A_{cyl}E_{fluid}}+\frac{L_{hose}}{A_{hose}E_{fluid} })]^{-1} \tag{6}\]
Cable stiffness (\(K_{cable}\)) was calculated with Eq. 7, where \(A_{cable}\) is the cable's cross-sectional area, \(L_{cable}\) is the cable length between the capstan and the wrap-around wall, and \(E_{cable}\) is \(200GPa\) for 304 stainless steel.
\[K_{cable}=\frac{E_{cable}A_{cable}}{L_{cable}} \tag{7}\]
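For reference, Eqs. 6 and 7 translate directly into code; this sketch uses our own function and variable names, with inputs in SI units as in Table I:

```python
def fluid_stiffness(p, E_fluid, L_cyl, A_cyl, L_hose, A_hose):
    """Stiffness of one fluid component of the transmission line (Eq. 6).

    p is the volume proportion of the fluid (water or air) and E_fluid
    its bulk modulus; the factor of 2 accounts for the two diaphragm
    cylinders in the line.
    """
    compliance = p * (2 * L_cyl / (A_cyl * E_fluid)
                      + L_hose / (A_hose * E_fluid))
    return 1.0 / compliance

def cable_stiffness(E_cable, A_cable, L_cable):
    """Axial stiffness of a cable span (Eq. 7)."""
    return E_cable * A_cable / L_cable
```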
Fig. 5: Automatic phasing algorithm. \(\Delta P\) of \(15kPa\), along with potentiometer accuracy and solenoid valve response time, resulted in a 0.4\({}^{\circ}\) acceptable maximum phase offset for our setup.
Fig. 6: Operating procedure for the transmission system. Ovals represent transitory steps that lead to steps intended to be used for extended periods of time, represented by rectangles.
Fig. 7: Transmission stiffness system model, assuming locked output shaft and torqued input shaft applying a force into the system. The model assumes some proportion of undissolved air left in the system, which acts as an additional spring in series in the fluid transmission.
For both the rolling diaphragm stiffness (\(K_{RD}\)) and the translating core stiffness (\(K_{core}\)), the stiffness was estimated via \(K=\frac{\Delta F}{\Delta x}\), where \(\Delta F\) and \(\Delta x\) are the change in force and the deflection, respectively. For \(K_{RD}\), the rolling diaphragm stretch under applied force was measured on a material test system; for \(K_{core}\), the deflection of the cable termination point under a unit applied force was estimated via FEA (Fig. 8).
The resultant transmission stiffness estimate is a combination of the individual component stiffness estimates:
\[K_{tot}=(\frac{1}{K_{cable}}+\frac{2}{K_{core}}+\frac{2}{K_{RD}}+\frac{1}{K_{water}}+\frac{1}{K_{air}})^{-1} \tag{8}\]
where the individual estimates and the overall stiffness estimate are listed in Table II. Of the individual stiffnesses, the rolling diaphragm and undissolved air contribute the most to overall compliance. The equivalent estimated angular stiffness for a \(20mm\) diameter capstan pillar is \(23.54Nm/rad\), using \(K_{rot}=K_{lin}r^{2}\). However, the stiffness value can vary greatly with \(p_{air}\), as shown in Fig. 9.
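A minimal sketch of the series combination in Eq. 8, evaluated with the component values of Table II (variable names are ours; the commented results reproduce the quoted totals):

```python
def total_stiffness(K_cable, K_core, K_RD, K_water, K_air):
    """Series combination of component stiffnesses (Eq. 8)."""
    compliance = (1 / K_cable + 2 / K_core + 2 / K_RD
                  + 1 / K_water + 1 / K_air)
    return 1.0 / compliance

K_tot = total_stiffness(K_cable=8.98e6, K_core=3.80e6, K_RD=1.02e6,
                        K_water=1.54e6, K_air=9.97e5)  # ~2.35e5 N/m (Table II)
K_rot = K_tot * 0.010**2   # K_rot = K_lin * r^2, r = 10 mm -> ~23.5 Nm/rad
```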
## IV Experiments and Results
### _Experimental Setup_
To characterize the system, experiments were conducted to fit a second-order spring model, and to understand the hysteresis and friction of the system. On the input side, the input shaft is actuated either by hand or by a motor (Maxon EC393023). Torque inputs are measured through a torque sensor (Futek TRS600) in the hand-actuated case, and estimated via the motor current draw in the motor actuated case. On the output side, a second torque sensor at the output shaft measures the torque output. Encoders (US Digital E5-2000) are fitted to both shafts to measure the angular deflection both into and out of the transmission.
The effects of hose length and diameter were minimized in this experiment by using a minimal hose length of \(42.6mm\), to focus on the characteristics of the rolling diaphragm and cable drive. The system is bled of air bubbles until no more air can be visually seen in the system. The shaft couplings and torque sensors used for measurement are stiff enough to be disregarded in the stiffness estimation.
To estimate the second-order spring model, the output shaft is clamped, and the input shaft is driven with a consecutive torque step signal. A transfer-function estimation (_tfest_) is performed with the measured torque and position as input and output (Eq. 9), starting from an initial model based on parameters from the prior theoretical calculations (Table III).
\[H(s)=\frac{\theta(s)}{\tau(s)}=\frac{1}{Js^{2}+Bs+K} \tag{9}\]
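The paper performs this fit with the MATLAB _tfest_ routine; an equivalent least-squares fit of \(J\), \(B\), and \(K\) to a measured step response can be sketched in Python as below (here `t_meas`, `theta_meas`, and `tau_step` are assumed to come from the experiment):

```python
import numpy as np
from scipy import signal, optimize

def step_response(params, t, tau_step):
    """Angular-position response of H(s) = 1/(J s^2 + B s + K) to a
    torque step of amplitude tau_step."""
    J, B, K = params
    sys = signal.lti([1.0], [J, B, K])
    _, theta = signal.step(sys, T=t)
    return tau_step * theta            # linear system: scale by torque

def residuals(params, t, theta_meas, tau_step):
    return step_response(params, t, tau_step) - theta_meas

x0 = [6.54e-5, 0.005, 23.54]           # initial model (Table III)
# fit = optimize.least_squares(residuals, x0,
#                              args=(t_meas, theta_meas, tau_step))
# J_fit, B_fit, K_fit = fit.x
```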
\begin{table}
\begin{tabular}{l l l l}
\hline \hline
**Source** & **Variable** & **Stiffness [\(N/m\)]** & **\% Compliance** \\
\hline
Water in fluid line & \(K_{water}\) & 1.54E6 & 15.2\% \\
Undissolved air in fluid line & \(K_{air}\) & 9.97E5 & 23.6\% \\
Single cable & \(K_{cable}\) & 8.98E6 & 2.60\% \\
Translating core & \(K_{core}\) & 3.80E6 & 12.4\% \\
Rolling diaphragm & \(K_{RD}\) & 1.02E6 & 46.1\% \\
\hline
Full transmission & \(K_{total}\) & 2.35E5 & \\
\hline \hline
\end{tabular}
\end{table} TABLE II: Theoretical component stiffness values and resultant total transmission stiffness, assuming \(p_{air}\) of 0.01 %.
Fig. 8: FEA-estimated cable termination deflection under a \(1N\) load applied through the cables.
\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Variable** & **Value** & **Description** \\
\hline
\(E_{water}\) & 2.20 [\(GPa\)] & Bulk modulus of water \\
\(E_{air}\) & 1.42E-4 [\(GPa\)] & Bulk modulus of air \\
\(A_{cyl}\) & 9.62E-4 [\(m^{2}\)] & Cross section of diaphragm cylinder \\
\(A_{hose}\) & 3.17E-5 [\(m^{2}\)] & Cross section of hose \\
\(L_{cyl}\) & 3.80E-2 [\(m\)] & Length of diaphragm cylinder \\
\(L_{hose}\) & 4.26E-2 [\(m\)] & Length of hose \\
\(p_{water}\) & 0.99 [\%] & Proportion of water in transmission \\
\(p_{air}\) & 0.01 [\%] & Proportion of air in transmission \\
\hline
\(K_{water}\) & 1.58E6 [\(N/m\)] & Est. K of fluid line water \\
\(K_{air}\) & 3.99E3 [\(N/m\)] & Est. K of fluid line undissolved air \\
\hline \hline
\end{tabular}
\end{table} TABLE I: Theoretical fluid line stiffness variables.
Fig. 10: Experimental setup for hand-driven configuration. Shaft torques and positions are measured via torque sensors (1) and encoders (2). The input shaft is driven by a handle (3), or a motor can be substituted for motor-driven inputs.
Fig. 9: Theoretical transmission stiffness as a function of undissolved air % remaining in fluid line.
The resultant fit (Table III) estimates a stiffness of \(18.71Nm/rad\), damping of \(2.1mNm/s\), and inertia of \(5.2\times 10^{-5}kgm^{2}\). This aligns with the theoretical stiffness value corresponding to 0.02% undissolved air in the fluid line (see Fig. 9), which is reasonable given that visually no air bubbles were left in the transmission, but some imperceptible undissolved air might still remain. Additionally, hand-actuated datasets were collected for model validation, measuring the input torque applied and the input shaft deflection. The model-predicted position tracks the measured position closely in Fig. 11, indicating model accuracy.
From a hysteresis plot of one torque sine wave motion from rest (Fig. 12), there is an approximate maximum hysteresis value of \(0.076Nm\), corresponding to 1.27% of the full \(6Nm\) torque range. This hysteresis is likely caused by hose flexibility or air line pressure regulation inaccuracy, which affects shaft phasing, absorbs input energy, and behaves differently under positive and negative pressures. The static friction, identified by the change in torque without change in angular position at either end of the hysteresis curve, is \(0.027Nm\), corresponding to 0.45% of the full torque range. Though the rolling diaphragm has little static friction, other elements such as the cable drive and bearings still contribute to static friction.
To compare the tracking accuracy between the input and output shafts, the transmission was actuated across a large range of motion by hand via a handle on the input shaft, while a load with inertia \(0.0387kgm^{2}\) was attached to the output shaft (Fig. 13). The results show good tracking over the entire motion, with slight tracking error at the position and torque extremities, likely caused by hysteresis and energy losses. The close tracking between the input and output torques validates the transmission's constant mechanical advantage across a large range of motion.
## V Conclusion
In this work, a rolling diaphragm transmission featuring a coaxial opposed rolling diaphragm layout, and a translating core enclosed cable drive, was prototyped and tested. The prototype displays low hysteresis, low friction, and good position and torque transparency. An automated transmission pressurization and phasing system was also detailed, improving the ease of system setup and maintenance.
According to the theoretical stiffness model, minuscule amounts of undissolved air can have a significant impact on system stiffness. Apart from undissolved air, the rolling diaphragm stiffness contributes the most to system compliance, forming a 'bottom line' on transmission stiffness. Further investigation into methods to thoroughly dissolve and bleed air in the transmission, as well as a stiffer diaphragm choice, could greatly improve system stiffness. Further characterization of the rolling diaphragm in isolation, such as its damping and rolling friction, may also help identify other design limitations for rolling-diaphragm-based transmissions.
## Acknowledgements
The authors would like to thank Professor Raymond de Callafon for his advice on system identification, and Alexander Luke for his work on prototype mechanical design.
Fig. 11: Verification of fitted model against the hand-driven dataset, where the model predicted position from measured input torque is compared against the measured position, exhibiting accurate tracking and good model fit.
Fig. 12: Hysteresis plot of one torque sine wave input starting from rest; the maximum hysteresis measured is \(0.0760Nm\) (1.27% of the \(6Nm\) full torque range), and the maximum static friction measured is \(0.0272Nm\) (0.45% of the \(6Nm\) full torque range).
Fig. 13: Transmission input/output shaft angular position and torque over time, where input shaft is actuated by hand and output shaft is loaded with an inertia of \(0.0387kgm^{2}\).
\begin{table}
\begin{tabular}{c c}
\hline
Initial Model Coeffs & Step Fitted Model Coeffs \\
\hline
J = 6.54E-5 [\(kgm^{2}\)] & J = 5.20E-5 [\(kgm^{2}\)] \\
B = 0.005 [\(Nm/s\)] & B = 0.0021 [\(Nm/s\)] \\
K = 23.54 [\(Nm/rad\)] & K = 18.71 [\(Nm/rad\)] \\
\hline
\end{tabular}
\end{table} TABLE III: Fit results of mass-spring-damper model on step dataset.
2304.06748 | $\texttt{LIMpy}$: A Semi-analytic Approach to Simulating Multi-line
Intensity Maps at Millimetre Wavelengths | Mapping of multiple lines such as the fine-structure emission from [CII]
(157.7 $\mu \text{m}$), [OIII] (52 \& 88.4 $\mu \text{m}$), and rotational
emission lines from CO are of particular interest for upcoming line intensity
mapping (LIM) experiments at millimetre wavelengths, due to their brightness
features. Several upcoming experiments aim to cover a broad range of scientific
goals, from detecting signatures of the epoch of reionization to the physics of
star formation and its role in galaxy evolution. In this paper, we develop a
semi-analytic approach to modelling line strengths as functions of the star
formation rate (SFR) or infrared (IR) luminosity based on observations of local
and high-z galaxies. This package, $\texttt{LIMpy}$ (Line Intensity Mapping in
Python), estimates the intensity and power spectra of [CII], [OIII], and CO
rotational transition lines up to the $J$-levels (1-0) to (13-12) based both on
analytic formalism and on simulations. We develop a relation among halo mass,
SFR, and multi-line intensities that permits us to construct a generic formula
for the evolution of several line strengths up to $z \sim 10$. We implement a
variety of star formation models and multi-line luminosity relations to
estimate the astrophysical uncertainties on the intensity power spectrum of
these lines. As a demonstration, we predict the signal-to-noise ratio of [CII]
detection for an EoR-Spec-like instrument on the Fred Young Submillimeter
Telescope (FYST). Furthermore, the ability to use any halo catalogue allows the
$\texttt{LIMpy}$ code to be easily integrated into existing simulation
pipelines, providing a flexible tool to study intensity mapping in the context
of complex galaxy formation physics. | Anirban Roy, Dariannette Valentín-Martínez, Kailai Wang, Nicholas Battaglia, Alexander van Engelen | 2023-04-13T18:00:03Z | http://arxiv.org/abs/2304.06748v1 | # LImpy: A Semi-analytic Approach to Simulating Multi-line Intensity Maps at Millimetre Wavelengths
###### Abstract
Mapping of multiple lines such as the fine-structure emission from [CII] (157.7 \(\mu\)m), [OIII] (52 & 88.4 \(\mu\)m), and rotational emission lines from CO are of particular interest for upcoming line intensity mapping (LIM) experiments at millimetre wavelengths, due to their brightness features. Several upcoming experiments aim to cover a broad range of scientific goals, from detecting signatures of the epoch of reionization to the physics of star formation and its role in galaxy evolution. In this paper, we develop a semi-analytic approach to modelling line strengths as functions of the star formation rate (SFR) or infrared (IR) luminosity based on observations of local and high-z galaxies. This package, LIMpy (Line Intensity Mapping in Python), estimates the intensity and power spectra of [CII], [OIII], and CO rotational transition lines up to the \(J\)-levels (1-0) to (13-12) based both on analytic formalism and on simulations. We develop a relation among halo mass, SFR, and multi-line intensities that permits us to construct a generic formula for the evolution of several line strengths up to \(z\sim 10\). We implement a variety of star formation models and multi-line luminosity relations to estimate the astrophysical uncertainties on the intensity power spectrum of these lines. As a demonstration, we predict the signal-to-noise ratio of [CII] detection for an EoR-Spec-like instrument on the Fred Young Submillimeter Telescope (FYST). Furthermore, the ability to use any halo catalogue allows the LIMpy code to be easily integrated into existing simulation pipelines, providing a flexible tool to study intensity mapping in the context of complex galaxy formation physics.
line intensity mapping, galaxy evolution, reionization, structure formation
## 1 Introduction
Observations of redshifted line emissions from atomic and molecular gas in galaxies and the intergalactic medium trace the underlying dark matter density fluctuations. Several factors influence the strength of various spectral lines, including the star formation history (SFH), metallicity, and the host halo mass of galaxies. At high redshifts, \(z\gtrsim 6\), it is an arduous task to resolve each individual galaxy in a survey field to understand the physics behind galaxy formation, evolution, and their connection to the intergalactic medium. Multi-line intensity mapping (MLIM) encapsulates integrated emissions from both luminous and faint sources, providing rich information about galaxy clustering, star formation rate density (SFRD), and galaxy luminosity functions (Visbal & Loeb, 2010; Visbal et al., 2011; Kovetz et al., 2017; Bernal & Kovetz, 2022). The detection of different atomic and molecular line intensities at a particular observational frequency probes the Universe at different epochs (or redshifts); thus, it provides a unique opportunity to construct a three-dimensional (3D) map of the Universe by measuring several line emissions with a handful of observational frequencies.
Detecting the power spectra of fine structure lines, such as [CII] and [OIII], at high redshift (\(z\gtrsim 6\)) has the potential to reveal the sources of reionization and their clustering properties (Dumitru et al., 2019; Padmanabhan, 2019; Padmanabhan et al., 2022; Karoumpis et al.,
2022; Sun et al., 2022). Furthermore, mapping the Universe using various rotational transitions of CO (J-level transitions) can probe the formation of structures at high redshifts, offering insights into the process of reionization and the star formation history of the first-generation galaxies (Kovetz et al., 2017; Breysse et al., 2022). Employing a tomographic approach to exploring the Universe enables the measurement of key quantities, including the growth factor of structures, the Hubble constant, and the equation of state of dark energy (Kovetz et al., 2017; Karkare & Bird, 2018; Bernal et al., 2019; Silva et al., 2021). A joint analysis of all lines could prove valuable for constraining the inflationary paradigm by limiting \(f_{NL}\) (Moradinezhad Dizgah & Keating, 2019; Bernal et al., 2019; Chen & Pullen, 2022). Detecting the 21 cm\(-\)[CII] cross-power spectrum and cross-bispectrum signals can aid in mitigating the low-redshift contamination of 21 cm data, and incorporating 21 cm observations may enhance constraints on astrophysical parameters (Beane & Lidz, 2018; Dumitru et al., 2019; Schaan & White, 2021).
Several observational efforts have been made to detect the 21 cm line emission to study the cosmic dawn, the epoch of reionization (EoR), and late-time structure formation. Intensity mapping of other lines, such as the fine-structure emission from the carbon [CII] line (157.7 \(\mu m\)), doubly ionized oxygen [OIII] (88.4 \(\mu m\)), and rotational emission lines from CO, is of particular interest to upcoming LIM experiments (Suginohara et al., 1998; Righi et al., 2008; Lidz et al., 2011; Carilli, 2011; Fonseca et al., 2017; Gong et al., 2017; Kovetz et al., 2017; Chung et al., 2020; Padmanabhan, 2018, 2019; Dumitru et al., 2019; Chung et al., 2019; Kannan et al., 2022; Murmu et al., 2021; Karoumpis et al., 2022). Despite the several advantages offered by the MLIM technique, there are key challenges to detecting a particular line emission across a broad redshift range in the presence of foreground and instrumental noise. Since many lines emitted from other sources can be redshifted into the same observational frequency channel, they create line confusion by adding extra emission to the particular line emission we aim to detect (Lidz & Taylor, 2016; Cheng et al., 2016). These lines are called 'interlopers', and their contamination presents an obstacle to the detection of a particular line coming from sources at a certain redshift. Moreover, the uncertainty in the star formation history (SFH) of galaxies and its relation to the mass of the host halos arises from the lack of observational data, particularly at the high redshifts where the reionization process occurred. However, the uncertainties in star formation and its relation to the host halos can be well explored with different high-resolution simulations, such as UniverseMachine (Behroozi et al., 2019), IllustrisTNG (Pillepich et al., 2018), and Emerge (Moster et al., 2018).
Multiple experiments like FYST1(Aravena et al., 2021), SPHEREx2(Dore et al., 2018), TIME (Crites et al., 2014), CONCERTO3(Ade et al., 2020), COMAP4(Cleary et al., 2021), EXCLAIM (Ade et al., 2020), aim to cover a broad range of scientific goals, from the detection of EoR signatures to the formation of stars in galaxies. To explore the synergies among these experiments requires modelling and simulating the desired signal over a broad redshift range. In this work, we develop a package, LIMpy5, to model and simulate several line emissions up to \(z\lesssim 10\). We implement a range of models for the star formation histories and the relations between multi-line luminosity and these star formation histories, based on analytic expressions and simulations. Collecting many models in one place allows us to explore the astrophysical uncertainties of the amplitude and shape of the signals as well as the level of contamination from interlopers. Additionally, we determine the power spectrum of line intensities both analytically through the halo model and from simulations. We adopt a map-making approach by utilizing halo catalogues generated from N-body simulations, which enables us to forecast the detectability of power spectra for future line intensity mapping (LIM) experiments. In our analysis, we incorporate the effects of beam convolution and develop realistic simulations of instrumental noise, comparing the results with those of the analytic halo models. By combining the intensity signals of various lines, LIMpy offers valuable insights into the astrophysical processes governing these emissions and their potential detectability in future experiments. Ultimately, this approach paves the way for a deeper understanding of the underlying physics and the potential impact of interlopers on the observed signals.
Footnote 1: [https://www.ccatobservatory.org/](https://www.ccatobservatory.org/)
Footnote 2: [https://spherex.caltech.edu/](https://spherex.caltech.edu/)
Footnote 3: [https://mission.lam.fr/concerto/](https://mission.lam.fr/concerto/)
Footnote 4: [https://comap.caltech.edu/](https://comap.caltech.edu/)
Footnote 5: [https://github.com/Anirbancosmo/limpy](https://github.com/Anirbancosmo/limpy)
The LIMpy package can simulate multi-line intensity maps relatively quickly, which is useful for interpreting the signals once the observation of a particular line is made. This package also comes with several analysis techniques for calculating the three-dimensional isotropic power spectrum (\(P_{3D}\left(k\right)\)), the anisotropic power spectrum (\(P_{3D}\left(k_{\parallel},k_{\perp}\right)\)) in 3D, and the angular power spectrum in 2D (\(C_{\ell}\)), so that line intensity maps can be analyzed in different ways to extract the maximum encoded information. Simulations of multi-line intensity
maps across a broad redshift range are helpful in performing cross-correlations between two line intensity maps at the same redshift, as they probe the same sources and underlying dark matter density fluctuations. Furthermore, scanning the Universe at different redshifts with the MLIM technique is not only a promising probe but also carries an opportunity to perform cross-correlations with galaxy surveys and CMB secondary anisotropies, e.g., CMB weak lensing, thermal and kinetic Sunyaev-Zel'dovich (tSZ and kSZ, respectively) effects (Sato-Polito et al., 2021; Schaan and White, 2021; Chung, 2022).
This paper is structured as follows: in Section 2, we provide an overview of the theoretical framework for line intensities, discussing their connection to the SFR of galaxies. In Section 3, we introduce the halo-model formalism used to calculate the power spectrum of line intensities at specific redshifts and present the results obtained through this approach. In Section 4, we showcase the simulation results by describing the steps taken to generate intensity maps, and we present the detectability of the CII 158 signal in Section 5. Finally, in Section 6, we summarize our findings and conclusions. By exploring the theoretical and simulation-based aspects of line intensity mapping, we provide insights into astrophysical modelling uncertainties and the potential for future observational efforts in this area.
Throughout this study, we assume a flat \(\Lambda\)CDM universe with cosmological parameters as defined by the Planck TT, TE, EE+lowE+lensing results (_Planck_ Collaboration, 2018). In the rest of this paper, we denote atomic line emission by writing the line name together with its wavelength in micrometres, e.g. CII 158. For molecular line emission from CO, we denote the lines with the upper rotational transition level to the lower level, e.g., CO (1-0). We follow the same naming convention in LIMpy, and line names can be passed to calculate the necessary quantities.
## 2 Theory of line intensity mapping
The rest-frame frequency of a particular line emission, \(\nu_{\rm rest}\), at redshift \(z_{\rm em}\) will be observed at present by an instrument with an observational frequency \(\nu_{\rm obs}\), such that \(\nu_{\rm obs}=\nu_{\rm rest}/(1+z_{\rm em})\). An instrument can be designed to probe a bright line from a broad redshift range to understand the different physical processes at that time by selecting several frequency channels. For instance, the Epoch of Reionization Spectrometer (EoR-Spec) on FYST with the observational frequency from 220-410 GHz is set up to detect the CII 158 line emission across the broad redshift range of 7.6 to 3.6 (Aravena et al., 2021).
In Figure 1, we show the redshift evolution of a few bright-line emissions that fall mainly in the FYST's EoR-Spec frequency coverage, 220-410 GHz. All the lines that intersect the horizontal line representing a frequency channel will carry the information from the redshifts corresponding to the intersection. If EoR-Spec on FYST aims to detect the CII 158 lines from \(z\sim 7.3\) using \(\nu_{\rm obs}=220\) GHz, all other lines that cross the 220 GHz frequency line, such as all CO transitions from CO (2-1) to CO (13-12), OIII 88, and OI 145, etc., will also enter the same frequency channel from different redshifts. In this case, one has to clean the signals of the other lines to detect the desired line emission.
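The channel-to-redshift mapping underlying Figure 1 and Table 1 follows directly from \(\nu_{\rm obs}=\nu_{\rm rest}/(1+z_{\rm em})\); a minimal sketch (with rest-frame frequencies computed from the quoted wavelengths, so small differences from Table 1 reflect rounding) is:

```python
# Recover line redshifts for the EoR-Spec channels from
# nu_obs = nu_rest / (1 + z).
c = 2.99792458e8                       # speed of light [m/s]
rest_freq_GHz = {
    "CII158": c / 157.7e-6 / 1e9,      # ~1901 GHz
    "OIII88": c / 88.4e-6 / 1e9,       # ~3391 GHz
}

for nu_obs in (220, 280, 350, 410):    # EoR-Spec channels [GHz]
    z = {line: nu_rest / nu_obs - 1
         for line, nu_rest in rest_freq_GHz.items()}
    print(nu_obs, {k: round(v, 1) for k, v in z.items()})
```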
The detection of CII 158 and OIII 88 line emissions at \(z\gtrsim 6\) will play a crucial role in understanding the epoch of reionization. In contrast, detecting higher J
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
\(\nu_{\rm obs}\) [GHz] & \(z_{\rm CII158}\) & \(z_{\rm OIII88}\) & \(z_{\rm CO76}\) \\
\hline
220 & 7.6 & 14.5 & 2.66 \\
280 & 5.8 & 11.2 & 1.87 \\
350 & 4.4 & 8.7 & 1.30 \\
410 & 3.6 & 7.3 & 0.96 \\
\hline
\end{tabular}
\end{table}
Table 1: Redshift of observation \(z\) of the CII 158, OIII 88, and CO (7-6) emission lines at the different observational frequency bands \(\nu_{\rm obs}\) of EoR-Spec on FYST (Aravena et al., 2021).
Figure 1: Redshift evolution of different atomic and molecular lines of interest in the redshift-frequency space. The four horizontal lines show the central frequencies of EoR-Spec on FYST, \(\nu_{\rm cen}\), and the corresponding shaded regions represent the frequency bandwidths, \(\Delta\nu\). Dashed lines show the rotational transitions of CO from the \(J_{\rm up}\) to the \((J_{\rm up}-1)\) level.
ladder transitions will provide us with information about structure formation and galaxy evolution during the post-reionization era. To quantify their relative contributions to the total observed signal, we calculate the power spectra of the signals using both the halo model approach and N-body simulations. We show a few faint lines, such as OI 145, OI 63, and OIII 52, which could act as interlopers because of their redshift overlaps with FYST's EoR-Spec frequencies. Simple modelling of these lines is also important to understand the foreground contamination of CII 158 and OIII 88. The FYST's EoR-Spec survey will scan the sky with a frequency range of 220-410 GHz at a spectral resolution of \(R\sim 100\). This corresponds to the redshift coverage of \(z_{\rm CII158}\sim 3.6-7.6\) for the CII 158 line and \(z_{\rm OIII88}\sim 7.3-14.5\). We show the redshifts for the line emission corresponding to a few central frequencies of FYST's EoR-Spec, such as 220 GHz, 280 GHz, 350 GHz, and 410 GHz. With the LIMpy code, the intensities and power spectra of any selected lines can be generated at any redshift between \(z\sim 0\)-10. However, in this paper, we show the results only for the redshifts mentioned in Table 1.
We describe the workflow of the LIMpy package in Figure 2. The main ingredients are fed to the code as input to initialize it. In Section 2.1 we describe the built-in models of the star formation histories of galaxies, which can be selected by passing the model name according to the documentation. The default cosmological parameters are based on the Planck 2018 results, and users can modify them by editing the input file. There are several models that convert the SFR to the different line luminosities, and these can likewise be selected through the inputs. Once the basic cosmological and astrophysical parameters are initialized, the code can calculate the power spectrum based either on the halo-model approach or on simulations. Next, we calculate the power spectrum and forecast the signal-to-noise ratio for a particular experiment, given the telescope configuration for the white-noise calculation. In principle, different noise sources, such as atmospheric noise, foreground contamination, instrumental white noise, etc., can be passed to the code together to calculate the signal-to-noise ratio. The final goal is to make parameter forecasts based on particular observations or mock data. Either Fisher forecasts or MCMC algorithms can be applied for parameter estimation using the LIMpy modules.
In the following subsections, we review and summarize the basic properties of star formation histories and their relationship with atomic and molecular line luminosities. These models are based on several assumptions, and changing those assumptions will typically lead to a change in results. The detailed analysis and interpretation of the SFR based on galaxy formation models is beyond the scope of this paper. Our main goal is to quantify the SFR as a function of halo mass \(M_{\rm halo}\) and \(z\) so that we can calculate the necessary quantities to estimate the power spectra of line emissions over a broad redshift range.
### Empirical models of SFR
One of the most complex problems in the field of modern astrophysics is how stars form in galaxies and what role star formation plays in galaxy evolution. The SFR across cosmic time and its relation with halo mass are key to understanding the morphology and the chemical and physical properties of galaxies. Several simulation suites of galaxy formation incorporate complicated astrophysical processes in galaxies and are capable of shedding light on the SFR\(-M_{\rm halo}\) relation across cosmic time (e.g., Crain et al., 2015; Springel et al., 2018; Henden et al., 2018; Behroozi et al., 2019). Multi-wavelength observations of galaxies in the UV by _HST_ and far-IR observations by the _Herschel_ telescope reconstructed the cosmic star formation density out to redshift \(z\lesssim 10\) (Madau and Dickinson, 2014). Due to the lack of observational data at high redshift (\(z\gtrsim 4\)), statistical errors on the SFRD increase significantly. We aim to reconstruct the SFR empirically from several models and simulations that vary the mass of host halos across a wide range of redshifts. We incorporate five SFR models that can be used to produce line intensity maps, namely Behroozi19 from UniverseMachine (Behroozi et al., 2019), Tng300 and Tng100 (Springel et al., 2018; Pillepich et al., 2018) from IllustrisTNG, as well as fitting functions such as Silva15 (Silva et al., 2015) and Fonseca16 (Fonseca et al., 2017).
In Silva15 (Silva et al., 2015), the average SFR is extracted from a post-processed simulated galaxy catalogue (De Lucia and Blaizot, 2007; Guo et al., 2011), in which the minimum halo mass is set to \(10^{8}\,M_{\odot}/h\). The \({\rm SFR}-M_{\rm halo}\) scaling relation is applicable over a broad redshift range, from \(z=0\) to 20, which makes it suitable for studying high-redshift line intensities, particularly OIII 88 and OIII 52. In the Silva15 SFR model, the SFR is parameterized with two power-law terms in halo mass, whereas the Fonseca16 model, based on the same simulated catalogue, uses three power-law terms in halo mass. The parameters of the Fonseca16 SFR function are given for the redshift range 0-10, and we keep the SFR for the range \(10-20\) fixed at its value at redshift \(z=10\). Various physical processes, such as
galaxy mergers, the effect of the environment, feedback, etc., are involved in the SFR across a broad range of redshifts, which is very complex to model. Hence, we adopt fitting SFR functions such as Fonseca et al. (2017) and Silva et al. (2015). For comparison, we use the output of SFR for each halo from IllustrisTNG simulations done in box size for \(L=100\) cMpc and 300 cMpc (hereafter TNG100 and TNG300, respectively) (Springel et al., 2018; Pillepich et al., 2018).
We adopt the output of the UniverseMachine simulations6 to infer the SFR across the redshift \(z=0-10\)(Behroozi et al., 2019). The empirical methods for tracking down the SFR of each halo across the redshift range are constrained by the observations such as galaxy UV luminosity functions, observed stellar mass functions, quenched fractions, etc. Furthermore, to evaluate the best-fit function from the scattered SFRs from the TNG100 and TNG300 simulations, we take 100 bins in halo mass from \(10^{10}\,M_{\odot}/h\) to \(10^{15}\,M_{\odot}/h\) and take the median value of all the SFRs that fall into each mass bin. We fix the minimum mass of the line emission sources to \(M_{\rm min}=10^{10}\,M_{\odot}/h\) throughout this paper. We use these star formation histories to understand the multi-line luminosities and their astrophysical uncertainties due to the scatter of SFR for a fixed halo mass.
Footnote 6: [https://www.peterbehroozi.com/data.html](https://www.peterbehroozi.com/data.html)
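The binning procedure described above for the TNG catalogues can be sketched as follows (a minimal version, assuming per-halo arrays `m_halo` and `sfr` from a simulation snapshot):

```python
import numpy as np

def median_sfr_relation(m_halo, sfr, n_bins=100, m_lo=1e10, m_hi=1e15):
    """Median SFR in logarithmic halo-mass bins, as used to build the
    Tng100/Tng300 best-fit SFR-M_halo relations."""
    edges = np.logspace(np.log10(m_lo), np.log10(m_hi), n_bins + 1)
    centers = np.sqrt(edges[:-1] * edges[1:])
    idx = np.digitize(m_halo, edges) - 1
    med = np.array([np.median(sfr[idx == i]) if np.any(idx == i) else np.nan
                    for i in range(n_bins)])
    return centers, med
```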
In Figure 3, we show how the SFR varies with the masses of host dark matter halos as predicted by different empirical models. The ratio between the maximum and minimum SFR for a fixed halo mass varies at different redshifts due to the complex physical process of star formation. For \(M_{\rm halo}=10^{11}\,M_{\odot}/h\), this ratio becomes 250, 96, and 11 at redshifts \(z\approx 1\), 4, and 6, respectively. This figure provides insights into the evolution of star formation in halos of varying masses and at different cosmic epochs, shedding light on the complex interplay between dark matter, gas, and other astrophysical processes that shape the growth and evolution of galaxies over time. The exact nature of the \({\rm SFR-}M_{\rm halo}\) relationship is complex and depends on several factors, including the efficiency of gas cooling, the ability of gas to collapse into small-scale structures, and feedback processes such as supernova explosions that can regulate star formation (Conroy & Wechsler, 2009). Understanding the
Figure 2: The schematic flowchart of the LIMpy package. Several built-in star formation models and multi-line luminosity models are implemented as inputs to the code. Based on these input choices, the package will calculate the power spectrum relying on either the halo model approach or painting the line luminosities on an externally provided halo catalogue. The package can make line intensity maps, and if the specification of an experiment is provided, it can calculate the signal-to-noise ratio. Furthermore, LIMpy can be used for parameter estimation based on Markov Chain Monte Carlo (MCMC) or Fisher matrix methods. These methods incorporate observational data to infer the parameters that best describe the underlying astrophysical processes.
underlying physical mechanisms that govern the star formation process in galaxies and their dependence on halo mass and redshift is crucial for developing a comprehensive picture of galaxy formation and evolution (Sun et al., 2022). For simplicity and optimization purposes, we use the average \(\mathrm{SFR}-M_{\mathrm{halo}}\) relation to estimate the multi-line luminosities, ignoring the dependencies of other astrophysical parameters related to the complex star formation history in haloes.
### SFR - \(L_{\mathrm{line}}\) relation
A crucial question that arises in the development of line intensity mapping (LIM) models is which key factors trace the observed multi-line luminosities in galaxies. It is assumed that multi-line luminosities trace the star formation histories, and that the SFR can be converted to line luminosities using a power-law relation. In the previous subsection, we modelled how the SFR depends on halo mass; here we discuss how the multi-line luminosities are related to the SFR, so that for a given halo mass we can estimate the multi-line luminosities. We incorporate several \(L_{\mathrm{line}}-M_{\mathrm{halo}}\) relations in LIMpy to study the modelling uncertainties in the intensity maps.
The scaling relations for \(L_{\mathrm{CII158}}-SFR\) can be called by name as Visbal10 (Visbal & Loeb, 2010), Silva15-m1, Silva15-m2, Silva15-m3, Silva15-m4 (Silva et al., 2015), Fonseca16 (Fonseca et al., 2017), Lagache18 (Lagache et al., 2018), and Schaerer20 (Schaerer et al., 2020). In the Visbal10 model, the luminosity of these lines scales with the SFR as \(L_{\mathrm{line}}=R_{\mathrm{line}}\times\) SFR, where \(R_{\mathrm{line}}\) is a conversion factor (Visbal & Loeb, 2010) that does not evolve with redshift. Assuming all galaxies have the same \(R_{\mathrm{line}}\), its values are \(6\times 10^{6}\) and \(2.3\times 10^{6}\) in units of \(L_{\odot}/(M_{\odot}/yr)\) for the CII 158 and OIII 88 lines, respectively. The values of \(R_{\mathrm{line}}\) for the J-ladder transitions of CO molecules are given in Visbal & Loeb (2010, Table 1). In the Silva15-m1 (and m2, m3, and m4) models, the luminosity of CII 158 is modelled as a power-law relation, \(\log L_{CII\,158}=\alpha+\beta\log(SFR)\). The four sets of models are given by the different values of \(\alpha\) and \(\beta\) that we specify in LIMpy by the names Silva15-m1, Silva15-m2, Silva15-m3, and Silva15-m4 (Silva et al., 2015). For the Fonseca16 model, the luminosities of the CII 158, OIII 88, OI 145, OI 63, and OIII 52 lines can be expressed with the same power-law relation in SFR, but the coefficients are different from the Silva15 model, as given in Fonseca et al. (2017). For the Silva15 and Fonseca16 models, the coefficients \(\alpha\) and \(\beta\) do not change with redshift, so the multi-line luminosities vary with redshift only through the evolution of the SFR. The redshift evolution of the coefficients is captured with a modified version of the power law in the Lagache18 model (Lagache et al., 2018). In addition, we also model the scaling relation from observations of the ALMA-ALPINE survey, which can be written in a
Figure 3: We illustrate the assumed SFR models as a function of halo mass at three different redshifts: \(z\sim 1\) (left), \(z\sim 4\) (middle), and \(z\sim 6\) (right). The scatter points represent the star formation histories of individual halos in the TNG300 simulations (Springel et al., 2018), while the pink and purple solid lines depict the best-fit curves based on the TNG100 and TNG300 simulations. For comparison, we show the interpolated SFR from Behroozi et al. (2019), and analytic models of SFR taken from Silva et al. (2015) and Fonseca et al. (2017). This plot captures the complex picture of star formation history in halos, as dark matter halos of the same mass do not all form stars at the same rate. The uncertainty in the SFR can propagate to the luminosity of the various emission lines. Careful consideration of the uncertainties in the SFR is necessary when interpreting intensity mapping observations and making predictions about the underlying astrophysical processes that drive the formation and evolution of galaxies over cosmic time.
similar form. The mean values of \(\alpha\) and \(\beta\) are given in Table 2 of Schaerer et al. (2020).
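In code, the two scaling-relation forms above amount to very little; a minimal sketch follows (the \(R_{\rm line}\) values are those quoted above, while `alpha` and `beta` stand in for the model-specific coefficients):

```python
import numpy as np

R_LINE = {"CII158": 6.0e6, "OIII88": 2.3e6}  # [L_sun / (M_sun/yr)], Visbal10

def L_line_linear(sfr, line="CII158"):
    """Visbal10-style constant conversion, L_line = R_line * SFR."""
    return R_LINE[line] * sfr

def L_line_powerlaw(sfr, alpha, beta):
    """Silva15/Fonseca16/Schaerer20-style relation,
    log10 L_line = alpha + beta * log10 SFR."""
    return 10.0 ** (alpha + beta * np.log10(sfr))
```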
For modelling OIII 88 lines, we include the \(L_{\rm OIII\,88}-SFR\) scaling relations defined in the code by Visbal10 (Visbal & Loeb, 2010), Delooze14 (De Looze et al., 2014), Fonseca16 (Fonseca et al., 2017), Gong17 (Gong et al., 2017), Harikane20 (Harikane et al., 2020), and Kannan21 (Kannan et al., 2022). The SFR in the far-infrared is modelled in terms of the CII 158, OI 63, and OIII 88 line emissions from the Herschel Dwarf Galaxy Survey, and the scaling relations are obtained from De Looze et al. (2014, Table 2). They find that OI 63 and OIII 88 trace the SFR better than the CII 158 lines, and that the dispersion in the relation between SFR and \(L_{\rm line}\) for OIII 88 and OI 63 is improved by a factor of \(\sim 2\) compared with the CII 158 lines. In addition, we adopt the scaling relations between SFR and the luminosity of OIII 88 lines based on ALMA observations at \(z\sim 6-9\) (Harikane et al., 2020). They find that the ratio \(L_{\rm OIII\,88}/L_{\rm CII\,158}\) can be 10 times higher than at \(z\sim 0\), suggesting a strong redshift evolution of line luminosities. When we implement this scaling relation in LIMpy, the OIII 88 line luminosities change across redshift due to the change of SFR. Furthermore, we use the scaling relation of OIII 88 derived from the observed luminosity function and SFRD at \(z\lesssim 5\) (Gong et al. 2017, Section 2).
We use several scaling relations to model CO line emissions in our study. One such relation is based on the spectra obtained from the _Herschel_ SPIRE Fourier Transform Spectrometer (Kamenetzky et al., 2016). The luminosity of CO molecular emission, \(L_{\rm CO}\), is found to depend on the FIR luminosity of the galaxy samples, \(L_{\rm FIR}\). In order to calculate the luminosity of all lines for a given SFR, we convert it into \(L_{\rm FIR}\), where \(L_{\rm FIR}=1.1\times 10^{10}\,L_{\odot}\times SFR\), with SFR in \(M_{\odot}/{\rm yr}\) (Carilli, 2011). Additionally, we incorporate another model for CO molecular transitions based on ALMA observations (Greve et al., 2014). This model allows us to estimate the luminosities of the full set of rotational transitions of CO molecules using data from _Herschel_ SPIRE-FTS and ground-based telescopes (Greve et al. 2014, Table 3). The full J-ladder rotational transitions of CO are essential for understanding the relative contributions of the interlopers that contaminate the desired signal we aim to detect. Using multiple models to estimate the CO line emissions helps us account for the uncertainties associated with these relations.
Figure 4 shows the evolution of different line luminosities for the Silva15 star formation model. The plot shows the ratio between the maximum and minimum luminosities of the CII 158 lines, the CO (7-6) line, and the OIII 88 line as a function of redshift for a fixed minimum halo mass of \(M_{\rm min}=10^{10}\,M_{\odot}/h\). At a redshift of \(z\sim 3.8\), the ratio between the maximum and minimum CII 158 line luminosities is approximately 45. However, for the same minimum halo mass, the ratio for CO (7-6) luminosity becomes 110 at \(z\sim 2.66\) and 310 at \(z\sim 0.96\), highlighting the increasing spread in luminosity ratios as the redshift decreases. The ratio for the CO (7-6) line decreases to 1.5 and 6, respectively, if the minimum halo mass is set to \(10^{11}\,M_{\odot}/h\), suggesting that the choice of minimum halo mass can significantly impact the luminosity ratios. Finally, for the OIII 88 line luminosity with \(M_{\rm min}=10^{10}\,M_{\odot}/h\), the ratio between the maximum and minimum luminosity is 6 and 8 at redshifts z= 14.5 and 7.3, respectively, indicating that this line is less sensitive to changes in redshift than the other lines considered in the plot.
## 3 Analytic Model
In this section, we employ a halo model formalism to compute the power spectrum of multi-line intensities. The intensity of lines emitted at \(z_{\rm em}\) can be expressed
Figure 4: Redshift evolution of the CII 158, CO (7-6), and OIII 88 luminosities based on the models mentioned in the legends. Solid and dashed lines show the line luminosities for halo masses of \(10^{10}\,M_{\odot}/h\) and \(10^{11}\,M_{\odot}/h\), respectively. The vertical shaded regions are the redshift coverage of these lines corresponding to the frequency bandwidths of the EoR-Spec on FYST.
Figure 5: The power spectra of the CII 158 line at redshifts 7.6, 5.8, 4.4, and 3.6. The dashed lines in each panel show the contribution to the signal from the clustering term alone, while the solid lines represent the total signal, including both the clustering and shot noise terms. The solid red lines in each panel show the mean of all the models, representing the average signal predicted by the different theoretical models that we consider here. The shaded grey region represents the scales where the dominant term is shot noise, while the yellow-shaded area roughly corresponds to the scales where the clustering term is larger than the shot noise term. The shape and amplitude of the CII 158 power spectrum have important implications for studying ISM physics and structure formation at high redshift, as they provide insights into the large-scale structure of matter in the Universe and the properties of the sources that generate the CII 158 signal.
as
\[I_{\rm line}(z)=\frac{c}{4\pi}\frac{1}{\nu_{\rm rest}H(z_{\rm em})}\int_{M_{\rm min }}^{M_{\rm max}}L_{\rm line}(M,z)\frac{dn}{dM}dM\,. \tag{1}\]
In this equation, \(c\) represents the speed of light in a vacuum, and \(H(z_{\rm em})\) denotes the Hubble parameter at the redshift of line emission. The halo mass function is represented by \(dn/dM\). Throughout our study, we utilize the Tinker halo mass function for our calculations (Tinker et al., 2008). Here, \(M_{\rm min}\) refers to the minimum mass of the halos contributing to the intensity maps, while \(M_{\rm max}\) signifies the upper mass threshold of the sources.
The power spectrum of line-intensity fluctuations is the sum of a clustering (two-halo) term and a shot-noise term, and can be written as
\[P_{\rm line}(k,z)=I_{\rm line}(z)^{2}\left[b_{\rm line}^{2}(z)P_{m}(k,z)+P_{\rm line}^{\rm shot}(z)\right]. \tag{2}\]
In the above equation, \(P_{m}(k,z)\) is the matter power spectrum and \(P_{\rm line}^{\rm shot}\) is the shot-noise term. We calculate the matter power spectrum using CAMB (Lewis & Challinor, 2011) in the linear approximation. The bias of the line emission, \(b_{\rm line}\), is the luminosity-weighted bias of the line-emitting sources, which can be written as
\[b_{\rm line}(z)=\frac{\int_{M_{\rm min}}^{M_{\rm max}}dM\,(dn/dM)\,L_{\rm line}(M,z)\,b_{h}(M,z)}{\int_{M_{\rm min}}^{M_{\rm max}}dM\,(dn/dM)\,L_{\rm line}(M,z)}\,. \tag{3}\]
Here, \(b_{h}\) is the bias of dark matter halos. We use the Colossus7 package to calculate the halo bias and the halo mass function (Diemer, 2018). The line bias sets the amplitude of the clustering term of the power spectrum, which dominates on large scales.
Footnote 7: [https://bdiemer.bitbucket.io/colossus/](https://bdiemer.bitbucket.io/colossus/)
Finally, the shot-noise term of the power spectrum is set by the second moment of the line luminosity function and is given by
\[P_{\rm line}^{\rm shot}(z)=\frac{\int_{M_{\rm min}}^{M_{\rm max}}dM\,(dn/dM)\,L_{\rm line}(M,z)^{2}}{\left[\int_{M_{\rm min}}^{M_{\rm max}}dM\,(dn/dM)\,L_{\rm line}(M,z)\right]^{2}}\,. \tag{4}\]
The shot-noise term contributes equally at all scales.
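Numerically, Eqs. (1), (3), and (4) are one-dimensional integrals over halo mass; the sketch below assumes user-supplied callables `dndM(M)`, `bias_h(M)`, and `L_line(M)` evaluated at a fixed redshift (e.g., built from Colossus, as in LIMpy), with mutually consistent units:

```python
import numpy as np

def line_moments(dndM, bias_h, L_line, M_min=1e10, M_max=1e15, n=512):
    """Mass integrals entering Eqs. (1), (3) and (4) at fixed redshift."""
    M = np.logspace(np.log10(M_min), np.log10(M_max), n)
    w = dndM(M)                              # halo mass function dn/dM
    L = L_line(M)                            # line luminosity L(M)
    I1 = np.trapz(w * L, M)                  # luminosity-density integral, Eq. (1)
    b = np.trapz(w * L * bias_h(M), M) / I1  # luminosity-weighted bias, Eq. (3)
    P_shot = np.trapz(w * L**2, M) / I1**2   # shot noise, Eq. (4)
    return I1, b, P_shot
```

The clustering and shot-noise pieces are then combined with the matter power spectrum following Eq. (2).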
Figure 5 presents the power spectrum of CII 158 for various models, highlighting the clustering term and the total signal comprising both the clustering and shot-noise terms. Despite using the same SFR as input to the \(L_{\rm line}-SFR\) relations, the dispersion in the amplitude of the power spectra due to modelling differences exceeds one order of magnitude at these redshifts. Some models are based on the \(L_{\rm line}-M_{\rm halo}\) relation, while others convert SFR to CII 158 line luminosities. For the latter case, we first calculate the SFR for different halo masses and then determine the line luminosities based on the \(L_{\rm line}-SFR\) relation. Similarly, we can generate the power spectra of OIII 88 and the molecular lines from CO (1-0) to CO (13-12) for the available models. As an example, we show the power spectra of OIII 88 and CO (7-6) at redshifts corresponding to FYST's EoR-Spec in Appendix B. In this way, we can assess the contribution of these lines to the total signal in a particular frequency channel and determine their detectability in the presence of interlopers.
In Equations (3) and (4), we made the assumption that there is only one star-forming source in each halo. However, this is not the case for high-mass halos. To account for multiple star-forming sources in halos, we employ the halo occupation distribution (HOD) model. The mean occupation functions for central and satellite galaxies in a halo of mass \(M_{h}\) are given by (Zheng et al., 2005):
\[\langle N_{\rm cen}(M_{h})\rangle=\frac{1}{2}\left[1+{\rm erf}\left(\frac{\log M _{\rm h}-\log M_{\rm th}}{\sigma_{\rm logM}}\right)\right]\,, \tag{5}\]
\[\langle N_{\rm sat}(M_{h})\rangle=\left(\frac{M_{\rm h}-M_{\rm cut}}{M_{1}} \right)^{\alpha_{\rm g}}\,. \tag{6}\]
In the above equations, \(\langle N_{\rm cen}(M_h)\rangle\) and \(\langle N_{\rm sat}(M_h)\rangle\) represent the mean number of central and satellite galaxies per halo, respectively. \(M_{\rm th}\) denotes the threshold halo mass required to host a central galaxy, while \(M_{\rm cut}\) represents the minimum mass necessary for hosting satellite galaxies. \(\sigma_{\rm logM}\) is the width of the transition in the step-like error function, \(\alpha_{\rm g}\) refers to the power-law exponent, and \(M_{1}\) is the mass normalization factor. The HOD-model parameters are given as \(\log M_{\rm th}=8\), \(\sigma_{\rm logM}=0.287\), \(\log M_{\rm cut}=12.95\), \(\log M_{1}=13.62\), and \(\alpha_{\rm g}=0.98\) (Zheng et al., 2005). By incorporating the HOD model, we can more accurately account for the distribution of star-forming sources in halos, particularly in high-mass halos. The inclusion of the HOD model provides a more comprehensive picture of the power spectrum of multi-line intensities.
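The mean occupations of Eqs. (5)-(6) can be sketched directly (masses in \(M_{\odot}/h\); the parameter values are those quoted above, logarithms are assumed to be base 10, and the clipping below \(M_{\rm cut}\) is our own guard against a negative power-law base):

```python
import numpy as np
from scipy.special import erf

LOG_M_TH, SIGMA_LOGM = 8.0, 0.287
M_CUT, M_1, ALPHA_G = 10**12.95, 10**13.62, 0.98

def n_cen(M_h):
    """Mean central occupation, Eq. (5)."""
    return 0.5 * (1.0 + erf((np.log10(M_h) - LOG_M_TH) / SIGMA_LOGM))

def n_sat(M_h):
    """Mean satellite occupation, Eq. (6); zero below M_cut."""
    x = np.clip(np.asarray(M_h, dtype=float) - M_CUT, 0.0, None) / M_1
    return x ** ALPHA_G
```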
Figure 6 presents the percentage difference in the power spectra of CII 158 lines resulting from the inclusion of the HOD model in our calculations. The HOD model accounts for the line emissions both from central and satellite galaxies, and its inclusion can significantly affect the 1-halo term of power spectra. The plot shows the increase in the power spectra due to the HOD model
for different redshifts and scales, represented by the percentage difference compared to the power spectra without HOD. At a scale of \(k\sim 5\) \(h\,\)Mpc\({}^{-1}\), the power spectra of CII 158 lines increase by 73%, 25%, 2%, and 0.1% at redshifts 3.6, 4.4, 5.8, and 7.6, respectively, highlighting the significant impact of the HOD model on the power spectra of CII 158 lines.
## 4 Simulated Maps of MLIM
In addition to theoretical modelling, LIMpy also performs the simulation of multi-line intensity maps at several redshifts. We utilize the semi-numerical cosmological simulation 21cmFAST8 to generate the dark matter halo catalogue. We execute the simulation on a box with side length L = 800 \(c\)Mpc (\(\approx 544\)\(c\)Mpc/\(h\)) (Mesinger et al., 2011). The initial conditions for generating the perturbations in density are set at \(z=300\). The density field evolves over cosmic time following linear perturbation theory. Next, we generate snapshots at several redshifts for the different lines corresponding to the FYST's EoR-Spec frequency channels. We set the minimum mass of the halos to \(M_{\rm min}=10^{10}\,M_{\odot}/h\). The simulation setup uses \(N_{\rm grid}=1024\) grid cells along the box length, so the total number of grid cells is \(N_{\rm grid}^{3}\). This corresponds to a cell size of 0.53 \(c\)Mpc/\(h\). After obtaining the halo field or resolved catalogues from simulations, we assign a specific line luminosity to those halos based on an SFR model and a line luminosity model.
Footnote 8: [https://github.com/andreimesinger/21cmFAST](https://github.com/andreimesinger/21cmFAST)
The primary advantage of line intensity mapping simulations is that we do not need to resolve individual sources. Consequently, we use a low-resolution simulation with a smaller \(N_{\rm grid}\). This approach reduces the simulation time. We save all the intensity grids at different redshifts to calculate the power spectrum.
The 3D line intensity power spectrum in a simulation box is expressed by:
\[\Delta_{\rm line}^{2}(k)=\frac{1}{V_{\rm box}}\frac{k^{3}}{2\pi^{2}}\langle \tilde{I}^{2}(k)\rangle \tag{7}\]
Here, \(V_{\rm box}\) represents the total volume of the simulation box, and \(\tilde{I}\) is the Fourier transform of the intensity grid. We then perform the Fourier transform of the intensity grid using the NumPy FFT module. The intensity of each cell is calculated as (Dumitru et al., 2019):
\[I_{\rm cell}=\frac{c}{4\pi}\frac{1}{\nu_{\rm rest}H(z_{\rm em})}\frac{L_{\rm line,cell}}{V_{\rm cell}}\,. \tag{8}\]
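As an illustration of Equation (7), the sketch below estimates the spherically averaged dimensionless power spectrum of an intensity grid with the NumPy FFT module. The continuum normalization of the transform and the shell-binning choices are ours, and no window or aliasing corrections are applied.

```python
import numpy as np

def delta2_line(grid, box_len):
    """Spherically averaged Delta^2_line(k) of an intensity grid, Eq. (7).
    grid: (N, N, N) array of cell intensities; box_len in cMpc/h."""
    n = grid.shape[0]
    v_box, v_cell = box_len**3, (box_len / n)**3

    # continuum-normalized Fourier transform: I~(k) ~ V_cell * DFT(I)
    i_tilde = np.fft.rfftn(grid) * v_cell
    p_k = np.abs(i_tilde)**2 / v_box          # P(k) = |I~(k)|^2 / V_box

    # |k| on the FFT grid, in h/cMpc
    kf = 2.0 * np.pi / box_len                # fundamental mode
    kx = np.fft.fftfreq(n) * n * kf
    kz = np.fft.rfftfreq(n) * n * kf          # half axis from rfftn
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                   + kz[None, None, :]**2)

    # average P(k) in spherical shells one fundamental mode wide
    edges = np.arange(0.5 * kf, kmag.max() + kf, kf)
    idx = np.digitize(kmag.ravel(), edges)
    nbin = len(edges) - 1
    counts = np.bincount(idx, minlength=len(edges) + 1)[1:nbin + 1]
    sums = np.bincount(idx, weights=p_k.ravel(),
                       minlength=len(edges) + 1)[1:nbin + 1]
    k_cen = 0.5 * (edges[:-1] + edges[1:])
    return k_cen, k_cen**3 * (sums / np.maximum(counts, 1)) / (2.0 * np.pi**2)

# toy usage on a random grid:
k, d2 = delta2_line(np.random.rand(64, 64, 64), box_len=544.0)
```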
In principle, CO transitions with \(J_{\rm up}\geq 4\) at low redshifts will act as interlopers for all four of FYST's EoR-Spec frequency channels; however, we show only the CO (7-6) transition as an example case. We project intensity grids of length 35 Mpc, 16 Mpc, and 60 Mpc for the CII 158, CO (7-6), and OIII 88 lines, respectively, which roughly correspond to the frequency resolution of the EoR-Spec on FYST at the central frequency of 280 GHz.
Once we generate the intensity grid with a reasonable value of \(N_{\rm grid}\) to reach the \(M_{\rm min}\) of line-emitting sources, we need to incorporate the effect of the frequency resolution (\(\delta\nu_{\rm obs}\)), since an experiment cannot resolve sources along the redshift axis on scales smaller than \(\delta\nu_{\rm obs}\). Therefore, the effective number of grid points along the redshift axis depends on the frequency resolution of the experiment. With a resolution of \(N_{\rm grid}=1024\), we select halos above the mass \(M_{\rm min}\gtrsim 10^{10}\)\(M_{\odot}/h\). The 21cmFAST code does not explicitly resolve each halo in the simulation box, but it generates a halo field semi-numerically that compares accurately with the output of N-body simulations (Mesinger et al., 2011; Mas-Ribas et al., 2022). This saves memory and reduces the run time. If \(\delta\nu_{\rm obs}\) is 2.8 GHz around the central observational frequency \(\nu_{\rm obs}=280\) GHz for an experiment probing CII 158 line emission, the corresponding redshift resolution is \(\delta z\approx 0.07\). This \(\delta z\) corresponds to the box length along the redshift axis
Figure 6: Percentage difference in the CII 158 power spectrum due to the use of a HOD model. For this example, we calculate the power spectrum based on the Silva15 star formation model and Fonseca16 line luminosity model of CII 158. It shows that the inclusion of the HOD model is more important for the power spectrum of CII 158 at low redshift.
Figure 7: We display simulated maps of CII 158 (first row), OIII 88 (second row), and CO (7-6) (third row) line intensities at redshifts corresponding to the central frequencies of the FYST's EoR-Spec experiment. The simulation boxes are generated with the 21cmFAST package for a \(544\,c\)Mpc\(/h\) box, and we keep the same initial conditions fixed for all simulations at different redshifts. The columns correspond to the specific observational frequency, as indicated by the column titles. The fourth row of the plot compares the dimensionless power spectra of these lines based on the TNG300 star formation model and the Visbal10 line luminosity model (Visbal & Loeb, 2010). For visualization purposes, Gaussian beam convolutions were applied to the maps with full-width half-maximum (FWHM) beam sizes of 58, 45, 37, and 32 arc seconds from left to right.
(we define it to be along the \(z\) axis of the Cartesian coordinate system) of \(\delta L_{z}\approx 44\,c\mathrm{Mpc}/h\), i.e., \(\delta N_{\mathrm{grid},z}\approx 90\) cells. Therefore, an experiment with this configuration cannot resolve sources along the redshift axis that fall between \(z=5.80\) and \(5.87\). In this case, the total number of cells of the same simulation box becomes \(1024\times 1024\times 11\), as the number of grid points along the \(z\)-axis reduces to \(N_{\mathrm{grid},z}=1024/\delta N_{\mathrm{grid},z}\approx 11\). We take the average intensity of all the intensity grids within this frequency resolution. We note that if \(\nu_{\mathrm{obs}}\) is not specified, or the length corresponding to \(\nu_{\mathrm{obs}}\) exceeds the length of the simulation box along the \(z\)-direction, the code does not apply the frequency-resolution effect and calculates the power spectrum based on the grid points that were used to generate the halo catalogue.
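The degrading step itself is simple block averaging along the redshift axis; a minimal sketch using the grid numbers of this worked example follows (the exact cell count per channel is derived from the quoted \(\delta L_{z}\) and may differ slightly from the text's rounded values).

```python
import numpy as np

# Grid numbers from the worked example: a 544 cMpc/h box with N_grid = 1024
# and one frequency channel spanning ~44 cMpc/h along the redshift (z) axis.
box_len, n_grid = 544.0, 1024
cell = box_len / n_grid                     # ~0.53 cMpc/h per cell
d_L_z = 44.0                                # channel thickness in cMpc/h
d_n_z = max(int(round(d_L_z / cell)), 1)    # ~83 cells (the text quotes ~90)
n_z_eff = n_grid // d_n_z                   # ~11-12 effective slabs along z

def degrade_along_z(grid, d_n_z):
    """Average the grid over blocks of d_n_z cells along z, mimicking a
    finite frequency resolution (any remainder cells are trimmed)."""
    nz = (grid.shape[2] // d_n_z) * d_n_z
    g = grid[:, :, :nz]
    return g.reshape(g.shape[0], g.shape[1], -1, d_n_z).mean(axis=3)

# e.g. low_res = degrade_along_z(intensity_grid, d_n_z)
```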
The code presented in this paper is versatile and can accommodate any type of halo catalogue that is provided as input. Our code requires the halo catalogue to contain two essential pieces of information: the halo mass in units of \(M_{\odot}/h\), and the halo positions \((x,y,z)\) in Cartesian coordinates specified in units of \(\mathrm{Mpc}/h\). By accepting any halo catalogue as an input, the code offers the flexibility to utilize halo catalogues generated from full N-body simulations and perform detailed astrophysical analyses. This functionality is especially useful in studying the properties and evolution of halos in large-scale structure simulations, as well as in exploring the connection between halo properties and other astrophysical observables.
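As a sketch of this painting step (not the LIMpy API itself), the snippet below deposits halo luminosities from a generic catalogue onto a grid with `np.histogramdd`; the power-law luminosity model is a toy stand-in for the SFR and line luminosity models discussed earlier.

```python
import numpy as np

def paint_halos(mass, pos, box_len, n_grid, luminosity_model):
    """Return an (n_grid,)*3 grid of total line luminosity per cell.
    mass: halo masses in M_sun/h; pos: (N, 3) positions in Mpc/h."""
    lum = luminosity_model(mass)                     # L_line per halo
    edges = np.linspace(0.0, box_len, n_grid + 1)
    grid, _ = np.histogramdd(pos, bins=(edges,) * 3, weights=lum)
    return grid

# toy power-law model as a stand-in for a real SFR -> L_line relation
toy_model = lambda m: 1e-4 * (m / 1e10)

rng = np.random.default_rng(0)
mass = 10.0 ** rng.uniform(10, 13, size=10_000)      # M_sun/h
pos = rng.uniform(0.0, 544.0, size=(10_000, 3))      # Mpc/h
lum_grid = paint_halos(mass, pos, box_len=544.0, n_grid=128,
                       luminosity_model=toy_model)
```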
Next, we apply beam convolution techniques to mimic the actual observation. Although the actual beam of an experiment can have a complex pattern, for simplicity we assume the beam is well approximated by a Gaussian. The beam pattern is then characterized by \(\theta_{\mathrm{FWHM}}\), the beam size in arcminutes at full width at half maximum (FWHM), and the standard deviation of the beam is \(\sigma_{\mathrm{beam}}=\theta_{\mathrm{FWHM}}/\sqrt{8\ln 2}\). We use the Astropy9 package to perform the beam convolution on the simulated line intensity maps. The beam convolution does not change the power spectrum at large scales but significantly reduces the power at small scales, i.e., at wavenumbers above those corresponding to the beam size.
Footnote 9: [https://www.astropy.org/](https://www.astropy.org/)
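A minimal sketch of such a Gaussian beam convolution using Astropy's convolution module is given below; the beam FWHM and pixel scale in the example are illustrative placeholders rather than exact EoR-Spec values.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve_fft

def beam_smooth(map2d, theta_fwhm_arcsec, pix_arcsec):
    """Convolve a 2D intensity map with a Gaussian beam.
    theta_fwhm_arcsec: beam FWHM; pix_arcsec: pixel size on the sky."""
    sigma_pix = (theta_fwhm_arcsec / np.sqrt(8.0 * np.log(2.0))) / pix_arcsec
    kernel = Gaussian2DKernel(x_stddev=sigma_pix)
    return convolve_fft(map2d, kernel, boundary="wrap")

# e.g. a 37'' beam on a map with assumed 20'' pixels:
# smoothed = beam_smooth(intensity_map[:, :, 0], 37.0, 20.0)
```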
The shape and amplitude of the power spectra of different lines at various redshifts provide valuable information about the properties of the intergalactic medium and galaxy populations. In Figure 7, we present the simulated intensity maps of the CII 158, CO (7-6), and OIII 88 line emissions from halos at several redshifts corresponding to FYST's EoR-Spec central observational frequencies. We project intensity grids of length \(\approx 1.3\)\(c\mathrm{Mpc}/h\) for the CII 158, CO (7-6), and OIII 88 lines. We show the three-dimensional power spectra of the intensity maps without performing the beam convolution, so that both the clustering and shot-noise terms are visible. For the visualization, however, we convolve the intensity maps with a Gaussian beam, with FWHM values varying according to the EoR-Spec on FYST \(\nu_{\mathrm{obs}}\) (Aravena et al., 2021). For all intensity maps, we consider \(M_{\mathrm{min}}=10^{10}\,M_{\odot}/h\) at the redshifts mentioned in Figure 6, except for the OIII 88 intensity map at \(z\sim 14.5\). Since there are no high-mass halos \(\gtrsim 10^{10}\,M_{\odot}/h\) present at such a high redshift, we instead include halos with masses above \(10^{9}\,M_{\odot}/h\) in that case.
At \(\nu_{\mathrm{obs}}\sim 220\,\mathrm{GHz}\), the power spectrum of the CO (7-6) line is \(\sim 350\) times larger than the CII 158 power spectrum at \(k\sim 0.1\) \(h\,\)Mpc\({}^{-1}\), but the ratio drops to 40 to 60 at the shot-noise-dominated scales, \(k\gtrsim 1\) \(h\,\)Mpc\({}^{-1}\). At this frequency, corresponding to \(z\sim 14.5\) for OIII 88, the OIII 88 signal is negligible as there are very few line-emitting sources. This comparison shows that it is impossible to detect OIII 88 from such a high redshift with the ongoing and planned MLIM experiments. However, at \(\nu_{\mathrm{obs}}\sim 410\,\mathrm{GHz}\), the CII 158 signal becomes larger than CO (7-6) by a factor of \(\sim 6\) at \(k=0.1\) \(h\,\)Mpc\({}^{-1}\) and \(\sim 2\) at \(k=1\) \(h\,\)Mpc\({}^{-1}\). Furthermore, at the same redshift, \(z\sim 7.4\), the OIII 88 power spectrum is approximately 4.5 and 1.7 times larger than the CII 158 power spectrum at \(k\sim 0.1\) \(h\,\)Mpc\({}^{-1}\) and \(1\) \(h\,\)Mpc\({}^{-1}\), respectively. Therefore, by using the two frequency bands at 220 and 410 GHz, we could detect the OIII 88 and CII 158 lines and perform cross-correlation studies, as they come from the same sources.
## 5 Detectability
In this section, we forecast the detectability of the CII 158 and OIII 88 lines by considering the specifications of the EoR-Spec experiment. The signal-to-noise ratio grows with the number of observed modes present in the survey volume. To determine the number of modes between the wave numbers \(k\) and \(k+\Delta k\), we use the following equation:
\[N_{m}(k_{i},z)=k_{i}^{2}\Delta k_{i}V_{\mathrm{surv}}/2\pi^{2}\,, \tag{9}\]
where \(k_{i}\) is the central wave number in the bin width, \(\Delta k_{i}\). The survey volume of an experiment is given by (Gong et al., 2017; Dumitru et al., 2019)
\[V_{\mathrm{surv}}=3.7\times 10^{7}\,(\mathrm{cMpc}/h)^{3} \left(\frac{\lambda_{\mathrm{line}}}{157.8\,\mu m}\right)\left(\frac{1+z}{8} \right)^{\frac{1}{2}}\\ \times\left(\frac{S_{A}}{16\,\mathrm{deg}^{2}}\right)\left(\frac{ B_{\nu}}{20\,\mathrm{GHz}}\right)\,. \tag{10}\]
In the above equation, \(\lambda_{\rm line}\) denotes the rest frame wavelength of the line emission, \(S_{A}\) is the effective survey area of an experiment, and \(B_{\nu}\) is the frequency bandwidth. For an EoR-Spec-like experiment on FYST, we consider \(B_{\nu}=40\) GHz for all frequency channels.
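For concreteness, a small helper evaluating Equation (10) is sketched below; note that the equation is normalized to \(B_{\nu}=20\) GHz, while we adopt 40 GHz channels.

```python
def survey_volume(lambda_line_um, z, s_a_deg2=16.0, b_nu_ghz=40.0):
    """Survey volume in (cMpc/h)^3 from Eq. (10)."""
    return (3.7e7 * (lambda_line_um / 157.8) * ((1.0 + z) / 8.0) ** 0.5
            * (s_a_deg2 / 16.0) * (b_nu_ghz / 20.0))

# e.g. CII 158 at z = 5.8 over 16 deg^2 with a 40 GHz band:
v = survey_volume(157.8, 5.8)   # ~6.8e7 (cMpc/h)^3
```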
To calculate the variance in each \(k\) bin for the detection of \(P_{\rm line}(k)\), we employ the following formula:
\[\sigma^{2}_{\rm line}(k,z)=\frac{\left[P_{\rm line}(k,z)+P_{\rm N}(k,z)\right]^{2}}{N_{m}(k,z)}\,. \tag{11}\]
Here, \(P_{\rm N}\) represents the noise power spectrum. The noise can be a combination of white noise, atmospheric noise, and interloper contributions. However, in this paper, we only take the white noise contribution into account to forecast the signal-to-noise ratio (SNR) for the EoR-Spec experiment on FYST (Aravena et al., 2021).
The signal-to-noise ratio, \((S/N)\), is given by:
\[(S/N)^{2}_{\rm cum}(z)=\sum_{i}P^{2}_{\rm line}(k_{i},z)/\sigma^{2}_{\rm line}(k_{i},z)\,. \tag{12}\]
By employing these equations, we can effectively forecast the detectability of the CII 158 and OIII 88 spectral lines, taking into account the white noise as the sole source of noise in our analysis. This allows us to estimate the signal-to-noise ratio for the EoR-Spec experiment on FYST and assess the overall feasibility of detecting these spectral lines.
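The forecast chain of Equations (9), (11), and (12) reduces to a few lines of code; the sketch below uses a toy power-law signal and flat white noise purely for illustration.

```python
import numpy as np

def snr_cumulative(k, p_line, p_noise, v_surv):
    """Cumulative S/N from Eqs. (9), (11), and (12).
    k: bin centers in h/cMpc; p_line, p_noise: power spectra in matching
    units; v_surv: survey volume in (cMpc/h)^3."""
    dk = np.gradient(k)                           # approximate bin widths
    n_m = k**2 * dk * v_surv / (2.0 * np.pi**2)   # modes per bin, Eq. (9)
    var = (p_line + p_noise) ** 2 / n_m           # variance, Eq. (11)
    return np.sqrt(np.sum(p_line**2 / var))       # Eq. (12)

# toy example with an assumed power-law signal and flat white noise:
k = np.logspace(-1, 1, 10)
snr = snr_cumulative(k, p_line=1e3 * k**-1.5,
                     p_noise=5e2 * np.ones_like(k), v_surv=7e7)
```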
Figure 8 presents the detectability of the CII 158 power spectrum at different redshifts corresponding to the frequency coverage of the EoR-Spec experiment on FYST. The plot compares the CII 158 power spectra between the halo model and the simulation, demonstrating the potential of EoR-Spec to detect the CII 158 signal at different redshifts. The halo mass function used to generate the halo catalogue is the Sheth-Tormen mass function, while we use the Tinker mass function for the halo model approach. The figure also displays the error bars for the CII 158 lines forecasted with the halo model approach for the different frequency bands of the EoR-Spec. Our analysis indicates that an EoR-Spec-like experiment will be able to detect the CII 158 signal at more than 350 \(\sigma\), summed over ten bins in the range \(k_{\rm min}\sim 0.1\,h/c{\rm Mpc}\) to \(k_{\rm max}\sim 10\,h/c{\rm Mpc}\), at \(z\sim 5.8\). For the other frequency channels, the signal-to-noise ratios are 26, 373, and 295 at \(z\sim 7.4\), 4.6, and 3.4, respectively.
Table 2 summarizes the signal-to-noise ratio (SNR) for the detection of CII 158 at various redshifts corresponding to the FYST's EoR-Spec frequency coverage. Given the considerable uncertainty in the amplitude of the power spectrum, we present SNR forecasts for three different scenarios: optimistic, moderate, and pessimistic, representing the largest, median, and weakest expected signals, respectively. The results demonstrate that an EoR-Spec FYST-like experiment has the potential to detect the CII 158 signal with high significance, offering a valuable opportunity to constrain theoretical models of galaxy formation and evolution. By probing the complex interplay between various astrophysical processes, these observations could provide crucial insights into the underlying physical mechanisms driving the growth and evolution of galaxies.
## 6 Conclusion
In the study of galaxy formation and line intensity mapping, uncertainties in astrophysical modelling of line intensity signals and variations in star formation histories within dark matter halos are crucial topics. In
| \(z_{\rm line}\) | optimistic | moderate | pessimistic |
| --- | --- | --- | --- |
| 7.6 | 284 | 67 | 4 |
| 5.8 | 1890 | 460 | 63 |
| 4.4 | 2041 | 524 | 70 |
| 3.6 | 770 | 212 | 28 |

Table 2: Forecasted cumulative signal-to-noise ratio of the CII 158 line at several redshifts for an FYST-like MLIM experiment, without taking the effect of foregrounds into account.
Figure 8: We compare the power spectra of CII 158 lines at four different redshifts between the halo model and simulations. The figure also shows the error bars for CII 158 lines forecasted based on the halo model approach for the different frequency bands of the EoR-Spec on the FYST experiment.
our research, we have developed a package that brings together various models, enabling model comparison and facilitating the elimination of models with MLIM observations. The LIMpy package is a semi-numerical code that allows for the modelling of CII 158, OIII 88, and different CO J-ladder transitions in a single framework. We have implemented several star formation models, including those inferred from analytic prescriptions and state-of-the-art simulations such as IllustrisTNG and UniverseMachine, as well as empirical relations from the abundance matching approach. We assume that the SFR serves as a proxy for line luminosities and include various SFR\(-L_{\rm line}\) scaling relations based on several best-fit models. Our primary objective in this study is to provide a tool for investigating the astrophysical and cosmological information derived from line intensity mapping while taking into account the modelling uncertainties inherent in such an approach. By integrating various models and scaling relations in a single framework, our approach can help interpret the observed MLIM signal.
The LIMpy package not only allows for the modelling of line intensity maps but also enables the exploration of cosmological parameters and their effects on these maps. Halo catalogues generated from various simulations, such as N-body or cosmological hydrodynamical simulations, can be supplied to the code to investigate the impact of these parameters on the line emissions. The code efficiently paints halos with line emissions based on the SFR and line luminosity models, making it an ideal tool for performing MCMC analyses on simulations to be constrained by MLIM observations. Furthermore, the simulated maps produced by LIMpy can be used to determine optimal statistics for analyzing observed MLIM data. For instance, future studies may employ the voxel intensity distribution (VID) to analyze the simulated multi-line intensity maps produced by the package (Ihle et al., 2019; Breysse, 2022; Sato-Polito et al., 2021). Overall, LIMpy provides a versatile and powerful tool for studying both astrophysical and cosmological parameters at the map level and can facilitate current and future MLIM observations.
The CII 158 line emission, which exhibits a large scatter, raises critical questions about interpreting the MLIM signal at these redshifts during observations. The uncertainties in the astrophysical parameters pose a challenge in accurately constraining the properties of the intergalactic medium and galaxy populations responsible for the observed signals. Thus, low-noise MLIM observations are crucial to obtaining a robust analysis of the data and constraining the astrophysical parameters. This highlights the need for improving the modelling of the CII 158 power spectrum and other line intensities to maximize the scientific returns of MLIM experiments.
The high signal-to-noise ratio attainable through an EoR-Spec-like experiment for the CII 158 signal makes it an ideal tool for probing the parameters associated with the reionization process, such as the ionized bubble size and mean free path of photons. These observations could be used to reconstruct the luminosity function of galaxies and provide insight into the ionizing sources. Moreover, the frequency overlap of the EoR-Spec allows for the inter-line cross-correlation of CII 158 (220 GHz) with OIII 88 (410 GHz) to obtain a snapshot of the Universe at \(z\sim 7.4\). However, this analysis is subject to potential issues such as interloper contamination and the effect of beam convolution, which we neglected in our forecasts. Consequently, while our forecasts for detecting CII 158 are optimistic, further work is required to account for these factors and obtain a more accurate estimation of the detectability of the CII 158 signal.
However, the rotational emissions from CO molecules at low redshifts dominate over the CII 158 emissions from high redshifts, presenting a significant obstacle to studying the reionization epoch using CII 158. Additionally, extragalactic foregrounds, such as broadband emission from the cosmic infrared background (CIB), contribute to the contamination. To minimize contamination, linear combinations of maps reconstructed from different channels can be used. This approach can significantly reduce the bias for CII 158 detections. However, to obtain more realistic predictions, future work will explore the bias due to interlopers as well as instrumental and atmospheric noise, and will build estimators to recover the signal in their presence.
## 7 Acknowledgements
AR would like to thank Anthony Challinor, Steve Choi, Andrea Lapi, Dominik Reichers, and Gordon Stacey for helpful discussions. AR is partially supported by the CCAT-prime collaboration. DV acknowledges the REU program through the CCAPS at Cornell University under the NSF award NST/AST-1950324. NB acknowledges support from NSF grant AST-1910021 and NASA grants 21-ADAP21-0114 and 21-ATP21-0129. AvE acknowledges support from NASA grants 22-ADAP22-0149 and 22-ADAP22-0150.